G. Sliepen

Allow capacity to be zero

Instead of disallowing the capacity to be zero, just handle it. It's a simple change in the check for a full cache:

…
else if (m_lookup.size() == m_capacity) {
    if (m_capacity == 0) [[unlikely]] {
        return;
    } else {
        m_lookup.erase(m_cache.back());
        m_cache.pop_back();
    }
}
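To see the guard in context, here is a minimal, hypothetical sketch of a complete insert path built around the snippet above. The member names (`m_cache`, `m_lookup`, `m_capacity`) follow the review; the class name, the `insert()` signature, and storing key/value pairs in the list are assumptions for illustration only:

```cpp
#include <cstddef>
#include <list>
#include <unordered_map>
#include <utility>

// Hypothetical minimal LRU cache showing the zero-capacity guard in context.
template <typename Key, typename Value>
class LruCache {
public:
    explicit LruCache(std::size_t capacity) : m_capacity(capacity) {}

    void insert(const Key& key, const Value& value) {
        if (auto it = m_lookup.find(key); it != m_lookup.end()) {
            // Key already present: move it to the front and update its value.
            m_cache.splice(m_cache.begin(), m_cache, it->second);
            it->second->second = value;
            return;
        } else if (m_lookup.size() == m_capacity) {
            if (m_capacity == 0) [[unlikely]] {
                return;  // A zero-capacity cache silently drops all inserts.
            }
            // Evict the least recently used entry.
            m_lookup.erase(m_cache.back().first);
            m_cache.pop_back();
        }
        m_cache.emplace_front(key, value);
        m_lookup[key] = m_cache.begin();
    }

    std::size_t size() const { return m_lookup.size(); }

private:
    std::list<std::pair<Key, Value>> m_cache;
    std::unordered_map<Key,
        typename std::list<std::pair<Key, Value>>::iterator> m_lookup;
    std::size_t m_capacity;
};
```

With this, `LruCache<int, int> cache(0); cache.insert(1, 10);` leaves the cache empty rather than throwing or asserting.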

Copy the interface of std::unordered_map

An LRU cache is basically an unordered map, with the only differences being a limited capacity and a replacement policy for when an insertion would exceed it. So I would recommend you implement the exact same interface that std::unordered_map has. This will have some benefits:

  • C++ programmers are already familiar with this interface
  • It will be a drop-in replacement for std::unordered_map
  • You can use lots of other standard library features that work with unordered maps on your LRU

To get access to the elements in the LRU order, I would make sure begin(), end() and related functions use the underlying std::list, and you could add front() and back() as well.

So then your API will become:

  • insert(const std::pair<Key, Value>&), emplace(auto&&... args), try_emplace(const Key&, auto&&... args) etc. instead of insertKeyValuePair()
  • at(const Key&), operator[](const Key&) and find(const Key&) instead of getValueFromKey()
  • *begin() and front() instead of getMostRecentKey()
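A skeleton of that renamed API could look like the following sketch. All names are illustrative, eviction and promotion-to-front on access are elided (marked in comments) to keep it short, and only a few of the std::unordered_map members are shown:

```cpp
#include <cstddef>
#include <list>
#include <stdexcept>
#include <unordered_map>
#include <utility>

// Hypothetical skeleton mirroring the std::unordered_map interface.
// Eviction and LRU promotion are elided; this only shows the renamed API.
template <typename Key, typename T>
class lru_cache {
public:
    using key_type = Key;
    using mapped_type = T;
    using value_type = std::pair<Key, T>;
    using iterator = typename std::list<value_type>::iterator;

    explicit lru_cache(std::size_t capacity) : m_capacity(capacity) {}

    // Iteration runs over the underlying std::list, from most to least
    // recently used.
    iterator begin() { return m_cache.begin(); }
    iterator end() { return m_cache.end(); }
    value_type& front() { return m_cache.front(); }  // most recent entry
    value_type& back() { return m_cache.back(); }    // next eviction victim

    std::pair<iterator, bool> insert(const value_type& kv) {
        if (auto it = m_lookup.find(kv.first); it != m_lookup.end()) {
            return {it->second, false};
        }
        // ... eviction when m_lookup.size() == m_capacity goes here ...
        m_cache.push_front(kv);
        m_lookup[kv.first] = m_cache.begin();
        return {m_cache.begin(), true};
    }

    iterator find(const Key& key) {
        auto it = m_lookup.find(key);
        return it == m_lookup.end() ? end() : it->second;
    }

    T& at(const Key& key) {
        auto it = m_lookup.find(key);
        if (it == m_lookup.end()) throw std::out_of_range("lru_cache::at");
        return it->second->second;
    }

    T& operator[](const Key& key) {
        return insert({key, T{}}).first->second;
    }

private:
    std::list<value_type> m_cache;
    std::unordered_map<Key, iterator> m_lookup;
    std::size_t m_capacity;  // enforced by the elided eviction logic
};
```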

Efficiency

With std::unordered_map and std::list you have \$O(1)\$ time complexity, but there can be quite a bit of overhead because every node in the map and the list will be allocated separately. It would be more efficient if you could just allocate one block of memory for the map and the list. You could implement something like std::flat_map but augmented to maintain the priority list as well.
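One step in that direction is to keep all nodes in a single std::vector and replace the list's pointers with indices, so the recency chain costs no extra allocations at all. The sketch below is a simplified, hypothetical illustration of that idea (all names are made up, and the lookup map here still allocates per node; a full flat design would replace it too):

```cpp
#include <cstddef>
#include <unordered_map>
#include <vector>

// Sketch: nodes live in one contiguous std::vector, and the LRU order is a
// doubly linked list threaded through them with indices instead of pointers.
template <typename Key, typename Value>
class FlatLru {
    struct Node {
        Key key;
        Value value;
        std::size_t prev;
        std::size_t next;
    };
    static constexpr std::size_t npos = static_cast<std::size_t>(-1);

public:
    explicit FlatLru(std::size_t capacity) : m_capacity(capacity) {
        m_nodes.reserve(capacity);  // one block of memory for all nodes
    }

    void insert(const Key& key, const Value& value) {
        if (m_capacity == 0) return;
        if (auto it = m_index.find(key); it != m_index.end()) {
            m_nodes[it->second].value = value;
            move_to_front(it->second);
            return;
        }
        std::size_t slot;
        if (m_nodes.size() < m_capacity) {
            slot = m_nodes.size();
            m_nodes.push_back({key, value, npos, m_head});
        } else {
            slot = m_tail;  // recycle the least recently used slot in place
            m_index.erase(m_nodes[slot].key);
            detach(slot);
            m_nodes[slot] = {key, value, npos, m_head};
        }
        if (m_head != npos) m_nodes[m_head].prev = slot;
        m_head = slot;
        if (m_tail == npos) m_tail = slot;
        m_index[key] = slot;
    }

    bool contains(const Key& key) const { return m_index.count(key) != 0; }
    std::size_t size() const { return m_index.size(); }

private:
    void detach(std::size_t i) {
        if (m_nodes[i].prev != npos) m_nodes[m_nodes[i].prev].next = m_nodes[i].next;
        else m_head = m_nodes[i].next;
        if (m_nodes[i].next != npos) m_nodes[m_nodes[i].next].prev = m_nodes[i].prev;
        else m_tail = m_nodes[i].prev;
    }

    void move_to_front(std::size_t i) {
        if (i == m_head) return;
        detach(i);
        m_nodes[i] = {m_nodes[i].key, m_nodes[i].value, npos, m_head};
        if (m_head != npos) m_nodes[m_head].prev = i;
        m_head = i;
        if (m_tail == npos) m_tail = i;
    }

    std::vector<Node> m_nodes;
    std::unordered_map<Key, std::size_t> m_index;
    std::size_t m_capacity;
    std::size_t m_head = npos;
    std::size_t m_tail = npos;
};
```

Once full, this never allocates or frees nodes; evicted slots are reused in place, which is friendlier to the allocator and to the cache lines holding the nodes.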

Using std::optional

Unfortunately std::optional was very late to the game, so the standard containers have used different ways of signaling whether an element could be retrieved or not. However, if you do want that, I suggest just creating a stand-alone function that gives you the desired interface but which can work on any container, for example:

template<typename Container, typename Key>
std::optional<typename Container::mapped_type> get(const Container& container, const Key& key) {
    if (auto it = container.find(key); it != container.end()) {
        return it->second;
    } else {
        return std::nullopt;
    }
}
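A usage sketch of that helper with a plain std::unordered_map (reproduced here self-contained; note that C++17 requires `typename` before the dependent `Container::mapped_type`, while C++20 lets you omit it):

```cpp
#include <optional>
#include <string>
#include <unordered_map>

// Same free function as above, with the `typename` C++17 requires.
template <typename Container, typename Key>
std::optional<typename Container::mapped_type>
get(const Container& container, const Key& key) {
    if (auto it = container.find(key); it != container.end()) {
        return it->second;
    }
    return std::nullopt;
}

// Works on std::unordered_map today, and on the LRU cache as soon as it
// exposes find(), end() and mapped_type:
std::optional<int> lookup_demo() {
    std::unordered_map<std::string, int> ages{{"alice", 30}};
    return get(ages, std::string("bob"));  // empty optional: no exception,
                                           // no accidental insertion
}
```

Unlike operator[], a missing key neither throws nor inserts a default-constructed value; the caller just gets an empty optional.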

Pick one version of C++

I’m implementing a templated LRU cache in C++17 and C++20.

That doesn't make much sense. You want to pick one version of C++ and stick with that. Either you pick a very recent version because you get a lot of nice features, or you pick an older version to allow for more compatibility. For libraries that are potentially going to be used by many other projects, I would choose the latter.
