I realize that this is old, but I happened to see it today.

Since the problem does not say you have to implement the cache from scratch, it's worth noting that Java already has a Least Recently Used cache data structure: LinkedHashMap. So you could just say

import java.util.LinkedHashMap;
import java.util.Map;

class LRUCache<K, V> extends LinkedHashMap<K, V> {

    protected final int LIMIT;

    public LRUCache(int capacity) {
        // LinkedHashMap(int initialCapacity, float loadFactor, boolean accessOrder)
        super(2 * capacity, .75f, true);
        LIMIT = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called after each insertion; evict once the map grows past the capacity.
        return size() > LIMIT;
    }

}

The true passed as the accessOrder argument in the super constructor makes the map order its entries by most recent access instead of by insertion. I doubled the initial capacity to keep the map sparser, which should mean fewer hash collisions.
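For example (just a quick sketch of the effect, using a hypothetical demo class), with a capacity of two, touching a key with get moves it to the back of the eviction order, so inserting a third entry evicts key 2 rather than key 1:

// Hypothetical demo class, assuming the generic LRUCache<K, V> above.
public class LRUCacheDemo {

    public static void main(String[] args) {
        LRUCache<Integer, String> cache = new LRUCache<>(2);

        cache.put(1, "one");
        cache.put(2, "two");
        cache.get(1);           // key 1 is now the most recently used
        cache.put(3, "three");  // over capacity, so the eldest entry (key 2) is evicted

        System.out.println(cache.keySet()); // prints [1, 3]
    }

}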

Now you don't have to implement any additional logic for ProcessNode; the map deletes old entries for you automatically. Under the hood, LinkedHashMap has almost the same implementation as your original, with a few differences. If you really want, you could migrate your original behavior into this class, or swap out the HashMap in your original for a LinkedHashMap, like this:

import java.util.LinkedHashMap;
import java.util.Map;

class LRUCache {

    private final Map<Integer, Integer> processMap;

    public LRUCache(int capacity) {
        // An anonymous subclass can't declare its own constructor, so the
        // LinkedHashMap arguments go here and capacity is captured from the
        // enclosing (effectively final) constructor parameter.
        processMap = new LinkedHashMap<Integer, Integer>(2 * capacity, .75f, true) {

            @Override
            protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
                return size() > capacity;
            }

        };
    }

    public int get(int key) {
        Integer value = processMap.get(key);

        if (value == null) {
            return -1;
        }

        return value;
    }

    public void put(int key, int value) {
        processMap.put(key, value);
    }

}

A remove method would also get easier, but it doesn't seem to be required by the problem statement, so you could simply leave it out.
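If you ever did need it, it would just delegate to the map. A quick sketch of such a method (inside the wrapper class above, and only hypothetical since the problem doesn't ask for it):

    // Hypothetical: only needed if the problem statement required deletion.
    public void remove(int key) {
        processMap.remove(key);
    }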
