When you're dealing with spatial indexing problems, I actually recommend starting off with a spatial hash or my personal favorite: the plain old grid.

[image: a plain old grid]

... and understand its weaknesses first before moving to tree structures that allow sparse representations.

One of the obvious weaknesses is that you could be wasting memory on a lot of empty cells (though a decently-implemented grid shouldn't require more than 32 bits per cell unless you actually have billions of elements to insert). Another is that if you have moderate-sized elements which are larger than a cell and often span, say, dozens of cells, you can waste a lot of memory inserting those medium-sized elements into far more cells than ideal. Likewise, when you do spatial queries, you might have to check more cells, sometimes far more, than ideal.
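To make that concrete, here's a minimal sketch (my own illustration in C++, not code from any demo mentioned here; `Grid` and `GridNode` are hypothetical names, and it assumes 2D elements with AABBs). Each cell holds just a 32-bit head index into a shared pool of singly-linked nodes, which is how a grid can stay at roughly 32 bits per cell even when most cells sit empty:

```cpp
#include <cstdint>
#include <vector>

// One node per (element, cell) pair, linked through 32-bit indices
// into a shared pool rather than per-cell heap allocations.
struct GridNode {
    int32_t element; // index of the element stored in this cell
    int32_t next;    // index of the next node in this cell's list, or -1
};

struct Grid {
    float cell_size;
    int cols, rows;
    std::vector<int32_t> cell_heads; // one 32-bit head per cell, -1 = empty
    std::vector<GridNode> nodes;     // node pool shared by all cells

    Grid(float cell_size, int cols, int rows)
        : cell_size(cell_size), cols(cols), rows(rows),
          cell_heads(static_cast<size_t>(cols) * rows, -1) {}

    // Insert element `id` into every cell its AABB overlaps. An element
    // spanning many cells gets one node per cell -- exactly the memory
    // cost described above for elements much larger than a cell.
    void insert(int32_t id, float min_x, float min_y, float max_x, float max_y) {
        int x0 = clamp_col(min_x), x1 = clamp_col(max_x);
        int y0 = clamp_row(min_y), y1 = clamp_row(max_y);
        for (int y = y0; y <= y1; ++y)
            for (int x = x0; x <= x1; ++x) {
                int cell = y * cols + x;
                nodes.push_back({id, cell_heads[cell]});
                cell_heads[cell] = static_cast<int32_t>(nodes.size()) - 1;
            }
    }

    int clamp_col(float x) const {
        int c = static_cast<int>(x / cell_size);
        return c < 0 ? 0 : (c >= cols ? cols - 1 : c);
    }
    int clamp_row(float y) const {
        int r = static_cast<int>(y / cell_size);
        return r < 0 ? 0 : (r >= rows ? rows - 1 : r);
    }
};
```

Storing 32-bit indices instead of pointers halves the per-cell cost on 64-bit systems and keeps the node pool contiguous and cache-friendly.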

But the only thing to finesse with a grid to make it as optimal as it can be against a certain input is cell size, which doesn't leave you with too much to think about and fiddle with, and that's why it's my go-to data structure for spatial indexing problems until I find reasons not to use it. It's dirt simple to implement and doesn't ask you to tune anything more than a single runtime input.

You can get a lot out of a plain old grid, and I've actually beaten a lot of quad-tree and k-d tree implementations used in commercial software by replacing them with a plain old grid (admittedly they weren't necessarily the best-implemented ones, but their authors spent a whole lot more time on them than the 20 minutes it took me to whip up a grid). Here's a quick little thing I whipped up to answer a question elsewhere, using a grid for collision detection (not even really optimized, just a few hours of work, and I had to spend most of that time learning how the pathfinding works to answer the question; it was also my first time implementing collision detection of this sort):

[image: grid-based collision detection demo]
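Queries are just as simple as insertion. As a rough illustration (again my own sketch building on the hypothetical `Grid` above, not the demo's actual code), a broad-phase query walks only the cells the query AABB overlaps:

```cpp
// Collect ids of all elements stored in cells overlapping a query AABB.
// This is conservative: candidates still need a narrow-phase test, and
// elements spanning several cells can appear more than once.
std::vector<int32_t> query(const Grid& g, float min_x, float min_y,
                           float max_x, float max_y) {
    std::vector<int32_t> out;
    int x0 = g.clamp_col(min_x), x1 = g.clamp_col(max_x);
    int y0 = g.clamp_row(min_y), y1 = g.clamp_row(max_y);
    for (int y = y0; y <= y1; ++y)
        for (int x = x0; x <= x1; ++x)
            for (int32_t n = g.cell_heads[y * g.cols + x]; n != -1;
                 n = g.nodes[n].next)
                out.push_back(g.nodes[n].element);
    return out;
}
```

The duplicates from multi-cell elements are typically handled with a dedup pass or a per-element "last query" stamp.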

Another weakness of grids (though it's a general weakness of many spatial indexing structures) is that if you insert a lot of coincident or overlapping elements, like many points with the same position, they'll all land in the exact same cell(s) and degrade performance when traversing those cells. Similarly, if you insert a lot of massive elements that are far, far bigger than the cell size, they'll want to be inserted into a boatload of cells, using lots of memory and degrading the time required for spatial queries across the board.

However, these two problems with coincident and massive elements are troublesome for all spatial indexing structures, not just grids. The plain old grid actually handles these pathological cases a little better than many others, since at least it doesn't try to recursively subdivide cells over and over.

When you start with the grid and work your way towards something like a quad-tree or k-d tree, the main problems you're trying to solve are elements being inserted into too many cells, storing too many cells overall, and/or having to check too many cells with this type of dense representation.

But if you think of a quad-tree as an optimization over a grid for specific use cases, then it still helps to think in terms of a "minimum cell size" that limits the depth of the recursive subdivision of the quad-tree nodes. When you do that, the worst case of the quad-tree still degrades into the dense grid at the leaves, only less efficiently than the grid, since it takes logarithmic time to work your way from root to grid cell instead of constant time. Yet thinking of that minimum cell size avoids the infinite loop/recursion scenario.

For massive elements there are also some alternative variants, like loose quad-trees, which don't necessarily split evenly and can have AABBs for child nodes that overlap. BVHs are also interesting as spatial indexing structures which don't evenly subdivide their nodes. For coincident elements against tree structures, the main thing is to just impose a limit on subdivision (or, as others suggested, reject them, or find a way to treat them as though they aren't contributing to the unique number of elements in a leaf when deciding whether the leaf should subdivide). A k-d tree might also be useful if you anticipate inputs with a lot of coincident elements, since you only need to consider one dimension when determining whether a node should median-split.
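As a sketch of that depth-capped subdivision (my own illustration; `MAX_DEPTH` and `MAX_ELEMENTS` are hypothetical tuning knobs, not numbers from anything above), a point quad-tree leaf simply refuses to split past the cap, so a pile of coincident points ends up as one big leaf, just like a crowded grid cell, instead of recursing forever:

```cpp
#include <cstddef>
#include <vector>

struct Point { float x, y; };

struct QuadNode {
    float min_x, min_y, max_x, max_y;  // this node's AABB
    std::vector<Point> points;         // stored here while this is a leaf
    QuadNode* children[4] = {nullptr, nullptr, nullptr, nullptr};

    static constexpr size_t MAX_ELEMENTS = 8;
    static constexpr int MAX_DEPTH = 8; // the "minimum cell size" as a depth cap

    void insert(Point p, int depth = 0) {
        if (children[0] == nullptr) {
            points.push_back(p);
            // Subdivide only if over capacity AND still above the depth cap.
            // A leaf full of coincident points just stays a big leaf instead
            // of subdividing forever.
            if (points.size() > MAX_ELEMENTS && depth < MAX_DEPTH)
                subdivide(depth);
            return;
        }
        children[child_index(p)]->insert(p, depth + 1);
    }

    void subdivide(int depth) {
        float cx = (min_x + max_x) * 0.5f, cy = (min_y + max_y) * 0.5f;
        // Raw `new` for brevity; a real tree would own its children
        // (e.g. via std::unique_ptr) and free them on destruction.
        children[0] = new QuadNode{min_x, min_y, cx, cy};
        children[1] = new QuadNode{cx, min_y, max_x, cy};
        children[2] = new QuadNode{min_x, cy, cx, max_y};
        children[3] = new QuadNode{cx, cy, max_x, max_y};
        for (Point q : points)
            children[child_index(q)]->insert(q, depth + 1);
        points.clear();
    }

    int child_index(Point p) const {
        float cx = (min_x + max_x) * 0.5f, cy = (min_y + max_y) * 0.5f;
        return (p.x >= cx ? 1 : 0) + (p.y >= cy ? 2 : 0);
    }
};
```

The depth cap plays the same role as the grid's cell size: it bounds how fine the structure can get no matter what gets inserted.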
