Coding Math: Episode 21 – Bitmap Collision Detection

This is Coding Math Episode 21, Bitmap Collision
Detection. A while back, we covered various forms of
collision detection, and I believe I made a promise to cover collision detection with
bitmaps at some point in the future. Well, the future is now. First of all, what exactly is a bitmap? At
its most basic, it’s a rectangular grid of values that are used to set the colors of
pixels in a rectangular portion of the screen. Point 0, 0 in that grid would be the top left
pixel and the point represented by bitmap.width – 1, bitmap.height – 1 would be the bottom
right pixel. Every pixel in the bitmap is addressable by an x, y coordinate and in many
systems you can get the pixel value at that location with a function usually called something
like getPixel(x, y) and set its value with a function like setPixel(x, y, color). Generally,
the pixel value can be broken down into red, green, blue and alpha channels. If only Canvas were so simple. But we’ll get
to that shortly. The simplest strategy in bitmap collision
detection is to take an empty, transparent bitmap and then draw a shape in it. Now, all
the pixels in that bitmap will have an alpha channel value of 0, except those where the
shape is drawn. Now say you have a particle moving along here. It’s currently at a position
of some x, y and you want to know if it hit this object you’ve drawn. Well, you get the
pixel value at that x, y point, and if the alpha value there is greater than zero, your
particle has collided with the shape. Now of course, that’s only really useful for
point-to-shape collisions. But it’s something to get started with, anyway. Now, the problem in HTML5 is that Canvas doesn’t
have a simple getPixel or setPixel function. What you have to do is call getImageData on
the context. This returns an ImageData object. This imageData object has three properties:
width, height and data. Width and height are obvious, and data is a special type of Array
containing the pixel values. As an important note, you should know that
the imageData you get from a context is a snapshot, not a live view. If you draw something else to the canvas after
getting that image data, clear it, or in any other way change that bitmap, the imageData
you have will no longer be a current representation of what you see on the canvas. You’ll have
to call getImageData again to get the updated pixel data from the bitmap. Now you might guess that this data property
that holds the pixel values is a two-dimensional array. And you might expect that you could
say something like imageData.data[x][y] to get the pixel value at location x, y. But
that would be wishful thinking. The data property is actually a one-dimensional
array and it goes something like this. Element 0 is the red value of pixel 0, 0.
Element 1 is the green value of pixel 0, 0. Element 2 is the blue value of that pixel
and element 3 is the alpha value. These are all in the range of 0 to 255. Then, element 4 is the red value of the next pixel, located at x=1, y=0, and so on. So you have a one-dimensional array with four elements
for every pixel. Therefore, if you want to know the color value at any particular x,
y position, you’ve got some math to do. If each pixel took up one element in the array, you would find the index of a particular x, y pixel by saying:

index = imageData.width * y + x

So if your bitmap was 100 pixels wide and x was 10 and y was 5, you’d have 100 * 5, which is 500, plus x, which is 10, so you’d be at index 510 in the one-dimensional array. But since each pixel takes up four elements, you have to multiply all that by four. So…

index = ((imageData.width * y) + x) * 4

So in the above example, you’d be at index 2040. That index would point to the red value of the pixel at 10, 5. Then, you can get the other channel values using offsets to that index:

red = imageData.data[index]
green = imageData.data[index + 1]
blue = imageData.data[index + 2]
alpha = imageData.data[index + 3]

We could make a getPixel function that encapsulated all that, but let’s hold off for a moment. First let’s take a closer look at the syntax
for getting image data from a context. It turns out that you don’t have to get all of
the image data for the entire canvas. When you call getImageData, you pass in the x,
y, width and height coordinates of a rectangle that you’d like to get the image data for.
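As an aside, the index math from earlier is all you need for the getPixel function we talked about. Here is a minimal sketch, assuming you have already fetched an ImageData object; getPixel is my own name for it, not a built-in Canvas function:

```javascript
// Sketch of a getPixel helper (not part of the Canvas API).
// Takes an ImageData object you have already fetched and returns the
// channel values of the pixel at (x, y) within that ImageData.
function getPixel(imageData, x, y) {
  // Four array elements per pixel: red, green, blue, alpha.
  var index = (imageData.width * y + x) * 4;
  return {
    red: imageData.data[index],
    green: imageData.data[index + 1],
    blue: imageData.data[index + 2],
    alpha: imageData.data[index + 3]
  };
}
```

So getPixel(imageData, 10, 5) on a 100-pixel-wide ImageData reads from index 2040, matching the arithmetic worked out above.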
If you passed in 0, 0, canvas.width, canvas.height, yes, you’d get all the image data for the
whole context. But you can get a sub-rectangle of the canvas as well. Now it turns out that setting and getting
imageData from a canvas’s context is pretty expensive in terms of processing time. And the
more data you set or get, the longer it takes. So you should really only get what you need.
And only when you need it. So, we could go ahead and get the image data for a single pixel by saying:

imageData = context.getImageData(x, y, 1, 1);

That defines a one-pixel rectangle at the
specified x, y. Then, our red, green, blue and alpha channels
for that pixel would be in imageData.data at indexes 0, 1, 2 and 3. That’s certainly a lot simpler and only involves
getting the data for a single pixel, rather than all the pixels. But remember the other
part of what I just said, you should only get image data when you need it. Using this
method, you’d have to call getImageData every time you wanted to check a pixel. If you know
that the bitmap is not going to change, it might be more efficient to get the image data
for the whole thing one time, right up front, and keep it around and do your hit testing
on that. But if there’s any chance that the bitmap
could change, then you’d need to call getImageData each time anyway to make sure you had the
latest data. So you’d be better off just getting the single pixel each time. These are just things to consider when you
use this. I’m going to go with the single pixel method for the next example. Anyway, enough theory, let’s get coding and
see how you can use all this to do collision detection. I’m going to start with the usual template.
But I’m going to go into the HTML and add a second canvas called target. I’m also going
to add a few styles to the canvas CSS. I’ll set it to have position absolute and top and
left both 0 px. This will put both canvases in the same position, one completely overlaying
the other. I’m also going to add a background image to the page, just to make it a bit more
dramatic. In the main.js file I’ll get that target canvas
and its context, storing those in variables targetCanvas and targetContext. And I’ll make
sure targetCanvas is the same size as the original canvas. Then I’ll make a particle, set it on the left
edge of the screen, and give it a velocity of 10 pixels per frame at an angle of zero. Next, I’m going to draw a large circle in
the center of the screen. Note that I’m drawing this on the targetContext. If we run this
now, this is what we’ll see… So a picture of the earth with a big black
circle in front of it. This circle is the bitmap that we will be doing our hit testing
on. The other canvas we’ll use for displaying the particle. This lets us isolate the bitmap
that we’re doing the hit testing on, so we don’t have to worry about the pixels that
are used to draw the particle itself and anything else we might want to draw, like an explosion,
other objects, effects, ui, etc. We’ll only be hit testing against that circle. To get things moving, I’ll call an update
function… and then I’ll create that function. This is first going to clear the main canvas.
It will then update the particle and draw it on the main canvas. Then it’s going to
call getImageData on the target context, but using the particle’s x and y. It’s just going
to get that single pixel. Then we’re going to check if imageData.data
index 3 is greater than zero. Again, index 3 here will be the alpha channel of that pixel.
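That test can be written as a tiny predicate; pixelIsHit here is an assumed name for illustration, not code from the video:

```javascript
// Sketch of the hit test: given a one-pixel ImageData (from
// getImageData(x, y, 1, 1)), the particle has hit something if the
// alpha channel (index 3 of the four-element pixel) is above zero.
function pixelIsHit(imageData) {
  return imageData.data[3] > 0;
}
```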
If it’s completely transparent, as most of the pixels will be, that will be zero. If
it’s not transparent, it will be something greater than 0. So if we get into this if
statement, we know that the particle has hit the circle. Now, what we do next is a pretty neat trick.
I want to make it look like the particle has blown a hole out of that big circle. To do
that, I’ll draw a 20 pixel radius circle at that point on the target context. But first
I’m going to set globalCompositeOperation to “destination-out”. globalCompositeOperation
affects how new drawing operations interact with existing content in the canvas. The default
that you’re used to is to draw the new shape right on top of whatever is there, but there
are many more options. I suggest you do a search on that one, because there are some
pretty useful and generally not-so-well-known things there. The destination-out operation
results in the shape you are drawing being chopped out of any existing content, like
a cookie cutter. It basically acts like an eraser, clearing all the pixels in the shape
you drew, rather than setting them. We’re drawing an arc, so the result is a hole in
that target circle right where we found a collision. If we do get a collision, I’ll call resetParticle.
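Put together, the collision branch might look roughly like this. This is a sketch under my own naming; the punchHole helper and the exact variable names are assumptions, not necessarily the video’s code:

```javascript
// Sketch of the "cookie cutter" step: erase a circular hole from the
// target bitmap using destination-out compositing.
function punchHole(ctx, x, y, radius) {
  ctx.globalCompositeOperation = "destination-out";
  ctx.beginPath();
  ctx.arc(x, y, radius, 0, Math.PI * 2);
  ctx.fill();
  ctx.globalCompositeOperation = "source-over"; // restore the default
}

// In the update loop, the collision branch would then be something like:
// var imageData = targetContext.getImageData(p.x, p.y, 1, 1);
// if (imageData.data[3] > 0) {
//   punchHole(targetContext, p.x, p.y, 20);
//   resetParticle();
// }
```

Restoring globalCompositeOperation to "source-over" afterwards matters; otherwise every later draw on that context would keep erasing instead of painting.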
This will put the particle back on the left edge and give it a somewhat randomized heading. And, if the particle goes off the right edge
of the screen, I’ll reset it as well. Other than that, all that’s left to do is
call requestAnimationFrame, passing in update, so it continues to be called. Let’s see what happens! Well, we have our circle in the center and
the particle shoots out from the left and hits it, leaving a big gaping hole. Next particle
comes out, does the same thing. Notice that as time goes on, big chunks are getting torn
out of the circle. Now if we were just going to deal with an
unbroken circle, we could use the circle point collision function we created a few weeks
back. But here, before long, we have a very irregular shape that it would be extremely
difficult to do collision detection on in any way other than using the image data. Of course, different systems and languages
will have different capabilities. Flash, for example, has a really amazing bitmap hit test
system. You can even test two irregular shapes in separate bitmaps against each other. If
any non-transparent pixel from one touches any non-transparent pixel in another, you’ll
get a hit. Canvas has some cool features coming soon
that will help with more complex hit testing. In fact I read that some of them have even
been released in some version of Chrome this past week, but it will be a while before they
are widely available in all browsers and platforms. In the meantime, I’m sure you can make lots
of use out of this little technique. See you next week.

12 thoughts to “Coding Math: Episode 21 – Bitmap Collision Detection”

  1. 3:40 Why on Dog's green lawn is it set up like this? Is it faster in any way?

    12:08 a particle passes through some of the bitmap here. (I understand it's because of the frame rate; the particle is just skipping over it. I'm just pointing it out.)

    How bad a performance hit would it be if you took out an array of pixels corresponding to those your particle passed through since the last frame (an array of pixels forming a line between those points), then ran a loop over those with the collision/alpha detection, breaking out of the loop and registering a hit on the first pixel the particle would have hit?

  2. I hope he covers something about how to handle the particle positions when collision is detected, to allow them to move around or figure out which direction to bounce away from the collision bitmap.

  3. Is it possible to do this without calling getImageData? For example, if I want to test the collision between a point and a rotated rectangle (with context.rotate())?
