FigureShapes are used for objects with geometries more complex than rectangles. Detecting a click on a rectangle is easy; I want to detect a click on the filled part of a FigureShape.
I’m using only closed filled shapes made up of straight lines between points, and I’ve extended Object2D with Height and Width properties for FigureShape, so for any such shape I have a box within which to look for mouse events when the object is drawn on a canvas. So far so good.
I can check the color of a pixel clicked on using an RGBSurface, but it’s really inefficient and unreliable. It doesn’t work at all for many shades of grey.
Public Function PixelMatchesColor(c As Color) As Boolean
  // a rather inefficient way to see what color pixel is being clicked on,
  // which does not work with some shades of grey
  Dim p As New Picture(Me.Width, Me.Height)
  Me.DrawInto(p.Graphics, 0, 0)
  Dim rgbs As RGBSurface = p.RGBSurface
  Return (rgbs.Pixel(lastMouseX, lastMouseY) = c)
End Function
When many FigureShapes are drawn in puzzle-like configurations, the idea of looking inside rectangles for a clicked pixel makes almost no sense. What I need from the FigureShape is sprite-like behaviour. Whatever is filled is clickable. Whatever is transparent is not clickable.
I know there are formulas for calculating whether a point is inside a polygon. Is there something similar for arbitrary complex 2D geometries? Or other ideas? This seems like a pretty basic 2D graphics problem that other people must have solved eons ago, right?
One thing I thought of is for each FigureShape to hold an array of non-overlapping rectangles corresponding to the inside boundaries of the shape, like a low-res raster image of the shape. When a FigureShape is created, the array would be created. Those rectangles could then be searched in the normal way. Seems reasonable enough, but building it would be slow, and I don't think it would scale to thousands of shapes. I need to be able to track up to 2048 of these shapes.
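The low-res raster idea above can be sketched as a boolean occupancy grid built once when the shape is created, so that a later hit test is a single array lookup. This is a Python sketch under my own naming (`RasterizedShape`, `cell`, etc. are illustrative, not from the original Xojo code), using a standard ray-casting inside test to fill the grid:

```python
def point_in_polygon(x, y, pts):
    """Even-odd ray casting test for a closed polygon given as (x, y) tuples."""
    inside = False
    n = len(pts)
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

class RasterizedShape:
    """Coarse raster of a filled polygon: built once, then O(1) per hit test."""

    def __init__(self, pts, cell=4):
        self.cell = cell
        xs = [p[0] for p in pts]
        ys = [p[1] for p in pts]
        self.x0, self.y0 = min(xs), min(ys)
        w = int((max(xs) - self.x0) // cell) + 1
        h = int((max(ys) - self.y0) // cell) + 1
        # sample each cell centre once, at creation time
        self.grid = [[point_in_polygon(self.x0 + (cx + 0.5) * cell,
                                       self.y0 + (cy + 0.5) * cell, pts)
                      for cx in range(w)] for cy in range(h)]

    def hit(self, x, y):
        cx = int((x - self.x0) // self.cell)
        cy = int((y - self.y0) // self.cell)
        if cy < 0 or cy >= len(self.grid) or cx < 0 or cx >= len(self.grid[0]):
            return False
        return self.grid[cy][cx]

tri = RasterizedShape([(0, 0), (40, 0), (0, 40)])
print(tri.hit(5, 5))    # True: inside the filled part
print(tri.hit(39, 39))  # False: inside the bounding box but outside the triangle
```

The per-shape build cost is what makes this approach heavy for thousands of shapes, as noted above; the hit-picture approach discussed in the replies pays that cost once for the whole scene instead.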
Unless you use maths, the easiest method, and the one you allude to, is creating an off-screen image for hit detection.
If you create your live image with 3 overlapping pentagons, with pretty graphical effects, shading, etc., then you create the same image on your off-screen hit-detection image. Each of the 3 pentagons would have a unique colour that you could check for, making the total number of uniquely detectable objects 256 × 256 × 256 (16,777,216, or thereabouts) just using 8-bit RGB.
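The arithmetic above amounts to packing an object index into a 24-bit RGB value: draw each object on the hit image in its packed colour, then unpack the colour read at the clicked pixel. A minimal sketch; the helper names are illustrative, not from any Xojo API:

```python
def id_to_rgb(obj_id):
    """Pack an object index (0 .. 16_777_215) into an (r, g, b) triple."""
    return ((obj_id >> 16) & 0xFF, (obj_id >> 8) & 0xFF, obj_id & 0xFF)

def rgb_to_id(r, g, b):
    """Recover the object index from the colour read at the clicked pixel."""
    return (r << 16) | (g << 8) | b

# round-trips for any of the 256 * 256 * 256 = 16,777,216 possible ids
assert rgb_to_id(*id_to_rgb(0)) == 0
assert rgb_to_id(*id_to_rgb(12345)) == 12345
assert rgb_to_id(*id_to_rgb(16_777_215)) == 16_777_215
```

The recovered index can then be used directly as a lookup into the array of drawn objects.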
If you combine this with render clipping, where you only update the two images where something actually changes rather than repainting the whole image, then there will be very little performance impact.
I don’t know if you need to go the separate-image-per-poly route, as you will only be returning the topmost hit poly, right?
I know others on the forum have successfully used this method for custom canvas based hit detection, someone might post up some code.
As long as you render rear to front, the mouse will always hit the topmost poly that the user can see.
Your off-screen image uses a single predefined colour for each poly. They don’t have to be the same as the on-screen image. You should be able to define a series of “safe” colours.
I don’t know why it doesn’t work, but I know that pixel colour info at lower levels on the Mac needs to be translated according to colour profiles, and I guess it may have to do with that.
I may be too dense to follow this. A color that is clicked on is compared with the color of the FigureShape. If I compare the color clicked on with some different color than the one displayed for the shape, I’ll get a different result, right?
No pixels were hurt during the hit detection, i.e. an off-screen image shouldn’t be, and isn’t, affected by colour profiling. Tested on my Mac, and the triangles came back as 1, 2, 3 as expected.
Thanks for the sample project. Now I understand what Tim Hare was saying. As long as there is a 1:1 relation between on-screen and off-screen colors, it makes sense.
But, I’m still not sure this does what I need. It won’t work when differently shaped objects having the same colors overlap.
I’m translating some code that checks for line intersection, to see how well a ray-casting algorithm might work instead. I think it’s what I need.
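For reference, the ray-casting (even-odd) test works by casting a ray from the query point and counting how many polygon edges it crosses: an odd count means the point is inside, which handles concave shapes too. A minimal Python sketch, with a purely illustrative concave polygon:

```python
def point_in_polygon(x, y, pts):
    """Even-odd ray casting: cast a ray to the right and count edge crossings.

    pts is a closed polygon given as a list of (x, y) vertices; the closing
    edge from the last vertex back to the first is implied.
    """
    inside = False
    n = len(pts)
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        # does the edge straddle the horizontal line through (x, y)?
        if (y1 > y) != (y2 > y):
            # x coordinate where the edge crosses that horizontal line
            x_cross = (x2 - x1) * (y - y1) / (y2 - y1) + x1
            if x < x_cross:
                inside = not inside
    return inside

# concave "arrowhead" polygon, purely illustrative
arrow = [(0, 0), (60, 20), (0, 40), (20, 20)]
print(point_in_polygon(30, 20, arrow))  # True: inside the filled part
print(point_in_polygon(5, 20, arrow))   # False: inside the concave notch
```

Note this is a per-shape O(number of edges) test on every click, so with up to 2048 shapes it would still need a bounding-box pre-check, whereas the hit-picture lookup discussed above is a single pixel read regardless of shape count.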
Objects on the hit picture never have the same colour. Every object has its own unique colour on the hit picture, so you can use the colour as a lookup back to your original object.
You can literally have a million different objects on the hit picture, and the hit detection speed will be exactly the same as having one object, because every object you place on the hit picture has its own unique colour that you can look up.