Vision Tracking

We tested RoboRealm today, and it works fairly well, with accuracy to within half a foot.

Vikram, one of our DiscoBots alumni now at Berkeley, wrote up his thoughts on vision tracking!

I just took a quick look at Team 341's vision code from last year, and I thought I'd write up my initial thoughts on using vision data, along with some other ramblings on autonomous. Sorry this turned out a little longer than I intended...

Autonomous
Considering it's a flat field, I'd say the combination of sonars and gyros is the easiest way to get data. Based on our experience in 2011 (which, granted, made more sense with the holonomic drive), this is one of the more straightforward ways to get position data. Although it will probably be harder to implement without the holonomic drive, you can at least get field position and orientation. The MaxBotix sonars we used in 2011 are great since they just give you a distance value. With two mounted at the front and two on a side, you can align to a wall, as sketched below.
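
To make the wall-alignment math concrete, here's a rough sketch using the cRIO-era WPILib AnalogChannel class. The channel numbers, sonar spacing, and volts-per-inch scaling are all assumptions (the LV-MaxSonar-EZ datasheet quotes roughly Vcc/512 volts per inch, about 9.8 mV/in at 5 V, but check your model); treat it as a starting point, not drop-in code.

```java
import edu.wpi.first.wpilibj.AnalogChannel;

// Two MaxBotix sonars mounted a known distance apart along one side of the robot.
public class WallAligner {
    private static final double VOLTS_PER_INCH = 5.0 / 512.0; // assumption: LV-MaxSonar-EZ on 5 V
    private static final double SONAR_SPACING_INCHES = 20.0;  // assumption: mounting distance

    private final AnalogChannel frontSonar = new AnalogChannel(1); // channel numbers are placeholders
    private final AnalogChannel rearSonar  = new AnalogChannel(2);

    private double inches(AnalogChannel sonar) {
        return sonar.getAverageVoltage() / VOLTS_PER_INCH;
    }

    // Positive angle means the front of the robot is farther from the wall than the rear.
    public double angleToWallDegrees() {
        double delta = inches(frontSonar) - inches(rearSonar);
        return Math.toDegrees(Math.atan2(delta, SONAR_SPACING_INCHES));
    }

    // Perpendicular distance to the wall, averaging the two readings.
    public double distanceToWallInches() {
        double average = (inches(frontSonar) + inches(rearSonar)) / 2.0;
        return average * Math.cos(Math.toRadians(angleToWallDegrees()));
    }
}
```

Drive until angleToWallDegrees() reads near zero and you're square to the wall; the gyro then gives you heading relative to the field once you know which wall you squared against.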

Vision Data
All that said, camera data is also an option (more difficult, but more fun and certainly cooler, in my opinion). You can get both the required orientation and distance from a camera; there's a sketch of the orientation half below. I'm not sure what else you'd want to do; another option might be detecting Frisbees (depending on where your camera is mounted, you can probably do some simple object detection).
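
For the orientation half: assuming a simple pinhole camera model, you can turn a detected target's horizontal pixel offset into a bearing. The ~47° field of view and 640-pixel width below are placeholder values (roughly what's quoted for the Axis M1011); measure your own camera.

```java
public class BearingEstimator {
    private static final double H_FOV_DEG = 47.0;   // assumption: measure your camera
    private static final int IMAGE_WIDTH_PX = 640;  // assumption: camera resolution

    // Horizontal angle from the camera's centerline to the target, in degrees.
    // targetCenterX is the x pixel coordinate of the detected target's center.
    public static double bearingDegrees(double targetCenterX) {
        // Focal length in pixels follows from the horizontal field of view.
        double focalPx = (IMAGE_WIDTH_PX / 2.0)
                / Math.tan(Math.toRadians(H_FOV_DEG / 2.0));
        double offsetPx = targetCenterX - IMAGE_WIDTH_PX / 2.0;
        return Math.toDegrees(Math.atan(offsetPx / focalPx));
    }
}
```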

Code
OpenCV is the standard computer vision library that a lot of others are built on top of. Team 341 basically used JavaCV (a wrapper that provides access to OpenCV as well as, it looks like, other CV libraries).

Getting Data
The FRC documentation from 2012 has some good info on the process. Basically, you have some unique feature you want to identify in the image. First, you usually do something to enhance that feature; in this case, since the target is retroreflective, converting the image to grayscale and then thresholding on luminance is one way. After that, the pixels you want to identify as features are differentiated from the rest of the image, and you can usually find something in a library like OpenCV that will detect those features (in this case, rectangles).
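
Here's roughly what that pipeline looks like using OpenCV's Java bindings (not the JavaCV wrapper 341 used, but the calls map over directly). The luminance cutoff of 200 is a guess you'd tune for your camera and ring light.

```java
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;
import java.util.ArrayList;
import java.util.List;

public class TargetFinder {
    // Returns bounding boxes of bright, roughly rectangular blobs in a BGR frame.
    public static List<Rect> findTargets(Mat frame) {
        // Step 1: enhance the feature — grayscale, then threshold on luminance,
        // since a lit retroreflective target shows up near-white.
        Mat gray = new Mat();
        Imgproc.cvtColor(frame, gray, Imgproc.COLOR_BGR2GRAY);
        Mat bin = new Mat();
        Imgproc.threshold(gray, bin, 200, 255, Imgproc.THRESH_BINARY); // cutoff to tune

        // Step 2: detect the features — find contours, keep the ones that
        // simplify to four corners (rectangles).
        List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
        Imgproc.findContours(bin, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

        List<Rect> targets = new ArrayList<Rect>();
        for (MatOfPoint contour : contours) {
            MatOfPoint2f curve = new MatOfPoint2f(contour.toArray());
            MatOfPoint2f approx = new MatOfPoint2f();
            double epsilon = 0.02 * Imgproc.arcLength(curve, true);
            Imgproc.approxPolyDP(curve, approx, epsilon, true);
            if (approx.total() == 4) {
                targets.add(Imgproc.boundingRect(contour));
            }
        }
        return targets;
    }
}
```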

Depth data is a little more complicated. You might be able to do it based on luminance or color, but the simpler way is to base it on size: you know the exact dimensions of the shape you're dealing with, and its apparent size in the image shrinks in proportion to distance, so with a little trig you can calculate how far away it is.
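
Sketching out that trig under the same pinhole-model assumptions as above (the ~47° FOV and 640-pixel width are placeholders): the focal length in pixels follows from the field of view, and distance is real size times focal length divided by pixel size.

```java
public class DistanceEstimator {
    private static final double H_FOV_DEG = 47.0;   // assumption: measure your camera
    private static final int IMAGE_WIDTH_PX = 640;  // assumption: camera resolution

    // Pinhole camera model: an object of known real width that spans
    // pixelWidth pixels sits at distance realWidth * f / pixelWidth,
    // where f is the focal length expressed in pixels.
    public static double distanceInches(double realWidthInches, double pixelWidth) {
        double focalPx = (IMAGE_WIDTH_PX / 2.0)
                / Math.tan(Math.toRadians(H_FOV_DEG / 2.0));
        return realWidthInches * focalPx / pixelWidth;
    }
}
```

Plug in the target rectangle's real width from the game manual and the pixel width of the rectangle you detected, and you get range in the same units as the real width.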

I'm no expert on CV stuff but I've played around with it and I know some people who are interested.

Processing
It looks like 341 just ran their tracking software on the driver station (or some laptop). I'd say this might be easier to set up than dealing with another device.
If you're looking at alternatives, Andy mentioned other teams have successfully used the BeagleBoard to do their image processing. It looks like it's just an open ARM board that uses a TI chip. I'm not sure if TI is giving you discounts on those or anything, but if you want a lower-cost alternative, a friend of mine bought some of these when he was in China and may have some he'd be willing to sell. Otherwise, even with shipping, I think they're ~$75; they're basically a Chinese knock-off of the Raspberry Pi, but with better specs, which would help for CV applications.