It's working fine for me on both Windows and Linux.
I'm afraid I can't post the source code, but I can give you the basic idea and save you some time with a few of the tricks.
You'll want to use a USB sniffer to monitor the communication with the device while you're using Fujitsu's software, and then reproduce those messages with libusb.
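The replay structure can be sketched like this. The bytes below are placeholders, not the real protocol; with real hardware, `send` would wrap a libusb bulk or control transfer:

```python
# Hypothetical replay of a sniffed message sequence. The bytes here are
# placeholders -- the real sequence comes from your own USB capture.
INIT_SEQUENCE = [
    bytes.fromhex("0100000000"),
    bytes.fromhex("0210000000"),
]

def replay(send, sequence=INIT_SEQUENCE):
    """Send each sniffed message in order and collect the responses.
    `send` is any callable(bytes) -> bytes; with the real device it
    would wrap a libusb write followed by a read."""
    return [send(msg) for msg in sequence]
```

Keeping the transport behind a callable makes it easy to test the sequencing logic without the device plugged in.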
First, there's a fairly long initialization sequence. I have no idea what each message does, but the device behaves nicely throughout, always giving the same responses.
One of the messages is a long 307,200-byte packet. I have no idea what it means, but it seems important =)
If you find out what it is, please let me know; hard-coding it takes up too much space.
Then you start the main loop. You'll use some control messages to check the "built-in hand detector", or whatever they call it. It's a clever trick: the device casts 4 spots of light onto your hand and automatically detects their positions in the image. From this, it calculates the distance of your hand from the device.
The first thing to do in the main loop is to retrieve this information. I can't remember the format, but I only use 4 bytes of the packet, which indicate the calculated distance of each spot.
With this information, you can show messages like "Place your hand", "Too far", "Too near", "Tilted to the left", etc.
You can also use this information to estimate the resolution of the image, which is proportional to 1/distance.
I suppose you could also apply perspective correction to the image in case the hand is tilted, but I didn't go that far.
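A minimal sketch of what the feedback logic might look like, assuming the four per-spot distance bytes sit at the start of the status packet (the offset, thresholds, and calibration constant are all invented for illustration):

```python
def hand_feedback(status_packet, near=40, far=80, tilt_tol=10):
    """Turn the 4 per-spot distance bytes into a user message.
    Assumes the distances are the first 4 bytes of the packet;
    thresholds are made-up placeholders."""
    d = list(status_packet[:4])
    avg = sum(d) / 4
    if avg < near:
        return "Too near"
    if avg > far:
        return "Too far"
    if max(d) - min(d) > tilt_tol:   # spots at unequal distances => tilt
        return "Hand tilted"
    return "OK"

def estimate_scale(status_packet, k=1.0):
    """Relative image resolution, proportional to 1/distance.
    `k` is an arbitrary calibration constant, not from the device."""
    d = list(status_packet[:4])
    return k / (sum(d) / 4)
```

The spread between the four spot distances is what lets you detect tilt: if the hand were flat and parallel to the sensor, all four distances would be roughly equal.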
When you find the values to be in a nice range, you ask the device to capture the image.
It can capture 3 kinds of images. In the application I sniffed, all of them were always retrieved, but it works fine if you choose to retrieve just one:
- The light spots: the whole image is dark, and the 4 spots can be seen clearly over your hand. This image is 640×240, if I'm not mistaken.
- Camera, without LEDs: captures a single 640×480 frame of video. But since the infrared LEDs are off, you can't see the veins.
- Camera, with LEDs: captures a single 640×480 frame of video. The LEDs are now on, and you can see the veins. Unfortunately, this is the only image that is encrypted.
Luckily, the encryption is pretty simple: a plain XOR between the actual image and a fixed mask. Also, only the middle 240 lines are affected.
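Decryption then looks something like this, assuming an 8-bit greyscale 640×480 frame stored row-major, with the mask covering exactly the middle 240 rows (that layout is my assumption):

```python
WIDTH, HEIGHT, BAND = 640, 480, 240  # frame geometry; BAND = encrypted rows

def decrypt_frame(frame, mask):
    """XOR the middle 240 rows of a 640x480 8-bit frame with the fixed
    mask; the top and bottom 120 rows pass through untouched."""
    start = (HEIGHT - BAND) // 2 * WIDTH   # first encrypted byte (row 120)
    out = bytearray(frame)
    for i in range(BAND * WIDTH):
        out[start + i] ^= mask[i]
    return bytes(out)
```

Since XOR is its own inverse, the same function also re-encrypts a decrypted frame.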
To recover the mask, all I had to do was capture a strong light. Since the image is saturated ("too white"), all the pixels go to 0xFF, and the mask is the bitwise NOT of the returned data.
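The recovery step itself is a one-liner over the encrypted middle band (a sketch, same frame layout as above):

```python
def recover_mask(saturated_band):
    """Recover the XOR mask from the encrypted middle band of a frame
    captured under strong light: every true pixel is 0xFF, so
    mask = captured ^ 0xFF, i.e. the bitwise NOT of the data."""
    return bytes(b ^ 0xFF for b in saturated_band)
```

As a sanity check, XORing the captured band with the recovered mask should give back a solid 0xFF image.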
The only problem is that I have just one device to test with. It works every time with the same XOR mask, but I'm afraid this mask might be device-specific.
Another possibility is that the mask is related to the big 307,200-byte packet sent during initialization (note that 307,200 = 640×480, exactly one frame's worth of bytes). If that's the case, it would be nice to be able to generate an initialization packet such that applying the XOR isn't needed.
Even better, being able to generate these would save me ~500 KB of data.
If you find something else about it, please tell me.
Regards
Paulo