Leading the Way in Miniature Space Cameras: Prof. Kimura of Tokyo University of Science

Engineering model of the miniature camera developed by Prof. Kimura
for JAXA's REX-J mission on the ISS (International Space Station).
The structure is made of common Al-5052, with an M13 nut to adjust the focus of the M13 COTS lens.
Cameras have been the bane of my existence ever since I took on developing the imaging system for our two-kilogram satellite. The road has been dusty, full of uncertainties, and very, very frustrating in terms of both hardware and software. Yet there are people crazy enough to make the leap and gain specialized experience in a field very few undertake: building space cameras from scratch, and excelling at it.

Prof. Kimura is one such exception. Before becoming renowned for his miniature camera development for nanosatellite and micro-satellite technology (his camera flew on the 50 kg Hodoyoshi-3/4 satellites) and for space in general (taking selfies for the IKAROS solar sail mission), he was building cameras from scratch. I mean taking an image sensor, building a board and processing unit for it, and then making the software work. When I asked how long it took him, as a doctoral student with a wealth of knowledge of embedded systems engineering, he looked at me and said a year.

A year. A strenuous, frustrating year.

Some of his earlier work on camera development, which has already acquired flight heritage
"Most people don't bother building cameras," he adds which is true given that there is a wealth of modules available on the market. Why build it when already there?

"Since imaging modules are saturating the market, finding people who have a hobby of building cameras are rare." Rare they are.

I asked him, as someone who's just starting out and beginning to understand the basics of embedded cameras, whether there's any book worth looking into.

"Not really. Nope. I can see why you are asking me that but for me too, when I started out, there was no handbook that helped. Each case is different. Since there's no industrial standard, it's difficult to recommend any." It's a real shame, the information is so scattered that it's hard to find any definite book for anyone who's starting out. And I know that because I had the same issue.

Prof. Kimura (second from left) and Prof. Jeung (next to him on the right)
discussing camera development at Tokyo University of Science
One thing I was really curious to ask was which parameters of a processor define the performance of a camera (besides, you know, the image sensor itself). I had been confused about this for a long time while selecting the computer for the camera, and his answer was simple:

"Processor speed and the working memory"
What about FPU (Floating Point Unit)?
"I have had people say about how important that is but from my experience, I have never had to think about it"

So here are some conclusions I have drawn from our discussion on embedded cameras:

1) Camera performance depends on the image sensor (DUH!), the processing speed (MHz), and the amount of volatile working memory (the SDRAM, for example, needs to be large enough). And that's about it. (See the sizing sketch just after this list.)
2) Power lines have to be as short as possible.
3) It's better to have a dedicated oscillator for the camera itself.
4) Placing a serializer and deserializer on the data lines helps to a) reduce the number of wires and b) avoid noise.
5) Placing a ground plane underneath the oscillator is important.
6) A four-layer PCB is ideal; for compact designs, six layers, with one power plane, one ground plane, and two signal planes.
7) His earlier cameras all ran on bare-metal software, meaning no RTOS and no Linux (which also rules out OpenCV).
8) Radiation issues in SDRAM, such as SEL (Single-Event Latch-up), can be solved by a reset. My understanding was that the high current due to the latch-up could damage the hardware, and that the reset had to happen quickly to prevent it. Both turned out to be unnecessary: the reset can be done with a delay. (A recovery sketch follows this list.)
9) Image sensors, like processors, have analog power inputs. The general understanding is that these require a separate ground, since analog circuits are sensitive to high-frequency noise, and keeping the digital and analog grounds apart reduces it. According to Prof. Kimura, however, compact designs often mean these grounds end up being the same. If they are designed to be separate, they can be connected through a ferrite bead.
10) RGB Bayer-filter image sensors can be turned into GR-NearIR sensors using a triple band-pass filter physically added to the optics structure; my understanding had been that the image sensor had to be manufactured with the filter built in.
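
Point 1 is easy to sanity-check with arithmetic. The sketch below is a minimal sizing estimate, assuming a hypothetical 5 MP sensor (2592x1944), 8-bit raw Bayer output, and a 48 MHz pixel clock; these are illustrative numbers, not Prof. Kimura's actual figures. It shows why clock speed and working memory dominate: one raw frame is already about 5 MB, and double-buffering doubles the SDRAM requirement.

```c
#include <stdio.h>

/* Back-of-the-envelope sizing for a camera's working memory.
 * All numbers are illustrative assumptions: a hypothetical 5 MP
 * sensor (2592x1944), 8-bit raw Bayer output, 48 MHz pixel clock. */
int main(void)
{
    const unsigned width    = 2592;   /* pixels per row  */
    const unsigned height   = 1944;   /* rows per frame  */
    const unsigned bytes_px = 1;      /* 8-bit raw Bayer */
    const double   pclk_hz  = 48e6;   /* pixel clock     */

    double frame_bytes = (double)width * height * bytes_px;
    double readout_s   = (frame_bytes / bytes_px) / pclk_hz; /* one pixel per clock */

    printf("Raw frame:   %.1f MB\n", frame_bytes / 1e6);
    printf("Readout:     %.0f ms per frame\n", readout_s * 1e3);
    /* Capturing one frame while processing another (double-buffering)
     * doubles the volatile memory requirement. */
    printf("Two buffers: %.1f MB of SDRAM minimum\n", 2.0 * frame_bytes / 1e6);
    return 0;
}
```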
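
And for point 8, here's a rough sketch of what a delayed latch-up recovery could look like in bare-metal C. The trip threshold and the current-sense, power-switch, and re-init functions are hypothetical placeholders (stubbed here so the sketch runs on a desktop), not code from the Kimura lab.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define SEL_CURRENT_LIMIT_MA 400u  /* assumed trip threshold */

/* Hypothetical placeholders for the real board support package;
 * these stubs simulate a single latch-up event. */
static uint32_t read_rail_current_ma(void) { static int hit = 1; return hit-- > 0 ? 900u : 120u; }
static void rail_power(bool on)   { printf("SDRAM rail %s\n", on ? "ON" : "OFF"); }
static void sdram_reinit(void)    { printf("SDRAM re-initialized\n"); }
static void delay_ms(uint32_t ms) { (void)ms; /* timer or busy-wait on real hardware */ }

/* Poll the SDRAM rail's current sense; on a suspected SEL, power-cycle
 * the rail and re-initialize the memory. Per point 8, the cycle does
 * not have to be immediate: a delayed reset is enough to clear it. */
static void sel_watchdog_poll(void)
{
    if (read_rail_current_ma() > SEL_CURRENT_LIMIT_MA) {
        rail_power(false);
        delay_ms(100);     /* let the latched current path discharge */
        rail_power(true);
        sdram_reinit();
    }
}

int main(void)
{
    sel_watchdog_poll();   /* trips on the simulated overcurrent */
    sel_watchdog_poll();   /* current back to normal: no action  */
    return 0;
}
```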

The reason we ended up there in the first place is that we are currently collaborating with the Kimura lab on our high-performance, medium-resolution camera for 30 m GSD remote sensing. We realized that we required not just RGB but also Near-IR information, and since Prof. Kimura recently helped develop the MCAM, which can provide GR-NearIR data, collaboration and technology exchange seemed important. Prof. Kimura was happy to oblige.
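
For context, the 30 m figure follows from the usual pinhole relation GSD = altitude x pixel pitch / focal length. The little check below plugs in an assumed 500 km orbit and 2.2 um pixels (illustrative numbers, not the actual SNUSAT-II design values) to see what focal length such a target implies.

```c
#include <stdio.h>

/* Sanity check on a 30 m GSD target using the standard pinhole
 * relation GSD = altitude * pixel_pitch / focal_length.
 * Altitude and pixel pitch are illustrative assumptions. */
int main(void)
{
    const double altitude_m    = 500e3;    /* assumed LEO altitude      */
    const double pixel_pitch_m = 2.2e-6;   /* assumed sensor pixel size */
    const double gsd_m         = 30.0;     /* target ground resolution  */

    double focal_length_m = altitude_m * pixel_pitch_m / gsd_m;
    printf("Required focal length: %.1f mm\n", focal_length_m * 1e3);
    return 0;
}
```

With those assumptions the required focal length comes out to roughly 37 mm, comfortably within COTS lens territory.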

Some images from the visit:

Prof. Jeung giving the opening remarks for the presentation on space cameras
Presentation on space camera development at Seoul National University,
delivered at Tokyo University of Science, Japan
Pinhole camera structure with a lens, developed by the Kimura Lab
An OmniVision (OV) sensor module, as compact as it can be
The image sensor has a wealth of flight heritage and has shown considerable radiation tolerance

Meeting on camera hardware and software development for SNUSAT-II
