SNUSAT-1 CubeSat Camera Development Daily Report (CDDR) - Day 1


As of now, I have decided to use the blog as a base to record my research on a daily basis. I have grown a habit of breaking a lot of promises lately, but I do intend to keep this one super-glued and intact. The whole idea stems from the fact that I will eventually have to write reports and papers on the research I have been doing, so why not document it in small, discrete portions every day. The only difference is that I won't be posting this on Facebook or sharing it on social media as I usually do.

Ok, hmmm, so where should I start?

As I mentioned in the previous blog, the GoPro camera will not be used as the imaging payload for SNUSAT-1, so I basically have to start from the very beginning. I was always afraid that someone might push the reset button on this one, but now, looking at some of the things I have recently learned, I do feel it was necessary.

In other words, I was barely scratching the surface of camera development. With the GoPro, everything was already there: the software, the hardware, the processor, the image sensor and the lot. All I had to do was find an external way to control it and extract images from it. I managed that, but only with Arduino and Processing, using basic prototyping hardware such as the Arduino itself, which runs a much simplified, high-level language. Add to that the host of libraries available, and what you end up programming is about as simple as it can possibly get, like sending a single code and having the camera take a picture. Real camera development is not as simple as that, is it?

Getting on with the back-to-the-basics, build-from-scratch kind of thing, I was first assigned to select a microprocessor suitable for hosting a CMOS camera module (I will touch on the camera module later). The companies producing them include TI, STM and Atmel. Among these, the STM32F1/F2/F3/F4 series with Cortex-M3 and Cortex-M4 cores seemed to be the most popular, although TI's 16-bit MSP430 has been used in a number of CubeSat missions as well.

Since the processor needs a way to control a CMOS camera module, a processor with a DCMI (Digital CaMera Interface) has been chosen. The one I have seen in use is the STM32F407, used on Stanford's SNAP nano-satellite. I am really not sure what the DCMI actually does, how it works, or what character it corresponds to from the TV series The Sopranos, but what I do know is that it is possible to control a CMOS module without one. TTL serial camera control through an Arduino illustrates the point; however, doing it on a non-development board could make a difference. Meaning that, in practice, a processor with a DCMI can serve as a camera processor while one without it cannot. (Apparently having a DCMI makes it a lot easier to interface a camera. I should have just guessed that from the name, duh!)

Sticking with the processors, I found that people have been doing a lot of camera-based projects on the STM32F407 MCU with the STM32F4 development board. It's priced at $20 here in Korea, and I would very much like to get one of these and try it out with the camera modules we already have at the lab. Right now I have an MT9D111 VGA camera module with me, amongst others.

Some of these works and the DCMI JPEG issues have been linked HERE, HERE and HERE.

The reason for worrying about JPEG is that it's the format I want the camera to compress to. A VGA-resolution image will be about 40KB, but I might be able to work my way down and decrease it once I have a better understanding of how the compression works and how I can access it.

Some of the initial conditions I have set for the camera are:
1. CMOS sensor; less power, less hardware
2. Needs to have I2C communication. Really not sure why I put this; I need to do some explaining to myself. And some convincing too.
3. JPEG compression.
4. Input of 5V? Most modules I have seen take 3.3V as input. Does this really make a difference?

Some constraints have been set by modelling the GoPro camera, although the power consumption of other modules will be drastically lower:
<100g and <1.2W

These would be the constraints for the overall camera payload, not just the module. The power consumption will also depend on the clock speed of the processor (I am just finding that out). The processor of my choice, the STM32F407, has a maximum clock speed of 168MHz, but I read it's possible to decrease it.

I think the focus for me should also be on properly learning how video becomes stills, stills become lines, lines become pixels, and pixels become 8-bit data. That's in the case of monochrome or YCbCr. There's still much to learn about how they get stacked up in registers and how they can be accessed.

There's a nice tutorial about understanding pixels at one of the links provided above.

Vocab I came across but couldn't understand: Data Count Registers, FIFO, DMA, NDTR

I guess one of the primary inspirations for this camera development has been Henri Kuuste, who has also been kind enough to respond to my emails and send me this [paper] before it was even published. His [camera] can be seen here.

Time to go get some sleep.



