Complete Guide to Designing Camera Hardware for Space/Embedded Systems

Camera design back in 2014, when I was starting out.
The design is actually a copy of EstCube-I's imaging payload.

I am currently responsible for overseeing the electrical system design of SNUSAT-1 and SNUSAT-2 (Seoul National University's CubeSats), including the mission-critical camera (SNUSAT-2) and on-board computer design. When I first started out in early 2014, I was entrusted with the task of integrating a GoPro Hero 4 camera into the system. Much has changed since then, however, and the design has moved to a modular, custom-designed camera. Since this had never been done before, the team went through a lot of confusion, pain, anger, depression and frustration to come up with a viable solution for an imaging payload. Although a lot of work still remains software-wise, SNUSAT-I is starting to see an increased level of maturity in understanding camera hardware architecture. This post is intended to document that.

Stripped GoPro Hero 4 with Near-IR capability for night imagery.
The payload was tested on the Korea Cansat Competition 2014.
The team came third in the competition.

One should, before building a camera, understand the parameters that make for a higher-performance camera. Prof. Kimura of Tokyo University of Science [HERE], who has worked on the imaging payloads of the KITE and Hodoyoshi 3 & 4 satellites, outlines two primary factors:

1. The speed of the processing unit (the higher the clock speed, the better the performance)
2. Volatile storage (the more storage, the better the performance)

Take a moment to take that in. Just two factors (besides the imaging sensor itself) are responsible for overall performance. Choose a processor with a higher clock speed (preferably one with a camera interface), choose an SDRAM/SRAM with more space, and start working on the hardware.

Prof. Sinichi Kimura holding his modular image processing board

The big question, though, is when do you actually start designing the hardware? One mistake I made was to work on the hardware and software in parallel. Ideally this should work, but without much experience there is a good chance that one of them, either the software or the hardware, will be malfunctioning. This not only makes it harder to sort out what the issue is but also wastes a lot of time.

So the idea is to take one step at a time: start with either working software or working hardware. If you have confidence in one of them, it's easier to develop the other and test as you go.

With the PixelPlus POA030R VGA CMOS sensor that I was working with in 2015, I was doing both: building the hardware and the software at the same time. When issues with the camera arose, I couldn't tell whether the hardware I designed or the software was at fault.

Image module designed for the POA030R. V2.1 is in focus while V1.0 is in the background.
The sensor responded over I2C and the register values were correct, but it did not output image data.
Because I was building both hardware and software, I am still not sure what the problem was.

To ensure that the hardware is working, the following steps can be undertaken:
1) Buy a development kit for your processor
2) Buy an image module for your camera sensor
3) Buy a breakout module with the selected SDRAM
4) Buy a uSD module to store data

STM32 Development Board with MT9D111 image sensor mounted.
This ensures that your initial hardware is working

STEP A: Work on a base code

Anyone who has worked on a camera will tell you how painful it is to work on the software. Datasheets are long and boring, and there are a shitload of registers to configure. So this will take time and a lot of patience.

My first priority was to get the I2C (two-wire interface) working. Most cameras (ON Semiconductor, PixelPlus, ClairPixel) have an I2C interface for control (not for the image data). If you are working with OmniVision sensors, they have renamed I2C to SCCB, which is basically I2C without the acknowledgment and with some delays, so with appropriate tweaking of an I2C library you should be able to get it working.
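To make the SCCB quirk concrete, here is a minimal bit-banged sketch of the 3-phase SCCB write cycle. This is illustrative only: the sda/scl callbacks are hypothetical placeholders for whatever GPIO routines your platform provides (HAL_GPIO_WritePin on an STM32, for instance), and real code would also need the bit-timing delays the sensor asks for. The mock pin drivers at the bottom exist only so the framing can be checked off-target.

```c
#include <stdint.h>

/* Hypothetical pin-control callbacks: on the real board these would
 * toggle the GPIO lines (e.g. with HAL_GPIO_WritePin on an STM32). */
typedef struct {
    void (*sda)(int level);
    void (*scl)(int level);
} sccb_pins_t;

/* Clock out a single bit: set SDA, pulse SCL high then low. */
static void sccb_bit(const sccb_pins_t *p, int b) {
    p->sda(b);
    p->scl(1);
    p->scl(0);
}

/* Shift one byte out MSB-first. SCCB's 9th "don't-care" bit replaces
 * the I2C ACK, so the master just clocks it out without reading SDA. */
static void sccb_byte(const sccb_pins_t *p, uint8_t b) {
    for (int i = 7; i >= 0; i--)
        sccb_bit(p, (b >> i) & 1);
    sccb_bit(p, 1);               /* 9th bit: don't-care (ACK slot in I2C) */
}

/* 3-phase SCCB write: device address, sub-address (register), data. */
void sccb_write(const sccb_pins_t *p, uint8_t dev, uint8_t reg, uint8_t val) {
    p->sda(0); p->scl(0);         /* start: SDA falls while SCL is high */
    sccb_byte(p, dev);
    sccb_byte(p, reg);
    sccb_byte(p, val);
    p->scl(1); p->sda(1);         /* stop: SDA rises while SCL is high */
}

/* Mock pin drivers for testing off-target: record the SDA level
 * seen at every rising SCL edge. */
static int cap[64], ncap, sda_lvl;
static void mock_sda(int l) { sda_lvl = l; }
static void mock_scl(int l) { if (l) cap[ncap++] = sda_lvl; }
```

The point to notice is the 9th bit: a standard I2C master would release SDA there and read back an ACK, while SCCB simply clocks out a don't-care bit, which is why a stock I2C peripheral sometimes needs tweaking to talk to OmniVision parts.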

The second priority was to save the data to internal memory, if space permits. If not, depending on the image format (RGB555, RGB888, YCbCr), external memory might be needed. For that, my development kit has an 8 MB (64 Mb) SDRAM in place.
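A quick back-of-the-envelope check shows why external memory is usually unavoidable. The numbers below assume VGA (640x480) and UXGA (1600x1200, the MT9D111's full resolution) frames at 2 bytes per pixel (e.g. RGB565); the frame_bytes helper is mine, just for illustration.

```c
#include <stdint.h>

/* Bytes needed to buffer one raw frame. */
static uint32_t frame_bytes(uint32_t w, uint32_t h, uint32_t bytes_per_px) {
    return w * h * bytes_per_px;
}

/* VGA  at 2 bytes/pixel:  640 *  480 * 2 =   614,400 B (~600 KB)
 * UXGA at 2 bytes/pixel: 1600 * 1200 * 2 = 3,840,000 B (~3.7 MB)
 * Both exceed the internal SRAM of a typical Cortex-M4 part, which
 * is why the SDRAM on the development kit earns its place. */
```

Even a single uncompressed VGA frame is too big for on-chip SRAM on most microcontrollers, so either the sensor must output compressed JPEG or the frame has to stream into external SDRAM.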

Once you have data coming out, you then want to store it on a non-volatile disk like a microSD card. Remember, I am doing all of this on the modules that I bought, so I have confidence in the hardware. This allows me to focus completely on my software.

Getting some data out from the camera and sending it to the serial port.
The data was later saved on the uSD card and the camera was configured to send JPEG.

IMPORTANT: Camera modules require you to supply an XCLK (external clock) signal. This is what drives the system inside the camera; without it, the image sensor won't work unless the module has an external oscillator. I spent 6 months trying to figure that out. No datasheet will say it, because it's so obvious to them.
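On an STM32 the XCLK can come from the MCO pin (HAL_RCC_MCOConfig() routes HSE or PLL clocks out with a fixed divider) or from a timer in PWM mode. As an illustrative sketch, here is the divider arithmetic for the timer route; the helper name is mine, and 168 MHz is the F429's maximum timer clock, so your clock tree may differ.

```c
#include <stdint.h>

/* For a timer in PWM mode with prescaler = 0, the output frequency is
 *   f_out = f_timer / (ARR + 1)
 * Returns the ARR value for an exact divide, or 0 if the target
 * frequency cannot be generated exactly. A duty cycle near 50% is
 * fine for most image sensors' XCLK inputs. */
static uint32_t xclk_arr(uint32_t timer_hz, uint32_t xclk_hz) {
    if (xclk_hz == 0 || timer_hz % xclk_hz != 0)
        return 0;
    return timer_hz / xclk_hz - 1;
}
```

For example, a 168 MHz timer clock divided down for a 24 MHz XCLK gives ARR = 6. Check the sensor's datasheet for its allowed XCLK range before picking the frequency.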

STEP B: Build the Schematics

Here's the thing: if you are just starting out, you should basically copy the pin layout of the development kit. The one I am using is the Open429Z-D, which has its schematics available. This lets you make sure that the pin connections you were using while developing the software will work on the custom hardware. In my case, since I am also using the SDRAM, some pins conflicted and needed to be changed. Besides that, I have tried to keep the exact pin connections I used while working on the camera software.

I have laid my schematics out below.
Note that I have not used the low-frequency oscillator, which is intended for the Real Time Clock (RTC).

Camera MCU schematics
Please read the necessary document [HERE] if using the STM32F4xx

Memory for camera 

Some important things to note:
1) I2C lines must have pull-ups to logic high (3.3V in my case)
2) SPI lines should have pull-ups to logic high (3.3V in my case)
3) Bypass capacitors are necessary; 0.1uF is fine.
4) External oscillators require load capacitance and resistance as shown. To calculate the values, refer to the document [HERE]
5) The analogue power input is better off with an inductor in place. The value is shown in the schematics.
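For point 1, the pull-up value isn't arbitrary. The bounds below come from the I2C-bus specification; the helper functions are just a sketch, and the example numbers (3.3 V rail, 0.4 V VOL at 3 mA sink, 1 us rise time into 100 pF) are typical standard-mode figures, not values from my schematic.

```c
#include <math.h>

/* I2C pull-up bounds (from the NXP I2C-bus specification, UM10204):
 *   Rmin = (Vdd - VOL_max) / IOL_max       (sink-current limit)
 *   Rmax = t_rise_max / (0.8473 * C_bus)   (RC rise-time limit)  */
static double i2c_rmin_ohm(double vdd, double vol_max, double iol_max) {
    return (vdd - vol_max) / iol_max;
}
static double i2c_rmax_ohm(double t_rise_s, double c_bus_f) {
    return t_rise_s / (0.8473 * c_bus_f);
}
```

With those example figures, Rmin comes out near 1 kOhm and Rmax near 12 kOhm, which is why values like 4.7 kOhm are such a common default.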

STEP C: Create the structure

For satellite applications, especially nano-scale satellites such as CubeSats, volume poses an important restriction on the design. Accordingly, the PCBs have to be designed in such a way that they fit the given volume constraint. Since both the PCB and the camera optics have to be mounted, a camera structure has to be designed.

Structure design for GoPro Camera to be mounted on the satellite
Notice that the optics have no support
This was back in 2014.

Besides volume constraints, attention has to be given to radiation shielding and mass. The initial design in 2015 for SNUSAT-I used a ClairPixel image sensor, and the structure was tested in the Satellite Testing Tutorial which took place at Kyutech last year.

Camera structure designed in 2015.
Structure using Al-6061; the lens mount is also custom designed and has been black anodized

The camera structure had to be redesigned after the MT9D111 was selected as the image sensor. Instead of building the camera from scratch, a module available on the market has been used.

Camera structures designed by Prof. Kimura

The design by Prof. Kimura struck a chord with me because the camera was highly modular, had an embedded M12 x 0.5 thread as the lens mount, and was very simple. Copying his design, I tried to make something similar for SNUSAT-I.

Volume constraints giving a hard time

The issues with that were that 1) it was bulky, 2) the top portion added unnecessary mass, and 3) there were some serious manufacturing issues.

After consulting with the system design engineer, I came up with a simplified, stripped-down version that still keeps the essence of Prof. Kimura's design: using the camera structure itself to mount the optics rather than having an additional mounting structure.

Revised design, with smaller footprint and less mass

One thing I have noticed while designing is that the manufacturing process has to be thought through beforehand. For example, I am going to use a milling process, so I have to think about how a single block of Al can be turned into the structure I want. The rounded edges that you see in the design are there because a single 3mm drill bit is going to make the holes, so a fillet of radius 3 has been applied to edges where sharp corners are not possible.

In regard to radiation shielding, especially total ionizing dose, an Al-6061 thickness of 1.5mm is sufficient for our mission, according to Vishnu, who did his master's in radiation testing of imaging sensors. The thickness used on the structure is 2mm.

The camera just about fitting in the volume provided
Final rendered image

STEP D: Create the layout for PCB

Once you have the dimensions of the PCB, you can work on fitting the components into the space you have. This can always be tricky.

Things you have to take care of:

1) Spacing from the board cutout should be at least 50 mils if 2oz copper thickness is used
2) The trace width for a 1oz pour carrying 1A is 12 mils on outer layers and 30 mils on inner layers
3) The bypass capacitors have to be placed very close to the processor/SDRAM
4) The oscillator has to be placed as close as possible to the processor
5) The oscillator also needs to be shielded by ground
6) No trace should run beneath the oscillator
7) Power lines (3.3V and GND) have to be as short as possible; an additional power plane helps

Final layout, Cu pour done on both top and bot connected to GND

I will probably have to write a separate post on manual + autoroute, trace widths, vias and placing components like the oscillator and bypass capacitors, but for now here's the final layout. It wasn't easy fitting all the components on a 40 x 45mm board, but there you go, it got done. Plus I cheated a bit with a 2oz pour for smaller trace widths and 4 layers (Top, GND, 3V3, Bot). If you electrical majors have any suggestions regarding EMI, post them below.

Showing Top plane with gigantic LQFP144 STM32F429ZI package
Red: Oscillator, you will notice the trace is as short as possible
Black: Bypass capacitor, that's how the trace to the processor should be

Anyway, there you go: from concept design to structure to a custom PCB for the camera hardware. The software won't be this easy, but as they say, one step at a time.

