SNUSAT II Imaging Payload SDR Documentation


I have been taking time off from my usual blogging duties, mainly because there are only 24 hours in a day and 60 minutes in an hour, as opposed to my proposed 30+ hours a day with an extra 30 minutes in each hour; only then would I be able to clean up the scheduling and time-mismanagement mess I have got myself into. It's no joke that my alarm rings at 4 a.m. these days, only to get snoozed about a million times before I finally force my buttocks into the shower.

With the dreaded December approaching, things are only going to get worse. So I thought, why not procrastinate a little, worry a day before all the paper deadlines and extended take-home-exam deadlines arrive, and write a blog post instead?

System Design Review for SNUSAT-II last Friday
As you might know, Seoul National University is, at this point in time, building three satellites: the incomplete SNUSAT-I, the newly initiated SNUSAT-II, and the GPS lab's satellite, whose name should have SNU in it somewhere. The documentation here presents the presentation I gave for the System Design Review (SDR) of SNUSAT-II's payload development and the questions raised afterwards. In the few days since, I have tried to come up with solutions, even though those solutions are as vague as they can possibly be.

So let's get warmed up, shall we?


Currently our laboratory is collaborating with Prof. Kimura of Tokyo University of Science to help us design our HIRES (High Resolution Camera). Prof. Kimura has already been involved in satellite imaging solutions for systems such as IKAROS and HODOYOSHI-3/4, and he and his team bring in a wealth of experience. In a way, we want to reach the level they are at right now.



The ground-level requirements for the payload system are described on the next slide. Besides the mass, volume and power requirements, the FOV (Field of View) requirements are mission critical, meaning the right hardware has to be chosen specifically to meet them. One important thing to note is that the GSD (Ground Sampling Distance) only counts as "high resolution" if the GSD < 10 m; our requirement, however, is GSD < 20 m. Technically, that lies in "medium resolution" territory.
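As a sanity check on those numbers, the nadir GSD follows directly from pixel pitch, altitude and focal length (GSD = p·H/f). A minimal sketch, assuming 3.75 µm pixels (an AR0134-class sensor) and our 500 km orbit; the 93.75 mm focal length below is purely illustrative, not a chosen design value:

```python
def gsd_m(pixel_pitch_um, altitude_km, focal_mm):
    """Nadir ground sampling distance: GSD = p * H / f, all converted to metres."""
    return (pixel_pitch_um * 1e-6) * (altitude_km * 1e3) / (focal_mm * 1e-3)

# 3.75 um pixels from 500 km: a ~94 mm lens sits right on the 20 m boundary,
# so anything shorter than that pushes us out of even "medium resolution".
print(gsd_m(3.75, 500, 93.75))  # 20.0
```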


The current issue we have with building satellites is limited human resources. Each individual is responsible for more work, which makes it important to overlap hardware designs so that the same system can be applied to multiple subsystems. What I mean is that, for instance, the camera and the star tracker used for attitude control could share the same hardware. This not only shortens development time, but saves money and could promote healthy teamwork, as the approach can be seen as a win-win.

However, in the last couple of days, I have found a couple of problems with trying to build a modular system, as different subsystems demand different requirements. Even if the hardware is the same, different volume constraints mean those limiting factors have to be taken into consideration before actually making the hardware. Additionally, if one subsystem wants to change a critical component in the design, it has a ripple effect on all the other subsystems that have already embraced the philosophy.

Nevertheless, the pros outweigh the cons. Especially on the software side of things.


Moving on to the actual substance and not the style, the camera's processor has to be selected first. In the current era of embedded systems, there is a multitude of GPPs (General Purpose Processors), DSPs (Digital Signal Processors), FPGAs (Field Programmable Gate Arrays), SoCs (Systems on Chip) and GPUs (Graphics Processing Units). Yet, with the ARM Cortex-M architecture, GPPs have become much faster and more reliable, and now include an FPU (Floating Point Unit) along with built-in DSP instructions. The only question was whether an FPGA or an ARM Cortex GPP was the more practical approach.

Green signifies the better selection.


Much of how we tackle designing a system also depends on using components that have a measure of heritage to them. SNUSAT-I's non-critical camera payload uses an STM32F429ZI processor, while Interface Board III has a PIC24FJ256GA soldered to it as a node computer. These processors were then compared to a newer processor on the market. If there is a significant improvement in both peripherals and processing speed, the newer processor will be chosen.

As you will notice, jumping from the PIC24 to the STM32F4 makes sense, but changing the processor to the STM32F7 is not as convincing. Additionally, the PIC24 lacks a dedicated camera interface, making it much more difficult to code for than its STM counterparts.


As mentioned earlier, I am working with Vishnu, who is responsible for designing the star tracker. He has done some pretty deep research into selecting image sensors for his payload, so I asked him for a sensor he has been thinking about implementing. His recommendation was ON Semiconductor's AR0134. Its specs were compared against a similar, domestically produced image sensor from PixelPlus (whose VGA sensor is being used on SNUSAT-I). Although its SNR (Signal to Noise Ratio) and pixel size are comparatively bigger, the AR0134 has significantly lower power consumption at its maximum fps (frames per second).

On a recent note, what exacerbates my stance on PixelPlus' sensors is that their documentation is poorly done, and even with constant nagging, they don't seem to have a proper system for checking whether all the documents needed for software development have been provided to their customers.


Once the sensor selection for the WAC was complete, the lens had to be selected according to the image format (1/3 inch) of the CMOS (Complementary Metal Oxide Semiconductor) sensor. Given that the satellite's altitude is 500 km, an angular field of view of 16 degrees was calculated. At the time of the presentation, I had claimed that this was the HAFOV (Horizontal Angular Field of View); a later look showed I had actually been calculating the DAFOV (Diagonal Angular Field of View). This means the lens selection calculation has to be done again.
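To see how much the mix-up matters, here is a quick sketch. It assumes the standard 1/3-inch active area of 4.8 mm × 3.6 mm (6.0 mm diagonal); the focal length is simply whichever one produces the 16-degree figure diagonally:

```python
import math

# Standard 1/3" optical format: 4.8 mm x 3.6 mm active area, 6.0 mm diagonal
SENSOR_W_MM, SENSOR_H_MM = 4.8, 3.6
SENSOR_DIAG_MM = math.hypot(SENSOR_W_MM, SENSOR_H_MM)

def afov_deg(dim_mm, focal_mm):
    """Angular field of view subtended by one sensor dimension."""
    return math.degrees(2 * math.atan(dim_mm / (2 * focal_mm)))

def focal_for_afov(dim_mm, afov):
    """Focal length that yields the requested AFOV over one dimension."""
    return dim_mm / (2 * math.tan(math.radians(afov / 2)))

f = focal_for_afov(SENSOR_DIAG_MM, 16.0)  # lens sized for a 16 deg *diagonal* FOV
print(f"f = {f:.1f} mm, HAFOV = {afov_deg(SENSOR_W_MM, f):.1f} deg")
# The 16 deg figure shrinks to about 12.8 deg horizontally, which is why
# mistaking DAFOV for HAFOV forces the lens calculation to be redone.
```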


The following image shows a pan-sharpened image of Seoul National University's Gwanak campus, to give the audience a sense of how much ground a 30 m GSD represents.


The overall specifications of the camera architecture clash with the newer volume constraints and need to be addressed soon. The camera turned out to be bigger than expected and did not fit in the satellite's CAD (Computer Aided Design) model.


The preliminary block diagram looks similar to the SNUSAT-I design. The issue raised was why I had chosen two PCBs (Printed Circuit Boards) and not one, to which I replied that the Prof. had a good point. Having two PCBs usually means the system will be heavier and more expensive, but it allows more board area within a constrained volume. The number of PCBs has to be chosen based on the trade-off between component size, mass and the volume requirement. This will be done before the PDR (Preliminary Design Review).



As I mentioned before, unlike the camera payload on SNUSAT-I, SNUSAT-II's optical payloads are mission critical. The WAC has to be able to detect a point of interest (fire, volcanic eruption, flood), relay that information to the HIRES, and have it take a high-resolution image of that particular point.

Here I evaluate the quantum efficiency (the percentage of incident photons converted to electrons in the image sensor, which defines its sensitivity) across different spectral bands. While the RGB (Red Green Blue) spectral responses are above 50%, as one would expect, the chosen sensor is also sensitive to NIR (Near Infrared), with an acceptable quantum efficiency of about 30%.

The importance of the sensor being sensitive to NIR will be discussed further in the coming slides.

Interestingly enough, Prof. Kimura and his team have been able to take a normal RGB Bayer-filtered CMOS sensor and apply a triple-band filter (IRG [Near Infrared, Red, Green]) to obtain an IRG (duh!) image. Over the past few days, I have been trying to understand just how that was possible, and it does look like a special custom-made filter was coated on top of the normal sensor to achieve it.

It does, however, raise some interesting questions, such as how exactly the coating was done. As we know, the filter coating should correspond to the pixel size, just as the RGB Bayer coating does.

It might not be a coating at all. The lens construction could have been customized to cut out certain wavelengths. Further study in this regard has to be made. And soon.


The reason I am spending time chasing NIR is that, beyond helping detect a POI, NIR plays an important part in allowing the computer to identify fire, snow or even clouds.


The RGB bands can be mixed with the NIR band to produce indices which help us separate water from snow, vegetation from landslides, and so forth.

For example, if clouds had to be detected, the white pixels in an image can first be isolated, then the NDSD (Normalized Difference Snow Detection) applied to them and checked for values greater than 0.4, separating white pixels that mean clouds from white pixels that could mean snow. The implications of choosing a particular cloud have yet to be thought of, in all honesty.
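The thresholding step above can be sketched in a few lines of NumPy. One caveat: the textbook normalized-difference snow index pairs the green band with a SWIR band (snow absorbs SWIR, clouds do not), whereas our sensor only offers RGB + NIR, so the band pairing and the toy reflectance values here are illustrative assumptions rather than the flight algorithm; only the 0.4 threshold comes from the text:

```python
import numpy as np

def snow_index(green, swir):
    """Normalized difference of two bands: (G - SWIR) / (G + SWIR).
    Snow is bright in green but dark in SWIR, so it scores high;
    clouds stay bright in both bands and score near zero."""
    green = green.astype(np.float64)
    swir = swir.astype(np.float64)
    return (green - swir) / np.maximum(green + swir, 1e-9)  # avoid divide-by-zero

# Toy 2x2 scene of already-isolated "white" pixels (reflectances 0..1)
green = np.array([[0.80, 0.75], [0.82, 0.78]])
swir  = np.array([[0.08, 0.70], [0.65, 0.09]])

snow_mask = snow_index(green, swir) > 0.4  # index > 0.4 -> snow, else cloud
print(snow_mask)  # snow at [0,0] and [1,1]; the other two bright pixels are cloud
```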



The HIRES camera will be developed in cooperation with Prof. Kimura, as mentioned in the previous slides, and some of the designs he has put forth were compared. Given the GSD requirements for HIRES, only HCAM and CANAL-1 fit the bill; however, HCAM's 4 kg mass meant that CANAL-1 was the only viable solution. The GSD values were originally given for a 650 km altitude but were re-calculated for the 500 km orbit.
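Re-scaling those catalogue values is straightforward: for fixed optics, nadir GSD is proportional to altitude. A one-liner, where the 13 m at 650 km input is a hypothetical figure rather than a quoted CANAL-1 spec:

```python
def rescale_gsd(gsd_m, alt_from_km, alt_to_km):
    """For fixed optics, nadir GSD scales linearly with orbital altitude."""
    return gsd_m * alt_to_km / alt_from_km

# A camera quoted at 13 m GSD from 650 km improves at our lower orbit:
print(rescale_gsd(13.0, 650, 500))  # 10.0
```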


As of now, I still have to get together with Prof. Kimura and talk to him about the hardware development he recommends. The hardware specifications from his camera's previous mission are listed on the next slide. The power consumption is high due to the use of an FPGA.

Risk assessment and mitigation strategies had to be envisioned as well. They are presented below.


In the time ahead, a proper road map has to be made in order to prioritize what needs to be done first. Should I get a proper understanding of the filters first? Or should I select the POI and worry about that later? Should I start ordering components only after that? How much buffer time do I need for the components to arrive, given that they are available?

These are all the minute scheduling and prioritizing decisions that need to be made in the days ahead.
But for now, sleep will be on the top of my list.


Comments

  1. Very good and elaborate description. I enjoyed reading and understanding it. With the band math and the combination of indices, is it also possible to measure the depth of snow and the depth of water bodies, in order to prepare a digital terrain model of snow (volume of snow) and water bodies (bathymetric survey)? Will CANAL-1 have that functional capacity?

    Replies
    1. I have not actually looked into finding depth using passive sensors like a multi-spectral imager, so from what I know, it's likely to be difficult to determine the depth of water or snow using just the NIR, MIDIR and RGB spectral bands. Even if an estimation of the depth were possible, the accuracy might be way off the mark. For now, the visible and NIR spectra allow us to determine area rather than volume, and we are planning on exploiting that possibility.

