Significant advances have been made in developing normal-incidence-sensitive quantum-dot infrared photodetectors (QDIPs) for midwave- and longwave-infrared imaging systems. QDIPs with nanoscale asymmetric quantum-dot structures can exhibit spectral responses that are tunable through the applied bias voltage. This makes it possible to build a spectral imaging system in the infrared range based on a single QDIP, without any dispersive optics in front of the detector. Furthermore, unlike conventional systems whose spectral bands are fixed across tasks, which leads to data redundancy, a QDIP-based system can be operated adaptively to the scene if different sets of operating bias voltages are selected for different tasks. To achieve such adaptivity, optimization algorithms must be developed that find a scene-based set of operating bias voltages maximizing the spectral content of the output data while reducing data redundancy. In this paper, we devise a series of optimization methods based on a recently developed geometrical spectral imaging model (Wang et al., 2007). First, a scene-independent set of bias voltages is selected to maximize the average signal-to-noise ratio (SNR) of the sensor; then, bias voltages are added or removed based on the captured data. This dynamic optimization is performed throughout the imaging process, so that a balance between information content and data volume is maintained. Because the algorithm is general, the optimization process can be applied to any spectral sensor whose spectral response functions are known.
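To make the first step concrete, the sketch below illustrates one plausible form of the scene-independent selection: a greedy search that picks, from a discretized set of candidate bias voltages with known spectral response functions, the subset that maximizes the average SNR of the resulting measurements. All function names, the greedy strategy, and the simple additive-noise SNR model are our illustrative assumptions, not the method of the paper.

```python
import numpy as np

def average_snr(responses, scene, noise_std):
    """Average SNR over a set of bias settings.

    responses : (k, n_wavelengths) array; row i is the spectral response
                function of the detector at candidate bias voltage i
    scene     : (n_wavelengths,) scene spectral radiance (assumed known
                or estimated for this illustration)
    noise_std : per-measurement noise standard deviation (assumed equal
                for all bias settings in this sketch)
    """
    signals = responses @ scene  # one integrated measurement per bias
    return float(np.mean(signals / noise_std))

def select_biases(all_responses, scene, noise_std, k):
    """Greedily pick k bias settings that maximize average SNR."""
    chosen, remaining = [], list(range(all_responses.shape[0]))
    for _ in range(k):
        best = max(
            remaining,
            key=lambda i: average_snr(
                all_responses[chosen + [i]], scene, noise_std
            ),
        )
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

The same skeleton extends naturally to the dynamic stage described above: after each capture, candidate voltages can be re-scored on the measured data and added to or dropped from the operating set.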