Poster Session T1

Tuesday T1 Poster Session 9:45 – 11:15 am

Signal Processing Applications: Finance, Radio Astronomy, Radar

T1-1 Spectral Analysis of Stock-Return Volatility, Correlation, and Beta

Shomesh Chaudhuri – Massachusetts Institute of Technology, USA; Andrew W. Lo – Massachusetts Institute of Technology and Sloan School of Management, USA

We apply spectral techniques to analyze the volatility and correlation of U.S. common-stock returns across multiple time horizons at the aggregate-market and individual-firm level. Using the cross-periodogram to construct frequency band-limited measures of variance, correlation and beta, we find that volatilities and correlations change not only in magnitude over time, but also in frequency. Factors that may be responsible for these trends are proposed and their implications for portfolio construction are explored.
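
As a rough illustration of the frequency-band decomposition described above (not the authors' estimator), the sketch below computes band-limited variance, covariance, and beta from two return series via the cross-periodogram; the band edges and series are placeholders.

```python
import numpy as np

def band_limited_cov(x, y, f_lo, f_hi, fs=1.0):
    """Band-limited covariance of two series via the cross-periodogram.
    Frequencies are in cycles per sample when fs=1 (e.g., daily returns)."""
    x = np.asarray(x, float) - np.mean(x)
    y = np.asarray(y, float) - np.mean(y)
    n = len(x)
    X, Y = np.fft.fft(x), np.fft.fft(y)
    f = np.fft.fftfreq(n, d=1.0 / fs)
    band = (np.abs(f) >= f_lo) & (np.abs(f) < f_hi)
    # Summing the cross-periodogram over the band recovers the covariance
    # contributed by cycles in that frequency range (Parseval's relation).
    return np.sum((X[band] * np.conj(Y[band])).real) / n**2

def band_limited_beta(r_asset, r_market, f_lo, f_hi):
    # Frequency-band beta: band covariance divided by band market variance.
    return (band_limited_cov(r_asset, r_market, f_lo, f_hi)
            / band_limited_cov(r_market, r_market, f_lo, f_hi))
```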

 

T1-2 Fast Raw Data Simulator of Extended Scenes for Bistatic Forward-looking Synthetic Aperture Radar with Constant Acceleration

Ziqiang Meng, Yachao Li, Mengdao Xing, Zheng Bao – Xidian University, P.R. China

A synthetic aperture radar (SAR) raw data simulator is an important tool for parameter optimization and algorithm testing, particularly for complicated configurations in which real raw data is difficult to obtain. As a new and special imaging mode, bistatic forward-looking SAR with constant acceleration (BFCA-SAR) can perform two-dimensional imaging of targets in the straight-ahead position, which monostatic SAR cannot. However, the range history contains more complicated square-root and high-order terms owing to the high velocities and accelerations of both platforms. In addition, the spatial variance of the phase terms in the two-dimensional frequency spectrum (2-D FS) makes it difficult to generate echo data accurately. In this paper, a fast extended-scene raw data simulator for BFCA-SAR, based on quantitative analysis and effective correction of the phase space variance, is proposed. The method generates raw data with high precision and more efficiently than traditional algorithms.

 

T1-3 Analysis to Distinguish Range Deception Jamming with Kernel Local Fisher Discriminant

Sajjad Abazari Aghdam – Florida Atlantic University, USA; Mahdi Nouri – Michigan University of Technology, Iran

A deception jamming recognition method based on adaptive kernel local Fisher discriminant analysis (KLFDA) is proposed. The digital radio frequency memory (DRFM) in a jammer creates multiple repeated false targets, which are commonly used in practice to defeat the tracking and discrimination units of defense radars. To discriminate between true targets and range-gate pull-off (RGPO) signals, an analytic form of the embedding transformation is used whose solution can be obtained simply by solving a generalized eigenvalue problem. Applying the kernel trick extends the practicality and scalability of the LFDA algorithm to non-linear dimensionality reduction. Experimental results demonstrate that the proposed KLFDA method achieves RGPO deception jamming recognition accuracy greater than 90% when the SNR is higher than 4 dB.
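
The core computational step named above is a generalized eigenvalue problem. As a simplified stand-in (a plain Fisher discriminant rather than the authors' kernel local variant), the sketch below shows that step; the regularization constant and interface are assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def fisher_discriminant(X, labels, n_components=1):
    """Fisher discriminant directions via a generalized eigenvalue problem.
    X: (n_samples, n_features) feature matrix; labels: integer class labels."""
    mu = X.mean(axis=0)
    Sw = np.zeros((X.shape[1], X.shape[1]))   # within-class scatter
    Sb = np.zeros_like(Sw)                    # between-class scatter
    for c in np.unique(labels):
        Xc = X[labels == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - mu, mc - mu)
    # Solve Sb v = lambda Sw v; small ridge keeps Sw positive definite.
    w, V = eigh(Sb, Sw + 1e-6 * np.eye(Sw.shape[0]))
    return V[:, np.argsort(w)[::-1][:n_components]]
```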

 

T1-4 The Cross-Ambiguity Function for Emitter Location and Radar – Practical Issues for Time Discretization

James Schatzman – N-ask Inc. and University of Colorado at Denver, USA

The difference between continuous time and discrete time Cross-Ambiguity Functions can be significant. Both narrow band and wide band CAFs can be computed exactly with discretization, but the usual implementation of the narrow band CAF introduces an error which increases with Frequency Difference of Arrival (FDOA). The error is largest for modulations with non-symmetric CAF plane signatures and for large FDOA values. The wide band CAF does not have this deficiency whether or not a variable delay/variable rate filter is employed, but if the filter is employed, the filter itself introduces a similar error. Simple, relatively low-cost post-processing can largely correct the discretization error for the narrow band CAF; the wide band CAF is more expensive.
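
For reference, the usual discretized narrow-band CAF (the baseline whose FDOA-dependent error the paper analyzes) can be written as a lag-indexed correlation followed by an FFT over time. This generic sketch uses circular integer delays and is not the paper's corrected implementation.

```python
import numpy as np

def narrowband_caf(s1, s2, max_lag, fs):
    """Discrete narrow-band cross-ambiguity surface over integer delays.
    Returns |CAF| with shape (2*max_lag+1, n): rows are delay, columns FDOA bins."""
    n = len(s1)
    lags = np.arange(-max_lag, max_lag + 1)
    caf = np.empty((len(lags), n), dtype=complex)
    for i, tau in enumerate(lags):
        # Circularly delay s2 by tau samples and correlate with s1.
        prod = s1 * np.conj(np.roll(s2, tau))
        # The FFT over time implements the FDOA (frequency-offset) search.
        caf[i] = np.fft.fft(prod)
    fdoa = np.fft.fftfreq(n, d=1.0 / fs)   # FDOA axis in Hz
    return np.abs(caf), lags / fs, fdoa
```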

 

T1-5 You’re Crossing the Line: Localizing Border Crossings Using Wireless RF Links

Peter Hillyard, Neal Patwari, Samira Daruki, Suresh Venkatasubramanian – University of Utah, USA

Detecting and localizing a person crossing a line segment, i.e., a border, provides valuable information for security systems and human context awareness. To that end, we propose a border crossing localization system that uses the changes in measured received signal strength (RSS) on links between transceivers deployed linearly along the border. Any single link has a low signal-to-noise ratio because its RSS also varies due to environmental changes (e.g., branches swaying in the wind), and sometimes does not change significantly when a person crosses it. The redundant, overlapping nature of the links between many possible pairs of nodes in the network provides an opportunity to mitigate errors. We propose new classifiers that use this redundancy to estimate where a person crosses the border. Specifically, the output of these classifiers indicates which pair of neighboring nodes the person crosses between. We demonstrate that in many cases, these classifiers provide more robust border crossing localization than a classifier that excludes the noisy, redundant measurements.

 

T1-6 Compensating for Oversampling Effects in Polyphase Channelizers: A Radio Astronomy Application

John Tuthill, Grant Hampson, John Bunton – Commonwealth Scientific and Industrial Research Organisation, Australia; Frederic J. Harris – San Diego State University, USA; Andrew Brown, Richard Ferris, Timothy Bateman – Commonwealth Scientific and Industrial Research Organisation, Australia

In order to maximize science returns in radio astronomy there is a constant drive to process ever wider instantaneous bandwidths. A key function of a radio telescope signal processing system is to divide a wide input bandwidth into a number of narrow sub-bands for further processing and analysis. The polyphase filter-bank channelizer has become the primary technique for performing this function due to its flexibility and suitability for very efficient implementation in FPGA hardware. Furthermore, oversampling polyphase filter-banks are gaining popularity in this role due to their ability to reduce spectral image components in each sub-band to very low levels for a given prototype filter response. A characteristic of the oversampling operation in a polyphase filterbank, however, is that the resulting sub-band outputs are in general no longer band-centered on DC (as is the case for a maximally decimated filterbank) but are shifted by an amount that depends on the index of the sub-band. In this paper we present the structure of the oversampled polyphase filterbank used for the new Australian Square Kilometre Array Pathfinder (ASKAP) radio telescope and describe a technique used to correct for the sub-band frequency shift brought about by oversampling.
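
A common way to correct the channel-dependent shift described above is a per-channel phase derotation of the channelizer outputs. The sketch below shows the generic form for an M-channel filterbank decimated by D; the sign and exact phase convention depend on the particular filterbank implementation and are assumptions here, not details of the ASKAP design.

```python
import numpy as np

def derotate_subbands(subband, M, D):
    """Remove the channel-dependent frequency shift of an oversampled PFB.
    subband: (M, T) channelizer outputs (channel k, output sample m);
    M: number of channels; D: decimation factor (D < M when oversampling)."""
    k = np.arange(M)[:, None]                  # channel index
    m = np.arange(subband.shape[1])[None, :]   # output-sample index
    # In a maximally decimated bank (D == M) this ramp is identically 1;
    # with D < M each channel is shifted and must be counter-rotated.
    # Sign convention is implementation-dependent (assumed here).
    return subband * np.exp(-2j * np.pi * k * D * m / M)
```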

 

T1-7 Multi-Tier Interference-Cancelling Array Processing for the ASKAP Radio Telescope

Richard A. Black, Brian D. Jeffs, Karl Warnick – Brigham Young University, USA; Gregory Hellbourg, Aaron Chippendale – Commonwealth Scientific and Industrial Research Organisation, Australia

The ASKAP radio telescope in Australia is the first synthesis imaging array to use phased-array feeds (PAFs). These permit wider fields of view and new modalities for radio-frequency interference (RFI) mitigation. Previous work on imaging-array RFI cancellation has assumed that processing bandwidths are very narrow, and correlator integration times are short. However, these assumptions do not necessarily reflect real-world instrument limitations. This paper explores adaptive array cancellation algorithm effectiveness on ASKAP for realistic bandwidths and integration times. With ASKAP’s beamforming PAFs on each dish, followed by a central correlation processor across beamformed signals from all dishes, one may consider algorithms that span multiple levels in the hierarchical signal processing chain. We compare performance for several subspace-projection-based algorithms applied to different tiers of this extended architecture. Simulation results demonstrate that it is most effective to cancel at the PAF beamformers.
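
A minimal form of the subspace-projection idea compared in the paper, assuming the interference rank is known, removes the dominant eigenvectors of the array sample covariance; the interface below is an illustrative sketch, not the paper's multi-tier algorithms.

```python
import numpy as np

def subspace_project(R, n_rfi):
    """Project the dominant (assumed RFI) subspace out of an array covariance.
    R: (N, N) Hermitian sample covariance; n_rfi: assumed interference rank."""
    w, V = np.linalg.eigh(R)              # eigenvalues in ascending order
    U = V[:, -n_rfi:]                     # dominant eigenvectors span the RFI subspace
    P = np.eye(R.shape[0]) - U @ U.conj().T
    return P @ R @ P.conj().T             # covariance with the RFI subspace removed
```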

 

T1-8 A Reconfigurable Optically Connected Beamformer and Correlator Processing Node for SKA

Grant Hampson, John Tuthill, Andrew Brown, John Bunton, Timothy Bateman – Commonwealth Scientific and Industrial Research Organisation, Australia

For many decades Digital Signal Processing (DSP) nodes have been designed for processing digital data received from arrays of radio telescopes. Common threads in all these nodes are: digital communications, processing and memory. Fundamentally the aim of each system was to provide the greatest operational capability for the technology available at that time. As the systems grew in size it became apparent that a key performance indicator was how processing nodes communicated. Poor communication could result in delayed schedules, reduced operational performance and higher system costs. The Square Kilometre Array (SKA) project represents a quantum leap in system size relative to current radio astronomy telescopes. This paper explores current work in this area and introduces the possibility of a fully optically connected processing and memory node. Such a node could be utilized for multi-stage polyphase filterbanks, beamforming and correlation. The application presented here is radio astronomy, but it could also be applied to defence and telecommunication systems.

 

T1-9 Cancelling non-linear processing products due to strong out-of-band interference in radio astronomical arrays

Yifeng Wu, Richard A. Black, Brian D. Jeffs – Brigham Young University, USA

Radio astronomy instrumentation uses phased array feeds to provide radio telescopes with wider fields of view and enhanced beam control for detection and interference suppression. The standard assumption in radio astronomy is that receiver amplifiers operate in a linear region. In the presence of strong radio-frequency interference (RFI), however, it is possible to drive the amplifiers into their non-linear region. This can cause out-of-band RFI to generate non-linear products and mix harmonics into the filter passband. In this scenario, classical RFI-mitigating beamformers may be far less effective at suppressing the interference. This paper analyzes the effectiveness of several beamformers in suppressing interference resulting from non-linear amplifiers. Experimental results show that a subspace projection beamformer is able to suppress the interference despite the non-linear RFI.
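
To illustrate the mechanism (not the authors' experiment), a memoryless third-order amplifier model applied to two strong out-of-band tones produces intermodulation products at 2f1 - f2 and 2f2 - f1, which can land inside the analysis passband; the tone frequencies and nonlinearity coefficient below are arbitrary.

```python
import numpy as np

fs = 1.0e6                       # sample rate (arbitrary)
t = np.arange(8192) / fs
f1, f2 = 300e3, 320e3            # two strong out-of-band RFI tones (assumed)
x = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)

# Memoryless third-order nonlinearity as a crude amplifier-compression model.
y = x + 0.05 * x**3

# The cubic term creates intermodulation products at 2*f1 - f2 = 280 kHz and
# 2*f2 - f1 = 340 kHz (plus harmonics), i.e. energy at frequencies where the
# original RFI had none -- products a classical beamformer must now handle.
spectrum = np.abs(np.fft.rfft(y * np.hanning(len(y))))
freqs = np.fft.rfftfreq(len(y), d=1.0 / fs)
```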

 

T1-10 Subspace smearing and interference mitigation with array radio telescopes

Gregory Hellbourg – Commonwealth Scientific and Industrial Research Organisation, Australia

Array radio telescopes are well suited to the implementation of spatial filters. These filters offer the advantage of canceling radio frequency interference (RFI) while recovering uncorrupted time-frequency data of interest to astronomers. Although information about the RFI sources may be known a priori or reliably inferred, the complexity of radio telescope systems, together with a lack of calibration or characterization, makes the subspace spanned by the RFI effectively random. This knowledge is nevertheless necessary for building an effective spatial filter and must therefore be estimated. A classic approach to estimating a signal subspace in array signal processing is to eigendecompose the array Sample Covariance Matrix (SCM) in order to separate the signal subspace from the noise subspace. The SCM is evaluated over a finite set of array data vector samples, i.e., over a short observation time. Relative motion between the telescope and the interferer during the integration smears the RFI subspace, increasing its dimensionality. This smearing degrades RFI subspace estimates based on a low-rank approximation of the SCM and, in turn, the quality of the associated spatial filter. This paper analyzes the RFI subspace smearing effect and shows an example of its impact on RFI mitigation for radio astronomy.
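
As a simplified sketch of the subspace-estimation step (not the paper's analysis), the RFI rank can be read off the eigenvalue spectrum of the SCM; the threshold and noise-floor estimate below are assumptions. Subspace smearing shows up as additional eigenvalues rising above the noise floor as the integration time grows.

```python
import numpy as np

def rfi_rank(X, threshold=10.0):
    """Estimate the RFI subspace dimension from a sample covariance matrix.
    X: (N_antennas, N_samples) array snapshots."""
    R = X @ X.conj().T / X.shape[1]          # sample covariance matrix (SCM)
    w = np.sort(np.linalg.eigvalsh(R))[::-1]
    noise_floor = np.median(w)               # crude noise-power estimate
    # Eigenvalues well above the noise floor are attributed to RFI; a moving
    # interferer "smears" across several eigenvectors and inflates this count.
    return int(np.sum(w > threshold * noise_floor))
```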

 

Compressive Sensing and Reconstruction

 

T1-11 Approximate Regularization Paths for Nuclear Norm Minimization Using Singular Value Bounds

Niclas Blomberg, Cristian Rojas, Bo Wahlberg – KTH – Royal Institute of Technology, Sweden

The widely used nuclear norm heuristic for rank minimization problems introduces a regularization parameter which is difficult to tune. We have recently proposed a method to approximate the regularization path, i.e., the optimal solution as a function of the parameter, which requires solving the problem only for a sparse set of points. In this paper, we extend the algorithm to provide error bounds for the singular values of the approximation. We exemplify the algorithms on large scale benchmark examples in model order reduction. Here, the order of a dynamical system is reduced by means of constrained minimization of the nuclear norm of a Hankel matrix.
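
For context, the standard building block behind nuclear-norm regularization is singular-value soft-thresholding, the proximal operator of the nuclear norm; the sketch below sweeps the regularization parameter over a few values on a random Hankel matrix. This is not the authors' path-approximation algorithm or their constrained formulation.

```python
import numpy as np
from scipy.linalg import hankel

def svt(M, lam):
    """Singular-value soft-thresholding: prox of lam * (nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

# Sweeping lam traces a crude regularization path: larger lam gives a
# lower-rank approximation of the Hankel matrix built from a sequence h.
h = np.random.randn(64)
H = hankel(h[:32], h[31:])
for lam in (0.1, 1.0, 5.0):
    print(lam, np.linalg.matrix_rank(svt(H, lam), tol=1e-8))
```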

 

T1-12 Learning Anomalous Features via Sparse Coding Using Matrix Norms

Bradley Whitaker, David Anderson – Georgia Institute of Technology, USA

Our goal is to find anomalous features in a dataset using the sparse coding concept of dictionary learning. Rather than using the averaged column l2 (vector) norm for the dictionary update as is typically done in sparse coding, we explore using three matrix norms: l1, l2, and l-infinity matrix norms. Minimizing the matrix norms represents minimizing a maximum deviation in the reconstruction error rather than an average deviation, hopefully allowing us to find features that contribute significantly but infrequently to sample training points. We find that while solving for the dictionaries using matrix norm minimization takes longer to compute, all three methods are able to recover a known basis from a simple set of training data. In addition, the l1 matrix norm is able to recover a known anomalous feature in the training data that the other norms (including the standard averaged vector l2 norm) are unable to find.
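
For clarity, the three induced matrix norms mentioned above measure a worst-case rather than an average deviation of the reconstruction error; a quick numerical comparison with random placeholder data is:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 200))   # data matrix (placeholder)
D = rng.standard_normal((64, 100))   # dictionary (placeholder)
A = rng.standard_normal((100, 200))  # codes (placeholder, not actually sparse here)
E = X - D @ A                        # reconstruction error matrix

print(np.mean(np.linalg.norm(E, axis=0)))  # averaged column l2 norm (usual criterion)
print(np.linalg.norm(E, 1))                # induced l1 norm: max absolute column sum
print(np.linalg.norm(E, 2))                # spectral (l2) norm: largest singular value
print(np.linalg.norm(E, np.inf))           # induced l-infinity norm: max absolute row sum
```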

 

T1-13 Sparse Recovery Using an SVD Approach to Interference Removal and Parameter Estimation

Charles Hayes, James H. McClellan, Waymond R. Scott, Jr. – Georgia Institute of Technology, USA

This work focuses on parametric sparse sensing models and looks to improve L1 regularization results when the model dictionary is strongly coherent and/or regularization parameters are unknown. The singular value decomposition (SVD) of the model’s dictionary matrix is used to construct signal and noise subspaces. A method that uses the measurements to automatically optimize the subspace division along with a way to estimate the noise level is introduced. The signal-noise subspace decomposition is then extended to deal with an interfering signal that lies in a known linear subspace by modifying the SVD and performing the sparse recovery in the modified signal subspace. The proposed technique is applied successfully to the Discrete Spectrum of Relaxation Frequencies (DSRF) extraction problem for Electromagnetic Induction (EMI) underground sensing where a strong interference from the soil is a significant concern.
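
A bare-bones version of the signal/noise subspace split via the dictionary SVD might look like the following; the fixed subspace dimension r and the noise estimate are simplifications of the automatic subspace-division procedure the paper introduces, and the interference-handling modification is not shown.

```python
import numpy as np

def split_subspaces(Phi, y, r):
    """Split measurements into signal/noise subspaces using the SVD of the dictionary.
    Phi: (m, n) dictionary; y: (m,) measurements; r: assumed signal-subspace dimension."""
    U, s, Vt = np.linalg.svd(Phi, full_matrices=True)
    Us, Un = U[:, :r], U[:, r:]
    y_sig = Us.conj().T @ y                  # component explained by the dictionary
    sigma_est = np.linalg.norm(Un.conj().T @ y) / np.sqrt(max(Un.shape[1], 1))
    # Sparse recovery (e.g., L1 regularization) can then be run on the projected
    # system (Us^H Phi, y_sig) using sigma_est as the noise level.
    return Us.conj().T @ Phi, y_sig, sigma_est
```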

 

T1-14 Multi-Frame Super-Resolution for Mixed Gaussian and Impulse Noise based on Blind Inpainting

Ismael Silva, Boris Mederos-Madrazo, Leticia Ortega Maynez – Universidad Autonoma de Ciudad Juarez, Mexico; Sergio D. Cabrera – University of Texas at El Paso, USA

This paper proposes a robust multi-frame super-resolution algorithm to produce a high-resolution image. In merging the nonredundant information from shifted, rotated, blurred, noise-corrupted, low-resolution observations of the same scene, the approach registers the frames and reduces the impact of the distortions. The method is a generalization of a recently published blind inpainting algorithm to the multi-frame super-resolution case, including both Gaussian and impulse noise. Most multi-frame super-resolution algorithms consider only blurring and Gaussian noise, ignoring noise types that arise in real applications, such as in time-of-flight camera depth images. Examples on simulated scenarios and real images produce results that compare favorably with other methods and clearly justify the benefits of this imaging model and the reconstruction method presented.

 

T1-15 Polarimetric target decomposition based on sparse attributed scattering center base decomposition

Jia Duan – School of Communication Engineering, Hangzhou Dianzi University, P.R. China; Lei Zhang – National Laboratory of Radar Signal Processing, Xidian University, P.R. China; Yifeng Wu – Brigham Young University, USA

In order to guarantee the completeness of distributed target components in polarimetric target decomposition (PTD) under low-SNR circumstances, a novel PTD method based on the attributed scattering center (ASC) is proposed for man-made targets in SAR/ISAR images. By decomposing the signal of a target of interest onto the sparse ASC basis, the polarimetric characteristics of targets can be exploited by performing PTD on the extracted ASC coefficients rather than on pixels, as in conventional PTD algorithms. Experimental results confirm the improvement achieved by the proposed algorithm.

 

T1-16 Fast Imaging In Cannula Microscope Using Orthogonal Matching Pursuit

Ahmad Zoubi, Kishan Supreet Alguri, Ganghun Kim, V. John Mathews, Rajesh Menon, Joel Harley – University of Utah, USA

Fluorescence microscopy is a state-of-the-art method for creating high-contrast and high-resolution images of microscopic structures and has found wide application in microendoscopy (i.e., imaging cellular information from an optical probe within an animal). Cannula-based microscopy methods have recently shown great promise for efficient microendoscopy imaging. Yet real-time imaging with cannula methods has yet to be achieved due to the high computational complexity of the algorithms used for image reconstruction. We present an approach based on compressive sensing to improve computational speed and image reconstruction quality. We compare our approach with the state-of-the-art implementation based on direct binary search, a non-linear optimization technique. Results included in the paper demonstrate up to a 70-fold improvement in computation time and improved visual quality of the image compared to the direct binary search method.
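
Orthogonal matching pursuit itself is standard; a compact reference implementation (not the authors' cannula-specific reconstruction pipeline) is sketched below.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedy k-sparse solution of y ~= A x.
    A: (m, n) sensing/calibration matrix; y: (m,) measurement; k: sparsity level."""
    r = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.conj().T @ r)))
        support.append(j)
        # Least-squares fit on the selected support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
    x[support] = coef
    return x
```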

 

T1-17 On The Block-Sparsity Of Multiple-Measurement Vectors

Mohammad Shekaramiz, Todd Moon, Jake Gunther – Utah State University, USA

Based on compressive sensing (CS) theory, it is possible to recover signals that are either sparse or compressible under some suitable basis from a small number of non-adaptive linear measurements. In this paper, we investigate the recovery of block-sparse signals from multiple measurement vectors (MMVs) in the presence of noise. We consider an existing algorithm that provides a satisfactory estimate in terms of minimum mean-squared error but yields a non-sparse solution. The algorithm is first modified to produce sparse solutions, and then further modified to account for the unknown block-sparsity structure of the solution as well. The performance of the proposed algorithm is demonstrated by experimental simulations and comparisons with other algorithms for the sparse recovery problem.
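
The measurement model underlying this setting can be written as Y = A X + N with X row-sparse in contiguous blocks shared across all measurement vectors; the sketch below only generates synthetic data of that form (dimensions and block layout are arbitrary), which is the kind of problem the modified algorithm is evaluated on.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, L, block = 100, 40, 8, 5                 # signal length, measurements, MMV count, block size

A = rng.standard_normal((m, n)) / np.sqrt(m)   # sensing matrix
X = np.zeros((n, L))
active_blocks = rng.choice(n // block, size=3, replace=False)
for b in active_blocks:
    # Each active block occupies the same rows across all L measurement vectors
    # (the joint, block-sparse support that the recovery algorithm exploits).
    X[b * block:(b + 1) * block, :] = rng.standard_normal((block, L))

noise = 0.01 * rng.standard_normal((m, L))
Y = A @ X + noise                              # multiple measurement vectors
```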

 

T1-18 Dynamic Model Generation for Application of Compressed Sensing to Cryo-Electron Tomography Reconstruction

Sally Wood – Santa Clara University, USA; Ernesto Fontenla – Baylor College of Medicine, USA; Christopher Metzler – Rice University, USA; Wah Chiu – Baylor College of Medicine, USA; Richard Baraniuk – Rice University, USA

Cryo-electron tomography (cryo-ET), which produces three-dimensional images at molecular resolution, is one of many applications that require image reconstruction from projection measurements acquired with irregular measurement geometry. Although Fourier transform based reconstruction methods have been widely and successfully used in medical imaging for over 25 years, their assumptions of regular measurement geometry and a band-limited source cause direction-sensitive artifacts when they are applied to cryo-ET. Iterative space-domain methods such as compressed sensing could be applied to this severely underdetermined system with a limited range of projection angles and projection lengths, but progress has been hindered by the computational and storage requirements of the very large projection matrix of observation partials. In this paper we derive a method of dynamically computing the elements of the projection matrix accurately for any basis functions of limited extent with arbitrary beam width. Storage requirements are reduced by a factor of about 1000 and there is no access overhead. This approach, for limited-angle and limited-view measurement geometries, will enable improved reconstruction performance.
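
To make the storage trade-off concrete, a projection value can be accumulated by generating the weights of one row of the projection matrix on demand rather than storing the matrix; the pixel basis and triangular beam footprint below are illustrative assumptions, not the paper's basis-function formulation.

```python
import numpy as np

def project_on_the_fly(image, theta, t, beam_width=1.0):
    """Projection value for one ray (angle theta, detector offset t), computed by
    generating the weights A[i, j] on demand instead of storing the matrix.
    Weight model: triangular footprint in the signed distance from pixel center to ray."""
    ny, nx = image.shape
    ys, xs = np.mgrid[0:ny, 0:nx]
    # Signed distance from each pixel center to the ray x*cos(theta) + y*sin(theta) = t.
    d = xs * np.cos(theta) + ys * np.sin(theta) - t
    w = np.clip(1.0 - np.abs(d) / beam_width, 0.0, None)  # one row of A, never stored
    return float(np.sum(w * image))
```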