
INSTITUT POLYTECHNIQUE DE GRENOBLE

Number assigned by the library: 978-2-84813-132-0

THESIS

to obtain the degree of

DOCTOR of the Institut Polytechnique de Grenoble

Specialty: Signal, Image, Speech, Telecommunications

prepared at the TIMA laboratory within the Doctoral School « Electronique, Electrotechnique, Automatique, Télécommunications, Signal »

presented and publicly defended

by

Saeed MIAN QAISAR

on 5 May 2009

TITLE:

Signal-Driven Sampling and Processing: A Promising Approach for Computationally Efficient Adaptive-Rate Solutions

(French title: Echantillonnage et Traitement Conditionnés par le Signal : Une Approche Prometteuse pour des Traitements Efficaces à Pas Adaptatifs)

THESIS DIRECTOR: Mr. Laurent FESQUET    THESIS CO-DIRECTOR: Mr. Marc RENAUDIN

JURY

M. Pierre-Yves COULON, President
M. Olivier SENTIEYS, Reviewer
M. Modris GREITANS, Reviewer
M. Gilles FLEURY, Reviewer
M. Marc RENAUDIN, Thesis co-director
M. Laurent FESQUET, Thesis director


Signal Driven Sampling and Processing: A Promising Approach for Computationally Efficient Adaptive Rate Solutions

By

Saeed MIAN QAISAR


ABSTRACT

Recent advances in mobile systems and sensor networks demand more and more processing resources. In order to maintain system autonomy, energy saving has become one of the most difficult industrial challenges in mobile computing. Most efforts toward this goal focus on improving embedded system design and battery technology, but very few studies aim to exploit the time-varying nature of the input signal. Almost all real-world signals are non-stationary in nature. We therefore aim to achieve power efficiency by smartly adapting the system processing activity to the local characteristics of the input signal. This is done by completely rethinking the existing processing chain and devising a new one that employs a smart combination of non-uniform and uniform signal processing tools. Sampling, activity selection and local parameter extraction are the basis of the proposed processing chain. The approach is based on level-crossing sampling, which lets the input signal itself drive the system sampling frequency and the processing activity. The adopted sampling scheme produces samples that are non-uniformly spaced in time, which allows easy activity selection and local feature extraction from the input signal.
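Since the whole chain hinges on level-crossing sampling, a minimal software sketch may help fix the idea: a sample is recorded whenever the input crosses one of a set of amplitude levels, so the sampling density follows the signal's local activity. This is only an illustrative simulation under assumed parameters (the function name, the chirp test signal and the nine-level grid are assumptions of this sketch, not taken from the thesis), not the asynchronous hardware converter developed later.

```python
import numpy as np

def level_crossing_sample(t, x, levels):
    """Return (times, values) where the signal x(t) crosses a quantization level.

    Crossing instants are located by linear interpolation between the two
    uniform samples bracketing each crossing, so the output samples are
    non-uniformly spaced in time.
    """
    times, values = [], []
    for k in range(len(x) - 1):
        lo, hi = sorted((x[k], x[k + 1]))
        for lv in levels:
            if lo <= lv < hi:
                # linear interpolation for the crossing instant
                frac = (lv - x[k]) / (x[k + 1] - x[k])
                times.append(t[k] + frac * (t[k + 1] - t[k]))
                values.append(lv)
    return np.array(times), np.array(values)

# A chirp-like test signal: the sampling density follows the local activity.
t = np.linspace(0.0, 1.0, 10_000)
x = np.sin(2 * np.pi * (1.0 + 4.0 * t) * t)
levels = np.linspace(-1.0, 1.0, 9)          # 3-bit-style level grid
ts, vs = level_crossing_sample(t, x, levels)

dt = np.diff(ts)                            # non-uniform inter-sample intervals
print(f"{len(ts)} non-uniform samples; dt in [{dt.min():.2e}, {dt.max():.2e}] s")
```

Running this on the chirp shows the crossing instants bunching where the instantaneous frequency is high, which is precisely the property that the activity selection and local feature extraction stages exploit.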

In this context, a novel technique is devised for analyzing the spectrum of a non-uniformly sampled signal. A comparison with the existing techniques shows that the proposed technique outperforms them in terms of computational efficiency and spectral quality.

Based upon the proposed processing chain, adaptive rate filtering and adaptive resolution analysis techniques are devised. The proposed solutions adapt their processing load and time-frequency resolution to the input signal. It is demonstrated that they achieve drastic computational gains while providing results of appropriate quality compared to their classical counterparts. The processing chain performance is also characterized in terms of its effective resolution. It is shown that, for an appropriate choice of time resolution and interpolation order, the proposed solution achieves a higher effective resolution than its classical counterpart.

Keywords—Level Crossing Sampling, Asynchronous Design, Activity Selection, Adaptive Rate Filtering,

Adaptive Resolution Analysis, Computational Complexity.

Saeed Mian Qaisar Grenoble INP i


RÉSUMÉ

Recent advances in the field of mobile systems and sensor networks demand more and more processing resources. In order to maintain the autonomy of these systems, minimizing energy consumption has become one of the most difficult industrial challenges. The majority of the efforts to reach this goal focus on improving embedded system design techniques and battery technology, but very few studies seek to exploit the nature of the input signal. Starting from this observation, we have improved energy efficiency by adapting the system activity to the local characteristics of the input signal. To do so, we have completely rethought the existing processing chain and devised a new processing scheme that exploits a clever combination of non-uniform and uniform signal processing tools. Sampling, selection of the active parts of the signal and extraction of local parameters are the foundations of the proposed processing chain. The sampling scheme retained is of the level-crossing type and delivers data non-uniformly spaced in time. It also makes it possible to locate the active regions of the signal. The signal activity thus conditions the sampling scheme and informs us about the frequencies contained in the active regions, which can be used for resampling or for spectral analysis.

In this context, a new technique is proposed for analyzing the spectrum of non-uniformly sampled signals. Compared with the usual techniques, this technique demonstrates better performance in terms of computational efficiency and spectral quality.

On the basis of the proposed processing chain, adaptive rate filtering and adaptive resolution analysis techniques are developed. The proposed solutions are non-stationary in nature and adapt their processing load and time-frequency resolution to the input signal. It is demonstrated that these techniques allow drastic computational gains while providing results of appropriate quality compared with the classical approaches.

The proposed processing chain has also been characterized in terms of its effective resolution. Moreover, it appears that, for an appropriate choice of the time resolution and the interpolation order, the proposed solution reaches a higher effective resolution than that of the classical approach.

Keywords — Level-crossing sampling, asynchronous design, activity selection, adaptive rate filtering, adaptive resolution spectral analysis, computational complexity.

Saeed Mian Qaisar Grenoble INP ii


ACKNOWLEDGEMENT

During my thesis I have gained much in the way of knowledge. Any thesis is likely to have been influenced by many people, and, being blessed, I also rubbed shoulders with many promising researchers along the way.

I have carried out my PhD work in the Techniques de l’Informatique et de la Microélectronique pour

l’Architecture des systèmes intégrés (TIMA) laboratory of the Institut Polytechnique de Grenoble

(Grenoble INP), France, since September 2005. It is my pleasure to reserve this page as a symbol of

gratitude to all those who helped me reach this milestone.

Saying thanks to one's PhD advisor is quite common in acknowledgements. I dabbled in the vast vocabulary of French, English and Urdu (my mother tongue), but words like Merci, Thanks and Shukria suddenly lost their force. Prof. Marc Renaudin was my PhD advisor for the first three years and became my co-advisor for the last few months. In fact, he deserves appreciation beyond my ability of expression. I owe him an enormous debt of gratitude for the confidence he accorded me by accepting to supervise this thesis. I express my warm thanks for all his precious support, valuable guidance and consistent encouragement throughout the course of my PhD.

I am greatly indebted to Dr. Laurent Fesquet, my present PhD advisor, for his friendly attitude and his technical and administrative support throughout this research challenge. His constructive remarks improved my skills in solving problems, writing algorithms and writing research papers.

I am grateful to the thesis jury members for the precious time they sacrificed for me. Thanks to Prof. Pierre-Yves Coulon for presiding over my thesis jury. I express my sincere gratitude to the thesis reviewers, Prof. Olivier Sentieys, Prof. Gilles Fleury and Dr. Modris Greitans, for their painstaking work. Their useful comments and remarks have certainly improved the quality of my thesis.

My sincere acknowledgments also go to Prof. Dominique Dallet and his team for their positive support. Their valuable suggestions helped me improve the structure and organization of the thesis presentation.

I would like to express my deepest gratitude to all my family members and especially to my parents

Mr. & Mrs. Mian Khalid Saeed, my wife Rabia and my little son Ali, whose prayers, love and

continuous support are my gospels of encouragement.

I am warmly thankful to all my colleagues and teachers, whose friendship provided a base for the accomplishment of this work. I cherish the wonderful time we spent together. I also thank all the administrative staff of the TIMA laboratory and the Ecole Doctorale EEATS, who have always been very kind to me and helped me get things done smoothly.

Saeed Mian Qaisar

May 2009

Grenoble, France


CONTENTS

List of Figures xi

List of Tables xiv

Chapter 1 Introduction

1.1 Context of the Study 1

1.2 Organization 2

Part-I Non-Uniform Signal Acquisition

Chapter 2 Non-Uniform Sampling

2.1 The Sampling Process 4

2.2 The Uniform Sampling 5

2.3 The Non-Uniform Sampling 6

2.3.1 Jittered Random Sampling (JRS) 7

2.3.2 Additive Random Sampling (ARS) 8

2.3.3 Uniform Sampling with Random Skip 9

2.3.4 Additive Pseudorandom Sampling (APRS) 9

2.3.5 Signal Crossing Sampling (SCS) 11

2.3.6 Level Crossing Sampling (LCS) 12

2.4 Resemblance between Different Sampling Schemes 13

2.4.1 Resemblance between the Uniform Sampling and the ARS 13

2.4.2 Resemblance between the APRS and the LCS 13

2.5 The Sampling Non-Uniformity as a Tool 14

2.6 Why Level Crossing Sampling Scheme (LCSS)? 15

2.7 Sampling Criterion and Signal Reconstruction 16

2.8 Conclusion 17


Chapter 3 Analog to Digital Conversion

3.1 The A/D Conversion 18

3.2 Performance Evaluation Parameters 21

3.2.1 Static Parameters 21

3.2.1.1 Offset Error 22

3.2.1.2 Gain Error 22

3.2.1.3 DNL (Differential Nonlinearity) 23

3.2.1.4 INL (Integral Nonlinearity) 23

3.2.2 Dynamic Parameters 24

3.2.2.1 SNR (Signal to Noise Ratio) 24

3.2.2.2 SFDR (Spurious Free Dynamic Range) 26

3.2.2.3 THD (Total Harmonic Distortion) 27

3.2.2.4 FoM (Figure of Merit) 27

3.3 The ADC Architectures 28

3.3.1 Nyquist rate ADCs 28

3.3.1.1 Flash ADC 28

3.3.1.2 SAR (Successive Approximation Register) ADC 30

3.3.1.3 Integrating ADC 31

3.3.1.4 Some Other Realizations 32

3.3.2 Oversampling ADCs 33

3.3.2.1 The Oversampling Effect on the ADC SNR 33

3.3.2.2 Classical Oversampling ADCs 34

3.3.2.3 Sigma-Delta ADCs 35

3.4 ADCs performance 38

3.5 Conclusion 40

Chapter 4 Level Crossing Analog to Digital Conversion

4.1 The Level Crossing A/D Conversion 41

4.2 Principle of the LCADC 42

4.2.1 The LCADC SNR for Different Signals 44

4.2.1.1 Monotone Sinusoid 44

4.2.1.2 Signal with Constant Spectral Density 45


4.2.1.3 Speech Signal 46

4.2.1.4 Audio Signal 47

4.2.2 The Sampling Criterion and the Tracking Condition 48

4.3 The Level Crossing A/D Conversion Realization 49

4.3.1 Synchronous Level Crossing A/D Conversion 49

4.3.2 Asynchronous Level Crossing A/D Conversion 51

4.3.2.1 AADC (Asynchronous Analog to Digital Converter) 51

4.3.2.2 LCF-ADC (Level Crossing Flash A/D Converter) 54

4.3.2.3 Microprocessor Based Level Crossing A/D Converter 56

4.4 LCADCs Comparison with Other Oversampling Converters 58

4.5 Conclusion 59

Part-II Proposed Techniques

Chapter 5 Activity Selection

5.1 The Windowing Process 61

5.1.1 ASA (Activity Selection Algorithm) 63

5.1.2 EASA (Enhanced Activity Selection Algorithm) 64

5.2 Features of the ASA and the EASA 65

5.3 Conclusion 68

Chapter 6 Spectral Analysis of the Non-Uniformly Sampled Signal

6.1 Spectral Analysis 70

6.1.1 Spectral Leakage 72

6.1.2 Spectral Analysis of the Non-Uniformly Sampled Signal 73

6.1.2.1 GDFT (General Discrete Fourier Transform) 73

6.1.2.2 Lomb’s Algorithm 75

6.2 Spectral Analysis of the Level Crossing Sampled Signal 77

6.2.1 Spectral Analysis Based on Activity selection and Resampling 78

6.2.1.1 Adaptive Rate Sampling 79

6.2.1.2 NNRI (Nearest Neighbour Resampling Interpolation) 80

6.2.1.3 Adaptive Shape Windowing 82


6.2.2 Illustrative Example 82

6.2.2.1 Comparison of the Proposed Technique with the GDFT and the Lomb’s Algorithm 84

6.2.2.2 Comparison of the Proposed Technique with the Classical Approach 87

6.2.2.3 Mean Square Deviation 89

6.2.2.4 Resampling Error 90

6.3 Conclusion 90

Chapter 7 Signal Driven Adaptive Rate Filtering

7.1 The Filtering Process 92

7.2 The Multirate Filtering 94

7.2.1 Decimation 94

7.2.2 Interpolation 95

7.3 The Adaptive Rate Filtering 96

7.3.1 Adaptive Rate Sampling 97

7.3.2 The ARCD (Activity Reduction by Chosen Filter Decimation) Technique 98

7.3.2.1 1st Filtering Case 100

7.3.2.2 2nd Filtering Case 100

7.3.3 The ARCR (Activity Reduction by Chosen Filter Resampling) Technique 101

7.3.4 The ARRD (Activity Reduction by Reference Filter Decimation) Technique 102

7.3.4.1 1st Filtering Case 103

7.3.4.2 2nd Filtering Case 104

7.3.5 The ARRR (Activity Reduction by Reference Filter Resampling) Technique 105

7.4 Illustrative Example 106

7.5 Computational Complexity 109

7.5.1 Complexity of the Classical FIR Filtering 109

7.5.2 Complexity of the ARCD Technique 110

7.5.3 Complexity of the ARCR Technique 111

7.5.4 Complexity of the ARRD Technique 111

7.5.5 Complexity of the ARRR Technique 112

7.5.6 Complexity Comparison of the Proposed Techniques with the Classical Approach 112


7.5.7 Complexity Comparison among the Proposed Techniques 114

7.5.7.1 Comparison between the ARCD and the ARCR Techniques 114

7.5.7.2 Comparison between the ARRD and the ARRR Techniques 114

7.6 Processing Error 115

7.6.1 Approximation Error 115

7.6.2 Filtering Error 116

7.7 Enhanced Adaptive Rate Filtering Techniques 117

7.7.1 EARD (Enhanced Activity Reduction by Filter Decimation) Technique 118

7.7.1.1 1st Filtering Case 118

7.7.1.2 2nd Filtering Case 119

7.7.1.3 3rd Filtering Case 119

7.7.2 EARR (Enhanced Activity Reduction by Filter Resampling) Technique 119

7.7.3 ARDI (Activity Reduction by Filter Decimation/Interpolation) Technique 120

7.7.4 Complexity of the EARD, the EARR and the ARDI Techniques 122

7.7.5 Complexity Comparison with the Classical Approach 123

7.7.6 Complexity Comparison with the ARCD, the ARCR, the ARRD and the ARRR Techniques 124

7.7.7 Complexity Comparison among the EARD, the EARR and the ARDI Techniques 125

7.7.8 Processing Error of the EARD, the EARR and the ARDI Techniques 125

7.7.8.1 Approximation Error 125

7.7.8.2 Filtering Error 126

7.8 Adaptive Rate Filter Architecture 126

7.9 Conclusion 128

Chapter 8 Signal Driven Adaptive Resolution Analysis

8.1 Time Frequency Analysis 129

8.2 Proposed Adaptive Resolution Short-Time Fourier Transform 131

8.2.1 Adaptive Rate Sampling 132

8.2.2 Adaptive Shape Windowing 133

8.2.3 Adaptive Resolution Analysis 133

8.3 Illustrative Example 134

8.4 Computational Complexity 137


8.5 Resampling Error 139

8.6 Conclusion 139

Part-III Design and Performance Evaluation

Chapter 9 Effective Resolution of an Adaptive Rate Analog to Digital Converter

9.1 The SNR (Signal to Noise Ratio) 141

9.1.1 Theoretical SNR of an ADC 142

9.1.2 Practical SNR of an ADC 142

9.1.3 The ADC Effective Resolution 143

9.2 The ARADC SNR 144

9.2.1 The LCADC SNR 145

9.2.1.1 Theoretical SNR of a LCADC 145

9.2.1.2 Practical SNR of a LCADC 145

9.2.2 The Activity Selection Algorithm SNR 148

9.2.3 The Resampler SNR 148

9.3 The Simulation Results 149

9.3.1 The SNR of an ideal LCADC 149

9.3.2 The SNR of a practical LCADC 151

9.3.3 The SNR of an ARADC 153

9.4 Conclusion 156

Chapter 10 Performance of the Proposed Techniques for Real Signals

10.1 The Signal Driven Adaptive Rate Filtering 157

10.2 The Adaptive Resolution Short-Time Fourier Transform 160

10.3 The Adaptive Rate Analog to Digital Converter 162

10.4 Conclusion 165


Conclusion and Prospects 166

Publications 169

Annex-I Asynchronous Circuits Principle 171

Bibliography 175

Summary in French 183


List of Figures

Figure 2.1. Jittered random sample point process. 8
Figure 2.2. Additive random sample point process. 8
Figure 2.3. Uniform sampling with random skips sample point process. 9
Figure 2.4. Additive pseudo random sample point process. 10
Figure 2.5. Sine wave crossing sample point process. 11
Figure 2.6. Level crossing sample point process. 12
Figure 3.1. Uniform deterministic quantization: original and quantized signals. 19
Figure 3.2. M-bit resolution A/D conversion process. 19
Figure 3.3. Ideal M-bit ADC quantization error. 20
Figure 3.4. Quantization error as a function of time. 21
Figure 3.5. Offset error of an ADC. 22
Figure 3.6. Gain error of an ADC. 22
Figure 3.7. DNL error of an ADC. 23
Figure 3.8. INL error of an ADC. 24
Figure 3.9. Summary of the ENOB of the recent ADCs. 25
Figure 3.10. SFDR error of an ADC. 26
Figure 3.11. Summary of the SFDRbits of the recent ADCs. 27
Figure 3.12. Flash ADC architecture. 29
Figure 3.13. SAR ADC architecture. 30
Figure 3.14. Single slope integrating ADC architecture. 31
Figure 3.15. Single slope integrating ADC time voltage plot. 31
Figure 3.16. Quantization noise power spectral density; the filled area corresponds to the Nyquist rate converter and the unfilled area corresponds to the oversampled converter. 33
Figure 3.17. Classical oversampling ADC system. 34
Figure 3.18. First order sigma-delta ADC system. 35
Figure 3.19. Noise spectrum of sigma-delta converter. 37
Figure 3.20. ADC architectures cover different ranges of sample rate and resolution. 39
Figure 3.21. Resolution vs. power and chip area for various ADC architectures. 39
Figure 4.1. Time quantization error of the LCSS. 43
Figure 4.2. Speech spectrum. 47
Figure 4.3. Audio spectrum. 47
Figure 4.4. Band limiting the input signal to ensure the tracking condition and the sampling criterion. 48
Figure 4.5. Level crossing ADC block diagram. 49
Figure 4.6. Level crossing ADC architecture. 50
Figure 4.7. The AADC architecture. 51
Figure 4.8. The AADC design flow for a targeted application. 53
Figure 4.9. A two-bit LCF-ADC architecture. 54
Figure 4.10. Level crossing sampling unit structure. 56
Figure 4.11. Architecture of the level crossing A/D converter using one µP. 57
Figure 4.12. Architecture of the level crossing A/D converter using two µPs. 57
Figure 5.1. The sequence of a classical windowing process. 61
Figure 5.2. Non-uniformly sampled signal obtained with the LCADC. 62
Figure 5.3. The sequence of an activity selection process. 63
Figure 5.4. Input signal (top), selected signal obtained with the ASA (middle) and the selected signal obtained with the EASA (bottom). 66


Figure 6.1. Capturing an integral number of cycles (top-left), capturing a fractional number of cycles (bottom-left), spectrum of the first captured block (top-right) and spectrum of the second captured block (bottom-right). 72
Figure 6.2. Smooth signal obtained with the cosine window function (left) and spectrum of the cosine windowed signal (right). 73
Figure 6.3. Spectra obtained with the GDFT: the case of uniform sampling (top-left), the case of APRSS and 125 samples (top-right), the case of APRSS and 375 samples (bottom-left) and the case of APRSS and 750 samples (bottom-right). 75
Figure 6.4. Power spectrum obtained with the Lomb’s periodogram, the case of APRSS and 125 samples along with the highest analysed frequency of 650 Hz. 77
Figure 6.5. Block diagram of the proposed spectral analysis system. 78
Figure 6.6. Description of the NNRI process. 80
Figure 6.7. Trn is the interval between two resampled data points. Tn1 is the interval between the non-uniform samples used for the NNRI and Tn2 is the interval between the non-uniform samples used for the S&H interpolation. The ‘o’ symbol is used for the non-uniform data and the ‘+’ symbol is used for the resampled data. 81
Figure 6.8. The input signal (top) and the selected signal (bottom). 83
Figure 6.9. Block diagram of the comparison methodology. 84
Figure 6.10. Spectra obtained by applying the GDFT (top), the Lomb’s algorithm (middle) and the proposed technique (bottom). 85
Figure 6.11. Block diagram of the classical spectral analysis system. 87
Figure 6.12. Spectrum obtained in the classical case. 87
Figure 7.1. The decimation process. 94
Figure 7.2. The interpolation process. 95
Figure 7.3. The classical time-invariant FIR filter model (top) and its equivalent multirate FIR filter model (bottom). 96
Figure 7.4. Block diagram of the proposed adaptive rate filtering techniques. 97
Figure 7.5. Block diagram of the ARCD technique. 99
Figure 7.6. Flow chart of the ARCD technique. 101
Figure 7.7. Block diagram of the ARCR technique. 102
Figure 7.8. Flow chart of the ARCR technique. 102
Figure 7.9. Block diagram of the ARRD technique. 103
Figure 7.10. Flow chart of the ARRD technique. 104
Figure 7.11. Block diagram of the ARRR technique. 105
Figure 7.12. Flow chart of the ARRR technique. 105
Figure 7.13. The input signal (left) and the selected signal obtained with the ASA (right). 106
Figure 7.14. Flow chart of the EARD technique. 119
Figure 7.15. Flow chart of the EARR technique. 120
Figure 7.16. Flow chart of the ARDI technique. 121
Figure 7.17. System level architecture. 127
Figure 8.1. Block diagram of the STFT. 130
Figure 8.2. Block diagram of the proposed STFT. 131
Figure 8.3. Flow chart of deciding Frsi for Wi. 133
Figure 8.4. The input signal (left) and the selected signal (right). 135
Figure 8.5. The ARSTFT of the selected windows. 137
Figure 9.1. FFT output of an ideal 12-bit ADC. 143
Figure 9.2. Block diagram of the ARADC. 144
Figure 9.3. 3-bit DAC block diagram. 152
Figure 9.4. The ARADC SNR curves obtained with the NNR interpolation, for Ttimer = {2², 2¹, …, 2⁻⁵} µs and by varying M between [3 ; 8] for each value of Ttimer. 154
Figure 9.5. The ARADC SNR curves obtained with the linear interpolation, for Ttimer = {2², 2¹, …, 2⁻⁵} µs and by varying M between [3 ; 8] for each value of Ttimer. 154
Figure 9.6. The SNR curves obtained in the case of the classical ADC and the ARADC by varying M between [3 ; 8]. 155


Figure 10.1. On the top, the input speech signal (10.1-a), the selected signal obtained with the ASA (10.1-b) and a zoom of the second window W2 (10.1-c). On the bottom, a spectrum zoom of the filtered signal lying in W2, obtained with the reference filtering (10.1-d), with the EARD (10.1-e) and with the EARR (10.1-f) respectively. 160
Figure 10.2. The input signal frequency pattern. 161
Figure 10.3. The input speech signal (top), the selected signal obtained with the ASA (middle) and the windowed signal obtained in the classical case (bottom). 164


List of Tables

Table 3.1. Different ADC architectures. 32
Table 4.1. Characteristics of the classical and the level crossing A/D conversion. 44
Table 4.2. Electrical characteristics of the level crossing ADC. 51
Table 4.3. Electrical characteristics of the AADC. 54
Table 4.4. LCF-ADC simulation data. 55
Table 5.1. Summary of the input signal activities. 66
Table 5.2. Summary of the selected windows parameters obtained with the ASA. 67
Table 5.3. Summary of the selected windows parameters obtained with the EASA. 67
Table 6.1. Summary of the input signal activities. 82
Table 6.2. Summary of the selected windows parameters. 83
Table 6.3. Summary of the computational gain compared to the GDFT and the Lomb’s algorithm. 86
Table 6.4. Summary of the computational gain compared to the classical approach. 89
Table 6.5. Comparison of the mean square deviation between Trn and Tnj for the NNRI and the S&H, for each selected window. 89
Table 6.6. Mean resampling error for each selected window. 90
Table 7.1. Summary of the input signal active parts. 106
Table 7.2. Summary of the reference filters bank parameters, implemented for the ARCD and the ARCR. 107
Table 7.3. Summary of the reference filters parameters, implemented for the ARRD and the ARRR. 107
Table 7.4. Summary of the selected windows parameters. 107
Table 7.5. Values of the Frefc, Frsi, Nri, Di and Pi for each selected window in the ARCD technique. 108
Table 7.6. Values of the Frefc, Frsi, Nri, di and Pi for each selected window in the ARCR technique. 108
Table 7.7. Values of the Frsi, Nri, Di and Pi for each selected window in the ARRD technique. 108
Table 7.8. Values of the Frsi, Nri, di and Pi for each selected window in the ARRR technique. 108
Table 7.9. Computational gain of the ARCD over the classical one for different time spans of x(t). 113
Table 7.10. Computational gain of the ARCR over the classical one for different time spans of x(t). 113
Table 7.11. Computational gain of the ARRD over the classical one for different time spans of x(t). 113
Table 7.12. Computational gain of the ARRR over the classical one for different time spans of x(t). 113
Table 7.13. Mean approximation error for each selected window for the ARCD, the ARCR, the ARRD and the ARRR techniques. 116
Table 7.14. Mean filtering error for each selected window for the ARCD, the ARCR, the ARRD and the ARRR techniques. 117
Table 7.15. Summary of the reference filters bank parameters, implemented for the EARD, the EARR and the ARDI techniques. 121
Table 7.16. Values of the Frefc, Frsi, Nri, Di and Pi for each selected window in the EARD technique. 122
Table 7.17. Values of the Frefc, Frsi, Nri, di and Pi for each selected window in the EARR technique. 122
Table 7.18. Values of the Frefc, Frsi, Nri, di/ui and Pi for each selected window in the ARDI technique. 122
Table 7.19. Computational gain of the EARD over the classical one for different time spans of x(t). 124
Table 7.20. Computational gain of the EARR over the classical one for different time spans of x(t). 124
Table 7.21. Computational gain of the ARDI over the classical one for different time spans of x(t). 124
Table 7.22. Mean approximation error for each selected window for the EARD, the EARR and the ARDI. 125
Table 7.23. Mean filtering error for each selected window for the EARD, the EARR and the ARDI. 126


Table 8.1. Summary of the input signal active parts.

Table 8.2. Summary of the selected windows parameters.

Table 8.3. The selected windows time and frequency resolution.

Table 8.4. Summary of the computational gains.

Table 8.5. Mean resampling error for each selected window.

Table 9.1. The ideal LCADC SNR for Ttimer = 1 µs and variable M.

Table 9.2. The ideal LCADC SNR for M = 3 and variable Ttimer.

Table 9.3. The LCADC real SNR for M = 3 and variable Ttimer.

Table 10.1. Summary of the selected windows parameters obtained with the ASA.

Table 10.2. Summary of the reference filters bank parameters, implemented for the EARD, the EARR and the ARDI techniques.

Table 10.3. Values of Frefc, Frs2, Nr2, d2/u2 and P2 for the EARD, the EARR and the ARDI techniques.

Table 10.4. Computational gains of the EARD, the EARR and the ARDI techniques over the classical one, for W2.

Table 10.5. Summary of the selected windows parameters.

Table 10.6. The selected windows time-frequency resolution.

Table 10.7. Summary of the selected windows parameters.




Chapter 1

INTRODUCTION

1.1 Context of the Study

Mobile systems have become an essential part of modern life, and the drive to provide better subscriber services demands ever more sophisticated mobile devices. Realising this sophistication requires more and more processing resources. While fulfilling these growing demands, keeping the mobile system size, cost, processing noise, electromagnetic emission and especially power consumption under control (such systems are most often battery powered) has become a difficult industrial challenge. Most efforts towards these goals focus on improving embedded system design, fabrication technology and battery technology, but very few studies aim to exploit the time-varying nature of the input signal. The proposed work is a contribution to the development of smart mobile systems. It aims to achieve efficient systems by smartly adapting their parameters to the local characteristics of the input signal. This is done by reorganizing the signal processing theory and architecture associated with mobile systems: the idea is to employ event-driven signal processing together with clock-less circuit design, in order to reduce the system processing activity and energy consumption.

Almost all natural signals, like speech, seismic and biomedical signals, are time varying in nature. Man-made signals like Doppler, ASK (Amplitude Shift Keying) and FSK (Frequency Shift Keying) also lie in the same category. The spectral contents of these signals vary with time, a direct consequence of the signal generation process [96].

Classical systems are based on Nyquist signal processing architectures. They do not exploit the signal local variations: they acquire and process the signal at a fixed rate, without taking the intrinsic signal nature into account. Moreover, they are highly constrained by the Shannon theory, especially in the case of low-activity sporadic signals like the electrocardiogram, the phonocardiogram or seismic signals. This leads them to capture and process a large number of samples carrying no relevant information, uselessly increasing the system activity and its power consumption.

The power efficiency can be enhanced by smartly adapting the system processing load to the signal local variations. To this end, a signal-driven sampling scheme based on level crossing is employed. The LCSS (Level Crossing Sampling Scheme) [26] adapts the sampling rate by following the input signal local characteristics [39, 40]. Hence, it drastically reduces the activity of the post-processing chain, because it only captures the relevant information [44-52]. In this context, LCADCs (LCSS based Analog to Digital Converters) have been developed [37, 38, 43, 95, 97, 98]. In these works, the authors have shown the advantages of the LCADCs over the classical ones: reduced activity, power saving, reduced electromagnetic emission and reduced processing noise. Inspired by


these interesting features, the LCADCs are employed for digitizing the input signal in the

proposed case.

The data obtained with the LCADC are non-uniformly spaced in time; therefore, they cannot be processed or analyzed with the classical techniques [1, 2]. In recent years, several valuable studies on the processing and analysis of non-uniformly sampled signals obtained with LCADCs have been made; a few examples are [13, 40, 52, 53]. They show that the LCADC output can be used directly for further non-uniform digital treatment. In this PhD dissertation, however, the non-uniformity of the sampling process, which yields information on the signal local features, is employed to select the relevant signal parts only. Furthermore, the characteristics of each selected signal part are analyzed and later employed to adapt the proposed system parameters accordingly. This selection and local-feature extraction process is named the ASA (Activity Selection Algorithm) [44, 48]. The signal selected by the ASA is resampled uniformly before proceeding to further digital signal processing or analysis steps. The resampler acts as a bridge between the non-uniform and the uniform signal processing tools, and allows the proposed solutions to smartly combine the interesting features of both sides.

The LCADC, the ASA and the resampler are the fundamental components of the proposed solutions. Together they form the basis of the proposed approach: activity acquisition and selection along with local parameter extraction. Upon this basis, smart signal processing and analysis techniques are devised [44-51]. The proposed solutions are of time-varying nature and adapt their processing load and system parameters (effective resolution, sampling frequency, time-frequency resolution, etc.) by following the input signal characteristics. This is realized by a smart combination of the non-uniform and the uniform signal processing tools, which yields a drastic computational gain for the proposed solutions while providing results of appropriate quality compared to the classical counterpart approaches.

1.2 Organization

This thesis dissertation is split into three parts. The first part is entitled non-uniform signal acquisition and comprises chapters 2 to 4.

An overview of the non-uniform sampling process is given in chapter 2. The aspect of sampling non-uniformity as a tool is described, the reasons for choosing the LCSS in the proposed case are argued, and the interesting features of the LCSS are discussed.

In chapter 3, the principles of the analog to digital conversion process are reviewed. Several ADC (Analog to Digital Converter) performance parameters are described, and the main features of the different ADC architectures are presented.

The main concepts of the LCADCs are described in chapter 4. Asynchronous design is a natural choice for implementing the LCADCs [43, 97, 98]. In this context, the asynchronous circuit principle is briefly reviewed and some successful LCADC realizations are studied.

The second part is named the proposed techniques and it contains chapters 5 to 8.


In chapter 5, two novel techniques for selecting and windowing the relevant parts of the level crossing sampled signal are devised. Following their activity selection feature, they are named activity selection algorithms [44-51]. The appealing features of the proposed techniques are demonstrated.

The level crossing sampled signal is non-uniformly partitioned in time; therefore, it cannot be properly analyzed by employing the classical digital signal analysis tools [1, 2]. An efficient solution for analyzing the level crossing sampled signal is proposed in chapter 6. The smart features of the proposed technique are illustrated, and it is compared with the GDFT (General Discrete Fourier Transform), the Lomb algorithm and the classical spectral analysis approach in terms of spectral quality and computational complexity.

Based upon the proposed approach, adaptive rate filtering techniques are devised in chapter 7. They are based on the principle of activity selection and local feature extraction, which lets them adapt their sampling frequency and filter order by following the input signal local variations. The computational complexities of the proposed techniques are deduced and compared, among themselves and with the classical case. It is shown that the proposed solutions result in a drastic gain in computational efficiency, and hence in processing power, compared to the classical approach.

In chapter 8, an adaptive resolution short-time Fourier transform is devised. Activity selection and local parameter extraction are its basis; they let it adapt its sampling frequency and window function (length plus shape) according to the input signal local variations. This adaptation yields the appealing features of the proposed technique: an adaptive time-frequency resolution and a computational efficiency improvement compared to the classical STFT (Short-Time Fourier Transform).

The third part is called the design and performance evaluation. It consists of chapters 9 and 10.

In combination with the LCADC, the Activity Selection Algorithm and the resampler form the basic processing chain of the proposed solutions [44-51]. A novel method for characterizing the performance of this chain is described in chapter 9. The validity of the proposed method is confirmed by comparing the simulation results with the ones obtained with theoretical formulas. A criterion for properly choosing the different system parameters, in order to acquire the desired effective resolution, is also described.

Chapter 10 is devoted to evaluating the performance of the proposed techniques on real signals. Finally, some concluding remarks and future prospects are presented.


Part-I Chapter 2 Non-Uniform Sampling

Chapter 2

NON-UNIFORM SAMPLING

Real-world signals are naturally analog, and it is frequently required to process them in order to achieve certain objectives. Processing can be performed directly by analog electronics; however, most of the time these signals are digitized, which allows their digital processing. Such a transformation has many well-known advantages and is usually preferable [1, 2].

The digitization mainly consists of two elementary processes: the sampling and the quantization [1, 2]. This chapter deals with the sampling process. Sampling theory has a long history and a lot of valuable literature is available on it; a few examples are [3-13]. The goal of this chapter is not to review the entire field. However, in order to keep the document self-contained, the sampling process in general and the non-uniform sampling processes in particular are briefly reviewed. The aspect of sampling non-uniformity as a tool is also described. The aim of this thesis work is to achieve smart mobile processing, leading to smart mobile systems. In this context, the level crossing sampling scheme is employed in the proposed case [26, 29-31]. The reasons for this choice are argued. Moreover, the sampling criterion which ensures a proper reconstruction of the level crossing sampled signal is also discussed.

2.1 The Sampling Process

Sampling is the process of converting an analog signal into its discrete representation. In the time

domain, it is achieved by multiplying the continuous time signal x(t) with a sampling function

sF(t). According to [5-8], the generalized sF(t) model is given by Equation 2.1.

s_F(t) = \sum_{n=-\infty}^{+\infty} \delta(t - t_n) \qquad (2.1)

Here, δ(t − tn) is the Dirac delta function and {tn} is the sampling instant sequence. Thus, the sampled signal xs(t) can be represented by Equation 2.2.

x_s(t) = x(t) \cdot \sum_{n=-\infty}^{+\infty} \delta(t - t_n) \qquad (2.2)

In the frequency domain, sampling is the convolution product between the spectra of the analog signal and of the sampling function. If SF(f) is the Fourier transform of sF(t), then it can be represented by Equation 2.3.

S_F(f) = \sum_{n=-\infty}^{+\infty} e^{-j 2 \pi f t_n} \qquad (2.3)

Finally the sampled signal Fourier transform Xs(f) can be represented by Equation 2.4.

X_s(f) = X(f) * S_F(f) \qquad (2.4)

Here, X(f) is the input analog signal spectrum. The sampling process is directly influenced by the characteristics of {tn}. Depending upon the distribution of {tn}, the sampling process is mainly split into the uniform and the non-uniform categories.

2.2 The Uniform Sampling

This is the classical sampling process, proposed by Shannon in 1949 [3]. While formulating his distortion theory, Shannon needed a general mechanism for converting an analog signal into a sequence of numbers, which led him to state the classical sampling theorem [3]. This sampling theorem is the basis of almost all existing digital signal processing (DSP) theory, which assumes that the input signal is bandlimited, the sampling is regular and the sampling rate respects the Shannon sampling criterion.

Theoretically, the classical sampling is a purely deterministic and periodic process. In this case, the sampling instants are uniformly spaced: the time interval Ts between two consecutive samples is unique. In the literature, Ts is known as the sampling period. The uniqueness of Ts results in the periodicity of the sampling process; due to this feature, it is also known as periodic or equidistant sampling. The sampling model can be defined mathematically as follows.

t_n = n \cdot T_s, \qquad n = 0, 1, 2, \ldots \qquad (2.5)

In this case, sF(t) can be expressed as an infinite sequence of uniformly distributed delta impulses, as given by Equation 2.6.

s_F(t) = \sum_{n=-\infty}^{+\infty} \delta(t - n T_s) \qquad (2.6)

Following Equation 2.6, Equations 2.2, 2.3 and 2.4 become as follows in this case.

x_s(t) = x(t) \cdot \sum_{n=-\infty}^{+\infty} \delta(t - n T_s) \qquad (2.7)

S_F(f) = \frac{1}{T_s} \sum_{n=-\infty}^{+\infty} \delta(f - n F_s) \qquad (2.8)

X_s(f) = \frac{1}{T_s} \sum_{n=-\infty}^{+\infty} X(f - n F_s) \qquad (2.9)

In Equations 2.8 and 2.9, Fs = 1/Ts is the sampling frequency. These equations are obtained by using the frequency shifting property of the Fourier transform [41], and show that Xs(f) is obtained by shifting and repeating X(f) indefinitely at the integer multiples of Fs (cf. Equation 2.9). These repeated copies are known as images of the original signal spectrum.

Shannon proved that if x(t) contains no frequency higher than fmax, it can be completely reconstructed from its ordinates at a series of points spaced 1/(2·fmax) seconds apart. In essence, this imposes the following condition on the sampling frequency Fs.

F_s \geq 2 \cdot f_{max} \qquad (2.10)

This criterion on Fs was proposed by Shannon and later further developed by Nyquist. This is why sampling at a frequency exactly equal to two times fmax is called sampling at the Nyquist frequency.

F_{Nyq} = 2 \cdot f_{max} \qquad (2.11)

In fact, by fulfilling condition 2.10, the spectral images (cf. Equation 2.9) do not alias in the spectrum of xs(t). Thus, x(t) can be recovered from xs(t) by employing the Poisson formula, which states that filtering xs(t) with an ideal low pass filter results in x(t).

x(t) = \sum_{n=-\infty}^{+\infty} x(n T_s) \cdot \frac{\sin(\pi F_s (t - n T_s))}{\pi F_s (t - n T_s)} \qquad (2.12)
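As an illustration (not part of the thesis material), Equation 2.12 can be evaluated numerically. The following Python sketch reconstructs a band-limited test tone between its uniform samples; all parameter values are illustrative assumptions.

```python
import numpy as np

def sinc_reconstruct(samples, Ts, t):
    """Evaluate Equation 2.12 at instant t from uniform samples x(n*Ts).

    sin(pi*Fs*(t - n*Ts)) / (pi*Fs*(t - n*Ts)) equals np.sinc((t - n*Ts)/Ts).
    """
    n = np.arange(len(samples))
    return float(np.sum(samples * np.sinc((t - n * Ts) / Ts)))

fmax = 5.0                 # highest frequency present in x(t), in Hz
Fs = 4 * fmax              # respects Fs >= 2*fmax (Equation 2.10)
Ts = 1.0 / Fs
n = np.arange(400)
x = np.cos(2 * np.pi * fmax * n * Ts)   # samples of a band-limited signal

t0 = 200 * Ts + 0.3 * Ts   # an instant between two sampling points
x_hat = sinc_reconstruct(x, Ts, t0)
# far from the record edges the truncated sum is close to the true value
assert abs(x_hat - np.cos(2 * np.pi * fmax * t0)) < 0.05
```

Since the infinite sum of Equation 2.12 must be truncated in practice, the reconstruction is only approximate near the record edges; the tolerance above accounts for this truncation error.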

The sampling function given by Equation 2.6 is the one most often employed in real-life applications. However, Equation 2.6 represents a theoretical abstraction of this sampling process: in practice, sampling can never be performed in a purely deterministic manner. The real sampling instants always fluctuate around their expected locations on the time axis, which introduces non-uniformity into the process [7, 8].

2.3 The Non-Uniform Sampling

In the case of non-uniform sampling processes, the sampling instants can have any possible distribution [7-13]. However, the generalized form of the sampling function (cf. Equation 2.1) always remains valid. The non-uniformity may occur naturally in the sampling process. One can observe such irregularity in astronomical data, which depends upon the weather conditions, the earth's orbit, the location of the observation centre, etc. [13]. Similarly, in seismology, the observations take place at irregular time intervals, depending on the incoming seismic waves. In the same way, in geophysics, the height of the annual ice layers on a glacier can vary from one year to another, helping to characterize the associated climate conditions.

Non-uniformity can also occur due to practical realization imperfections. In practice, in the classical case, the sampling process is triggered by a clock signal, which generates sampling instants with only finite precision; this causes a deviation between the ideal and the actual sampling times. This phenomenon is known as sampling jitter [18].

Sometimes the non-uniformity is deliberately introduced into the sampling process, in order to achieve certain advantages which are not attainable with a classical sampling process [14-26]. A description of this aspect is given in Section 2.5.

The sampling process may also be considered as a sequence of events taking place at some time instants {tn}. Graphically, this process can be depicted as a stream of points, in other words as a point process [8]. Various sampling point processes with significantly different features have been suggested in the literature [4-26]. Broadly speaking, a sampling point process can be deterministic or non-deterministic [8]. Uniform sampling belongs to the deterministic point processes, while certain non-uniform sampling schemes like the Jittered Random Sampling (JRS), the Additive Random Sampling (ARS), etc. belong to the non-deterministic ones.

Non-uniformity can be introduced into the sampling process directly or indirectly. To realize direct non-uniformity, signal samples are taken at random time instants, linked to pulses generated in a specially randomized way [14-17]. The sampling schemes based on such a process are the JRS [18], the ARS [19], Uniform Sampling with Random Skips [11, 13], Additive Pseudorandom Sampling (APRS) [20], etc. For indirect non-uniformity, a reference function is followed during the sampling process, which introduces non-uniformity indirectly [21-26]. The sampling schemes lying in this category are the Signal Crossing Sampling (SCS) [21-24], the Level Crossing Sampling (LCS) [25, 26], etc.

The above discussed examples of the non-uniform sampling schemes are briefly reviewed in the

following subsections.

2.3.1 Jittered Random Sampling (JRS)

Jitter describes the fluctuation between the real and the expected sampling instants. This phenomenon occurs in all practical realizations of the classical sampling process. It happens due to the phase noise in the sampling clock and the finite precision of the sampling device, which cause jitter in the recovered signal timings [12, 18]. To represent this phenomenon, the uniform sampling model given by Equation 2.5 is modified as follows.

t_n = n \cdot T_s + \tau_n, \qquad n = 0, 1, 2, \ldots \qquad (2.13)

Here, Ts is the sampling interval and {τn} is a realization of a set of independent, identically distributed, zero-mean random variables with probability density function p(τ) and variance σ² [12]. The JRS process is shown in Figure 2.1.


[Figure: x(t) with samples x_{n-1}, x_n, x_{n+1}; theoretical instants t_n = n·Ts and real (jittered) instants t_n = n·Ts + τ_n]

Figure 2.1. Jittered random sample point process, ‘……’ represents the theoretical sample points and ‘____’

represents the real sample points.
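The JRS model of Equation 2.13 can be sketched in a few lines of Python (an illustration, not thesis material). The jitter distribution p(τ) is assumed Gaussian here, which the model itself does not require; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
Ts = 1e-3                          # nominal sampling period (1 ms, illustrative)
sigma = 0.05 * Ts                  # jitter standard deviation
N = 1000
tau = rng.normal(0.0, sigma, N)    # zero-mean i.i.d. jitter realizations
t_jrs = np.arange(N) * Ts + tau    # Equation 2.13: tn = n*Ts + tau_n

# the instants scatter around the uniform grid with spread sigma
dev = t_jrs - np.arange(N) * Ts
assert abs(np.mean(dev)) < 0.01 * Ts
assert abs(np.std(dev) - sigma) < 0.01 * Ts
```

When sigma tends to zero, the instants collapse back onto the uniform grid of Equation 2.5.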

2.3.2 Additive Random Sampling (ARS)

This sampling scheme was proposed by Shapiro and Silverman as a conventional tool for describing randomized sampling processes [19]. The ARS sampling model is given as follows:

t_n = t_{n-1} + \tau_n, \qquad n = 0, 1, 2, \ldots \qquad (2.14)

Here, tn is the current sampling instant and tn-1 is the previous one. {τn} is a realization of a set of independent, identically distributed positive random variables with probability density function p(τ), variance σ² and mean μ [19]. In this case, the average sampling frequency is Fmean = 1/μ. The ARS sampling process is shown in Figure 2.2.

[Figure: x(t) with samples x_{n-1}, x_n, x_{n+1} at instants t_{n-1}, t_n separated by the random gap τ_n]

Figure 2.2. Additive random sample point process.

The randomness introduced in the sampling process can be controlled by the ratio σ/μ, where σ is the standard deviation of {τn}. This ratio has to be set with the targeted application in mind [19]: the introduced randomness should be significant enough to achieve the desired goal.


However, it should be kept in mind that an increase in randomness can result in an increased statistical error. Thus, according to the targeted application, the least randomization that fulfils the desired goal should be introduced into the sampling process [7, 8, 12, 19].
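As a small numerical sketch of Equation 2.14 (illustrative, not from the thesis), the positive gaps τn can be drawn from a gamma law whose mean μ and standard deviation σ realize a chosen randomization ratio σ/μ. The gamma law is an assumption made here for convenience; [19] only requires positive i.i.d. variables.

```python
import numpy as np

rng = np.random.default_rng(1)
mu = 1e-3                      # mean gap, so Fmean = 1/mu = 1 kHz (illustrative)
ratio = 0.25                   # sigma / mu, the randomization control
sigma = ratio * mu
# a gamma law with matching mean and variance keeps the gaps strictly positive
shape, scale = (mu / sigma) ** 2, sigma ** 2 / mu
tau = rng.gamma(shape, scale, 5000)
t_ars = np.cumsum(tau)         # Equation 2.14: tn = t_{n-1} + tau_n

Fmean = 1.0 / np.mean(tau)
assert np.all(tau > 0)                      # gaps stay positive
assert abs(Fmean - 1.0 / mu) < 0.05 / mu    # empirical mean rate ~ 1/mu
```

Setting sigma to zero would make every gap equal to mu, recovering the uniform process of Section 2.4.1.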

2.3.3 Uniform Sampling with Random Skip

Sometimes, in the practice of the uniform sampling process, the signal sample values at some sampling instants are excluded. Such a phenomenon might take place as the result of a system failure, or it might be carried out intentionally [7, 8, 10-13]. A periodic sampling process with random skips can provide certain advantages [11]. This sampling process is shown in Figure 2.3.

[Figure: x(t) with uniform instants t_{n-1}, t_n, t_{n+1}; the sample x_n at t_n is randomly skipped]

Figure 2.3. Uniform sampling with random skips sample point process, ‘____’ represents the uniform

sample points and ‘……’ represents the randomly skipped sample points.

There are a number of sampling point processes which belong to this category; the major difference among them is the pattern of skipping the sampling points. This sampling scheme is based on a periodic point process with randomly or pseudo-randomly omitted sampling points. In the case of random skips, if a sample is taken at the instant tn with probability pn, then the probability that no sample is taken at tn is (1 - pn). In the case of pseudo-random skips, the sampling conditions are defined by the employed sampling model: in the ARS model the sampling instants are given by Equation 2.14, and similarly in the JRS model they are defined by Equation 2.13. It follows that a specific sampling point process can be implemented by employing a particular sampling model, according to which the sampling points are skipped.
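A uniform process with random skips can be sketched as a Bernoulli thinning of the uniform grid, with a constant keep probability pn = p (an illustrative assumption; the skip pattern is application dependent):

```python
import numpy as np

rng = np.random.default_rng(2)
Ts = 1e-3                      # uniform sampling period
N = 10000
p = 0.8                        # probability pn = p that the sample at tn is kept
keep = rng.random(N) < p
n_kept = np.flatnonzero(keep)  # indices of the surviving sampling instants
t_kept = n_kept * Ts           # survivors stay on the uniform grid

assert abs(keep.mean() - p) < 0.02      # about p of the instants survive
assert np.max(np.diff(n_kept)) > 1      # some instants were indeed skipped
```

The surviving instants remain on the periodic grid n·Ts, which distinguishes this scheme from the JRS and ARS point processes.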

2.3.4 Additive Pseudorandom Sampling (APRS)

This sampling scheme was employed by Bagshaw and Sarhadi, whose intention was to analyze wideband signals sampled at sub-Nyquist rates [20]. The APRS scheme samples the signal at different frequencies, chosen pseudo-randomly from a given finite set; the choice of the set elements is application dependent. If P is the number of different possible frequencies, then P different sampling periods are available in this case. According to [20], the sampling model can be formally written as follows.

t_n = t_{n-1} + \tau, \qquad n = 0, 1, 2, \ldots, N-1 \qquad (2.15)

Here, tn is the current sampling instant and tn-1 is the previous one. N is the total number of obtained samples, and τ is defined as follows.

\tau \in \{\tau_i\}, \qquad i = 1, 2, \ldots, P \qquad (2.16)

Here, τ takes the value τj, where j is the smallest integer greater than or equal to the product of P and R. R is a pseudorandom sequence generating function; it has a large enough sequence period and a uniform distribution over the range 0 < R < 1. The minimum possible difference between two consecutive sampling instants is τmin, the minimum of the predefined set {τi}.

In the case of the APRS scheme, the spectrum of the sampling function sF(t) can be periodic under certain conditions [20]. If FP is the period of SF(f), then Equation 2.3 can be written as follows.

S_F(f + F_P) = \sum_{n=0}^{N-1} e^{-j 2 \pi (f + F_P) t_n} = \sum_{n=0}^{N-1} e^{-j 2 \pi f t_n} \cdot e^{-j 2 \pi F_P t_n} \qquad (2.17)

It is obvious that SF(f + FP) = SF(f) if and only if e^{-j 2 π FP tn} = 1. In other words, FP should be chosen in such a way that for all tn the product FP·tn ∈ ℕ, where ℕ is the set of natural numbers. The APRS process is shown in Figure 2.4.

[Figure: x(t) with samples x_{n-1}, x_n, x_{n+1}; successive gaps drawn pseudo-randomly from the set {τ_1, τ_2}]

Figure 2.4. Additive pseudo random sample point process.

Figure 2.4 illustrates the APRS process for the case P = 2. If in Equation 2.16 each τi is represented by a rational number, i.e. τi = ai / bi, then with this notation the desired FP can be found by employing the following formula [20].

F_P = \frac{LCM(\{b_i\})}{GCD(\{a_i\})} \qquad (2.18)


Here, LCM abbreviates Least Common Multiple and GCD abbreviates Greatest Common Divisor. In order to have an alias-free spectral analysis, the input signal must be band limited to half of the transform period, i.e. fmax ≤ FP / 2.
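Equation 2.18 can be checked with exact rational arithmetic. The following sketch uses two illustrative sampling periods, τ1 = 1/3000 s and τ2 = 1/4000 s (assumed values, not from the thesis), and verifies that every product FP·τi is a natural number, which is what makes SF(f) periodic.

```python
from fractions import Fraction
from functools import reduce
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def periodicity_fp(taus):
    """Equation 2.18: FP = LCM({bi}) / GCD({ai}) for periods tau_i = ai / bi."""
    a = [t.numerator for t in taus]
    b = [t.denominator for t in taus]
    return Fraction(reduce(lcm, b), reduce(gcd, a))

# two illustrative sub-Nyquist sampling periods: 1/3000 s and 1/4000 s
taus = [Fraction(1, 3000), Fraction(1, 4000)]
FP = periodicity_fp(taus)
assert FP == 12000                                   # LCM(3000, 4000) / GCD(1, 1)
assert all((FP * t).denominator == 1 for t in taus)  # FP * tau_i is natural
```

Since any sampling instant tn is a sum of periods from {τi}, the product FP·tn is then also a natural number, so e^{-j 2 π FP tn} = 1, as required by Equation 2.17.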

The advantage of using the APRS is the increased periodicity of SF(f). If Fmax is the maximum of the given set of P frequencies, then it is possible to sample a signal with fmax > Fmax by the APRS without having an aliasing problem. This remains valid as long as Fmax < fmax ≤ FP / 2. The period of SF(f) can be further increased by increasing P [20]. The authors have shown that even by employing a set of sub-Nyquist frequencies, an alias-free analysis of the sampled signal is possible with the GDFT (General Discrete Fourier Transform). It is also shown that even if there is no spectral aliasing, wideband noise always exists on the sampled signal spectrum calculated with the GDFT; this wideband noise limits the obtained spectrum accuracy [44].

2.3.5 Signal Crossing Sampling (SCS)

In the case of SCS, a reference signal r(t) is followed and a sample is captured every time the input analog signal x(t) crosses r(t). Although various reference signals can be exploited, the choice of an appropriate one is important for an effective implementation of the SCS. If r(t) is taken as a time-invariant null function, this leads to the zero crossing sampling scheme [24].

Moreover, in [8, 21, 22] a sinusoidal function is chosen as r(t). The reasons for this choice are that a sinusoid is a narrow-band, stabilized signal and can be generated easily [8]. Moreover, in the case of reference sinusoid crossing sampling, the timing information of the samples alone is enough to recover the input signal sample values x(tn): it suffices to read the values of the reference sinusoid at the sampling instants {tn}. The sampling point process based on sine wave crossing is shown in Figure 2.5.

[Figure: x(t) and the reference sinusoid r(t); samples x_{n-1}, x_n taken at the intersections t_{n-1}, t_n]

Figure 2.5. Sine wave crossing sample point process.

Figure 2.5 shows that in the SCS process the sampling instants {tn} occur at each intersection between x(t) and r(t); theoretically, x(tn) - r(tn) = 0 for each tn. If r(t) = Ar·sin(2π·fr·t), then for a proper signal acquisition the conditions fr ≥ fmax/2 and Ar ≥ Δx(t)/2 have to be satisfied, where Δx(t) is the input signal amplitude dynamics. Under these conditions, the mean sampling frequency always remains greater than FNyq [21, 22]; hence, a proper reconstruction of the signal crossing sampled signal is ensured [5, 27, 28].
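The key property of the sine wave crossing process, namely that the amplitudes x(tn) = r(tn) are recoverable from the timing information alone, can be illustrated by a small simulation. A dense time grid stands in for the analog comparator, and all parameter values are illustrative assumptions.

```python
import numpy as np

Ar, fr = 1.5, 800.0                          # reference sinusoid parameters
r = lambda t: Ar * np.sin(2 * np.pi * fr * t)
x = lambda t: np.cos(2 * np.pi * 100.0 * t)  # band-limited input, |x| <= Ar

t = np.linspace(0.0, 0.02, 200001)           # dense grid over 20 ms
d = x(t) - r(t)
idx = np.flatnonzero(np.sign(d[:-1]) != np.sign(d[1:]))
# linear interpolation of each crossing instant inside its bracketing step
tn = t[idx] - d[idx] * (t[idx + 1] - t[idx]) / (d[idx + 1] - d[idx])

x_recovered = r(tn)                          # amplitudes read from r(t) only
assert np.max(np.abs(x_recovered - x(tn))) < 1e-3
assert len(tn) > 4                           # above the 4 Nyquist-rate samples
```

Reading r(tn) recovers the input amplitudes without ever storing them, and the crossing count stays well above what the Nyquist rate of this input would require over the same time span.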

2.3.6 Level Crossing Sampling (LCS)

The concept of the LCSS is not new and has been known at least since the 1950s [25]. It is also known as event-based sampling [31]. In the case of LCS, a sample is captured only when the input analog signal x(t) crosses one of the predefined thresholds. The samples are not uniformly spaced in time because they depend on the variations of x(t), as is clear from Figure 2.6.

[Figure: x(t) with uniformly spaced threshold levels separated by the quantum q; samples x_{n-1}, x_n captured at the crossing instants t_{n-1}, t_n, separated by dt_n]

Figure 2.6. Level crossing sample point process.

The set of levels is chosen in such a way that it spans the analog signal amplitude range Δx(t). Figure 2.6 shows the case of equally spaced thresholds, separated by a quantum q. However, the thresholds can also be spaced logarithmically or with any other distribution [42]. Moreover, the thresholds can also be realized in a time-variant manner [8].

In the case of LCS, each sample is a couple (xn, tn) of an amplitude xn and a time tn. xn is clearly

equal to one of the levels and tn can be computed by employing Equation 2.19.

t_n = t_{n-1} + dt_n \qquad (2.19)

In Equation 2.19, tn is the current sampling instant, tn-1 is the previous one and dtn is the time elapsed between the current and the previous sampling instants. For initialization purposes, t1 and dt1 are taken as zero; later on, the level-crossing sampling instants {tn} are computed by employing Equation 2.19.
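The capture of (xn, tn) pairs described above can be sketched in a few lines. This is an illustrative simulation, not the thesis implementation: the signal, the quantum q and the dense time grid standing in for the analog input are all assumed for the example.

```python
import numpy as np

def level_crossing_sample(x, t, q):
    """Capture a sample (xn, tn) each time x crosses a multiple of q."""
    samples = []
    level = np.floor(x[0] / q)          # index of the threshold below x(0)
    for i in range(1, len(x)):
        new_level = np.floor(x[i] / q)
        while new_level > level:        # upward crossing(s)
            level += 1
            samples.append((level * q, t[i]))
        while new_level < level:        # downward crossing(s)
            samples.append((level * q, t[i]))
            level -= 1
    return samples

t = np.linspace(0.0, 1.0, 10_000)       # dense grid emulating the analog input
x = np.sin(2 * np.pi * 3 * t)           # 3 Hz test tone (assumed)
pairs = level_crossing_sample(x, t, q=0.25)
# Every captured amplitude sits exactly on a threshold (a multiple of q),
# and the sampling instants are non-uniformly spaced in time.
```

Note how, in contrast to uniform sampling, the instants tn are dictated entirely by the signal's own crossings of the threshold grid.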

Please note that both the LCS and the SCS belong to the class of sampling schemes which introduce indirect non-uniformity in the sampling process [21-26]. Unlike other non-uniform sampling schemes [10-20], we do not have a priori knowledge of the sampling instants in this case. Each time the sampling process is triggered, the corresponding sampling instant is measured [26]. In practice, a timer is employed for this purpose, which provides time stamps with


a finite precision. The higher the timer resolution, the better the precision of the sampling process, and vice versa [26, 37, 38, 43].

2.4 Resemblance Between Different Sampling Schemes

Although the above discussed sampling schemes have significantly different features, a

resemblance can be found among them. In this section, two cases are briefly presented as an

illustration.

2.4.1 Resemblance between the Uniform Sampling and the ARS

The uniform sampling is a particular case of the random sampling; this can be shown by comparing Equations 2.5 and 2.14. According to Section 2.3.2, the randomization of the sampling process can be controlled by the ratio σ / μ. Hence, if in Equation 2.14 the standard deviation σ of {τn} becomes zero, then μ becomes Ts. Under such a situation, the process described by Equation 2.14 becomes a uniform sampling process.
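This degeneration can be checked numerically. The sketch below assumes the additive model tn = tn-1 + τn with Gaussian gaps of mean mu and standard deviation sigma (the symbols are reconstructed from Section 2.3.2, which the OCR garbles); the numeric values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def ars_instants(n, mu, sigma):
    """Additive random sampling: instants as the running sum of random gaps."""
    taus = rng.normal(mu, sigma, n)
    return np.cumsum(taus)

Ts = 1e-3                                # nominal sampling period (assumed)
uniform = ars_instants(100, Ts, 0.0)     # sigma = 0: every gap equals Ts
random_ = ars_instants(100, Ts, 0.2e-3)  # sigma > 0: randomized gaps
# With sigma = 0 the gaps are all Ts, i.e. plain uniform sampling.
```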

2.4.2 Resemblance between the APRS and the LCS

As in the case of LCS, a finite-resolution timer is employed for recording the sampling instants [26, 37, 38, 43]. If Ttimer is the timer step, then in Equation 2.19 the values of dtn can be computed by employing the following relation.

dtn = p·Ttimer,  p ∈ N*,  p = 1, 2, …, P        (2.20)

Here, N* is the set of non-zero natural numbers. In practice, {Ttimer} is a finite set; its length depends upon the timer resolution and the timer saturation [37, 38, 43]. Ttimer defines the timer resolution and, if P·Ttimer is the timer saturation, then p can vary within the range p = 1, 2, …, P.
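The constraint of Equation 2.20 — every measured gap lands on the timer grid, up to saturation — can be illustrated as follows. The timer step, saturation count and the use of rounding (rather than a hardware counter's truncation) are all assumptions for the sketch.

```python
T_timer = 1e-6          # timer resolution in seconds (assumed value)
P = 1000                # saturation: the timer counts at most P steps (assumed)
timer_set = [p * T_timer for p in range(1, P + 1)]   # the finite set {Ttimer}

def measure_gap(true_dt, T_timer, P):
    """Quantize a true inter-sample gap onto the timer grid, with saturation."""
    steps = min(max(round(true_dt / T_timer), 1), P)
    return steps * T_timer
```

Whatever the true gap between two level crossings, the recorded dtn is always one element of the pre-computed set {Ttimer}.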

Now let us consider Equations 2.16 and 2.20. In Equation 2.16, τn is the sampling time step for the APRS scheme; it can take any value from a chosen finite set of time steps {τi}. The minimum value which τn can have is the minimum among {τi}. Similarly, in Equation 2.20, dtn is the sampling time step for the LCS scheme, which can take any value from the set {Ttimer}. It is clear that the minimum value which dtn can have is Ttimer. By knowing the timer specifications, it is possible to calculate {Ttimer}; then, during the LCS process, dtn can have a value equal to one among the pre-calculated {Ttimer}.

Besides this resemblance, there always exists a major difference between both techniques: the way of choosing τn and dtn from the given sets {τi} and {Ttimer}. In the case of APRS, the choice of τn from {τi} mainly depends upon the pseudorandom function generator R (cf. Section 2.3.4). However, in the case of LCS, the choice of dtn from {Ttimer} depends upon the thresholds placement and on the input signal characteristics (cf. Section 2.3.6). This distinction in choosing the sampling instants results in the different characteristics of both compared sampling schemes, which makes them attractive for different application areas [20, 25, 26, 29-36].


2.5 The Sampling Non-Uniformity as a Tool

There exist several ways to sample an analog signal; a few of them are discussed in Section 2.3. In fact, the sampling process strongly affects the performance of the subsequent DSP (Digital Signal Processing) chain [37-40, 43-53]. Thus, it is important to learn how to efficiently sample a signal under a given set of conditions, so that the best results can be obtained. Among the possible sampling techniques, some may turn out to be better for certain applications than others [7, 8].

The classical sampling is a well-developed and mature process. It covers almost all existing DSP areas. The sample sequence obtained with this sampling process is well suited for processing with existing signal processing devices. Moreover, there also exist efficient algorithms for processing periodically sampled signals [1, 2].

Although the classical sampling is a universally accepted process, it is not the best one in all DSP areas. For a targeted application, a non-uniform sampling process can be a better candidate than the classical one. Indeed, the non-uniformity of the sampling process can be used as a tool, which makes the sampling process more flexible and adaptable to a specific application. It makes it possible to obtain advantages which are not attainable with the classical sampling process [14-26, 30-40]. Some of the main motivations towards sampling non-uniformity are antialiasing, smart data acquisition, system complexity reduction, compression, smart data transmission, etc. Some applications of sampling non-uniformity are discussed below.

Shapiro and Silverman were among the first to employ randomness in the sampling process [19]. Their intention was to achieve alias-free sampling of random noise; in this context they proposed the ARS process. Bilinski has also employed direct sampling randomization as an anti-aliasing technique [7, 8, 32]. The idea is to sample a wideband signal at a sub-Nyquist average frequency without incurring the aliasing problem. This widens the area of digital signal processing towards high-frequency applications. To demonstrate it, Bilinski employed the ARS scheme to process radio frequency and microwave signals [8].

The application of sampling randomization is not limited to the antialiasing effect. In [12], Wojtiuk employed it as a technique to improve the performance of wideband radio systems. He also discusses the advantages of sampling randomization for multi-mode transceiver architectures. Following Wojtiuk, in [56] Ben-Romdhane and Desgeys have also proposed the use of sampling randomization for the development of an intermediate-frequency radio receiver.

In [11], Fontaine employed the Uniform Sampling with Random Skips technique (cf. Section 2.3.3) as a compression method during the data storage process. This method consists of a periodic sampling process followed by storing the time-amplitude pairs of only the relevant samples. In [20], Bagshaw exploited the interesting features of the APRS scheme for analyzing a wideband signal while sampling it at sub-Nyquist intervals.

In [21], Nazario and Saloma employed the SCS as a smart signal acquisition tool. They have shown how the original analog signals can be recovered by employing only the timing information of the signal-crossing events. In [8], Bilinski employed the SCS as a remote sampling technique. Following the idea of Nazario and Saloma, Bilinski demonstrated the reconstruction of an Electrocardiogram (ECG) signal by employing only the timing information of the signal-crossing events.


Similar to the SCS, the LCS has also been employed as a beneficial tool for many applications. Mark and Todd employed it for data compression [26]. Their intention was to achieve data compression directly during the analog signal sampling process; this can be done by sampling only the relevant information in the analog signal. The LCS has also been suggested in the literature for random processes [57], band-limited Gaussian random processes [58] and for monitoring and control systems [34-36].

2.6 Why Level Crossing Sampling Scheme (LCSS)?

Sampling is the first step towards digitizing an analog signal. The sampling process dictates the performance of the whole DSP system. Therefore, a smart sampling can result in a smart digital system [8, 13, 43-52]. A key to efficient sampling is to choose a sampling process which is exactly in accordance with the input signal characteristics [8]. It is well known that most real-life signals, like biological, communication and geological signals, are time-varying in nature. A possible realization of efficient sampling is to extract the input signal time variations and then adapt the sampling process accordingly [39, 40, 44-52].

The classical sampling process does not exploit the input signal time variations. Indeed, it samples the signal at a fixed rate, chosen in order to respect the Shannon sampling criterion [3]. Thus, it leads to a large number of useless samples without any relevant information, especially in the case of low-activity sporadic signals like electrocardiogram, phonocardiogram and seismic signals. This results in a useless increase of resource utilization: increased memory space, transmission bandwidth, power consumption, etc.

The efficiency of resource utilization can be improved by smartly adapting the sampling process to the signal time variations. The characteristics of the threshold-crossing sampling process depend upon the chosen reference function r(t) and on the input signal x(t) itself [21-26]. Therefore, it is sensitive to the variations of x(t) and adapts itself accordingly. Due to this quality, it is a good candidate to be employed in digital systems in general and in mobile systems in particular [8].

Among the various threshold-crossing sampling schemes, the ZCSS (Zero Crossing Sampling Scheme) is the simplest one [24]. Although certain interesting applications have been developed based upon this sampling scheme, it has some serious limitations. In fact, in the case of ZCSS, the signal variations above or below the zero level are not at all reflected in its sampled version. Moreover, reconstructing the signal from its zero crossings requires knowing the position of the crossings with extremely high accuracy [37].

The LCSS is an extension of the ZCSS [29, 37]. Compared to the ZCSS, the LCSS represents a signal with a larger number of samples in a given time interval. Besides this disadvantage, a skillful exploitation of the LCSS can provide remarkable benefits over the ZCSS. Firstly, it is evidently more informative about the signal variations compared to the ZCSS [29]. Secondly, it relaxes the time-resolution requirements of the ZCSS for a given signal reconstruction accuracy [29, 37, 38].

In [39], Guan and Singer have shown that the LCSS lets the signal dictate the sampling process. Moreover, according to Greitans, the non-uniformity in the sampling process represents the signal


local variations [40]. This shows that the LCSS is a natural choice for data acquisition systems, especially when acquiring low-activity sporadic signals. It adapts the sampling rate by following the input signal local characteristics [13, 39, 40, 44-51]. Hence, it drastically reduces the activity of the post-processing chain, because it only captures the relevant information [13, 43-52].

Along with these interesting data acquisition characteristics, its other main features are a simple electronic realization [37, 38, 43, 97], low power consumption [43-52], a one-bit data representation in time [8] and the noise-insensitive transmission of data over relatively long distances [8, 34, 35].

The above discussion shows the LCSS as a promising candidate for mobile data acquisition. As this thesis is devoted to the development of smart mobile systems, inspired by these interesting features, the LCSS is employed for the sampling purpose in the proposed approach.

2.7 Sampling Criterion and Signal Reconstruction

In order to assure proper signal reconstruction, an appropriate sampling criterion should be respected during the sampling process. The sampling criterion is well defined in the case of the uniform sampling process [3, 9]. On the other hand, the sampling criterion for the non-uniform sampling process is not yet mature. However, there exist remarkable efforts and contributions in this domain [5-8, 27-30]. In particular, in [27], Beutler has shown that a band-limited signal can be reconstructed from its non-uniformly spaced samples if the average sampling frequency F̄s remains greater than twice the input signal bandwidth fmax. In continuation of Beutler's work, Jerri in [5] and Marvasti in [28] have proved that a band-limited signal can be reconstructed from its non-uniformly spaced samples if F̄s respects the Shannon sampling criterion. In the case of LCSS, this condition can be represented mathematically as follows.

F̄s = lim (N→∞) [ (2N + 1) / Σn=-N…N dtn ] ≥ 2·fmax        (2.21)

The proper reconstruction of the level-crossing sampled signal can be ensured by respecting condition 2.21 during the sampling process.
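Condition 2.21 can be checked directly on a recorded gap sequence: the mean sampling frequency is the number of gaps divided by their total duration. The gap values and f_max below are assumed for the illustration.

```python
import numpy as np

def respects_criterion(dts, f_max):
    """Check condition 2.21: mean sampling frequency >= 2 * f_max."""
    mean_fs = len(dts) / np.sum(dts)     # samples per second, on average
    return mean_fs >= 2.0 * f_max

dts = np.array([0.8e-3, 1.1e-3, 0.9e-3, 1.2e-3])   # example gaps (assumed)
print(respects_criterion(dts, f_max=100.0))        # prints True
```

Here the mean rate is 4 / 4 ms = 1000 Hz, comfortably above the 200 Hz Nyquist rate of a 100 Hz signal.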

Similar to the sampling criterion, the signal reconstruction process is also well defined in the case of the uniform sampling process (cf. Equation 2.12). On the other hand, a general signal reconstruction process for the non-uniform sampling case is not yet available. However, specific solutions exist for particular situations [5, 6, 27, 28, 54, 55].

The reconstruction of non-uniformly sampled signals can be split mainly into two categories. The first one focuses on reconstructing the original analog signal by directly employing the non-uniformly sampled sequence as input [30]. The second approach uniformly resamples the non-uniform data by employing an appropriate interpolation technique. While resampling, the Shannon sampling criterion is respected. Later on, the resampled sequence can be reconstructed by employing the classical process, given by Equation 2.12 [37, 38].
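The second approach can be sketched with linear interpolation, one of the simple interpolation techniques compatible with it. The sample instants, the test tone and the uniform rate below are assumptions for the example, not the thesis's own data.

```python
import numpy as np

# Non-uniform pairs (tn, xn) to be resampled onto a uniform grid.
tn = np.array([0.00, 0.13, 0.21, 0.40, 0.55, 0.83, 1.00])  # instants (assumed)
xn = np.sin(2 * np.pi * 1.0 * tn)                          # 1 Hz tone samples

fs_uniform = 8.0                            # >= 2 * f_max = 2 Hz (Shannon)
t_uniform = np.arange(0.0, 1.0, 1.0 / fs_uniform)
x_uniform = np.interp(t_uniform, tn, xn)    # linear interpolation resampling
```

Once `x_uniform` is available, any classical uniform-rate processing or reconstruction method applies to it.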


Techniques lying in the first category require a complex mathematical formalization, which results in an increased computational complexity. However, depending upon the interpolation method employed for resampling the data, the second approach can provide a simpler and computationally more efficient solution than the first one [37, 38]. In this work, the second approach is employed with a simple interpolation technique. It results in computationally efficient solutions, while providing results of appropriate quality [44-51].

2.8 Conclusion

Sampling is an elementary process in the analog signal digitization. A variety of sampling

processes exist, and some famous ones among them have been discussed. A resemblance among different sampling processes has also been described. The employment of sampling non-uniformity as a tool has been expressed. It has been shown that, depending upon the targeted application, the employment of an appropriate sampling process can result in remarkable advantages. The reasons for choosing the LCSS in the proposed work have been argued. Moreover, the sampling criterion for a level-crossing sampled signal was also discussed.

While providing interesting results, sampling non-uniformity can also have negative consequences, as it can lead to increased statistical errors. Thus, sampling non-uniformity should be skillfully employed in order to achieve the targeted results.

Both the uniform and the non-uniform samplings have their own pros and cons. The periodic sampling has remarkable advantages, but in some specific areas its application can be challenged by the non-uniform one. Another view is to exploit the strengths of both by making an appropriate combination of the uniform and the non-uniform sampling schemes. In the literature such an approach is known as the hybrid sampling process, which is a hot research topic.


Part-I Chapter 3 Analog to Digital Conversion

Chapter 3

ANALOG TO DIGITAL CONVERSION

DSP (Digital Signal Processing) has many advantages over analog processing [1, 2, 123]. Therefore, with the recent advent of technology, most signal processing tasks have been transferred from the analog to the digital domain [1, 2]. The ADC (Analog to Digital Converter) is a ubiquitous, critical component of a DSP system and it has a major impact on the performance of the whole system. A smart ADC can lead towards an efficient solution and vice versa [8].

The increasing sophistication of recent applications like software radio, sensor networks, autonomous control, bioinformatics, etc. requires solutions beyond the off-the-shelf ones. In this context several advances have been made in the domain of A/D conversion. A rich literature is available in this framework; a few examples are [59-95]. The focus of this chapter is to briefly review the main concepts of A/D conversion. The error introduced by this transformation is discussed. Several ADC performance parameters are described. The main features of different ADC architectures are also presented. Moreover, the ADC performance trends of recent years are also discussed.

3.1 The A/D Conversion

The A/D conversion is achieved by first discretizing x(t) in time (sampling) and then rounding off the sample amplitudes (quantization). Here, the term rounding off refers to measuring the sample amplitudes by comparing them with chosen reference thresholds. The quantization process can be realized in a number of different ways, which results in different quantized signal characteristics. Broadly speaking, the quantization process can be deterministic or randomized, depending upon the methodology of distributing the reference thresholds. For deterministic quantization the references are kept at fixed positions, while they are randomly varied for randomized quantization [8].

Classically, the A/D conversion is performed by employing the combination of a uniform sampling and a uniform deterministic quantization process. Here, the term uniform quantization states that the reference thresholds are uniformly distributed, which means that the amplitude difference between any two consecutive thresholds is the same. If q is the quantization step, then the quantization process in this case is clear from Figure 3.1.


[Figure: an analog signal and its uniformly quantized version versus time, with quantization step q, sampling period TS, samples (xn-1, tn-1) and (xn, tn), and quantization error Qe.]

Figure 3.1. Uniform deterministic quantization: original and quantized signals.

Theoretically speaking, during the classical A/D conversion process, the only imprecision caused by an ideal ADC is the quantization error Qe. This error arises because the analog input signal may take any value within the ADC amplitude dynamics, while the output is a sequence of finite-precision samples [59]. The precision of the samples is defined by the ADC resolution in terms of bits (cf. Figure 3.2).

[Figure: an ADC converting the input analog signal into an M-bit output, from Bit 1 (LSB) to Bit M (MSB).]

Figure 3.2. M-bit resolution A/D conversion process.

If the ADC voltage range is [-Vmax; Vmax], then in the case of uniform deterministic quantization the dynamic range of an M-bit resolution converter is defined by Equation 3.1.

2^M = 2·Vmax / q        (3.1)

In this case, the upper bound on Qe is given by Equation 3.2 [60, 61].

|Qe| ≤ LSB / 2        (3.2)

where LSB is the converter least significant bit, which is clear from the transfer function of an ideal M-bit ADC, shown in Figure 3.3.


Figure 3.3. Ideal M-bit ADC quantization error.

In Figure 3.3, q is the weight of an LSB in terms of voltage. Calculating q is straightforward from Equation 3.1. In [62], the authors have analysed the actual spectrum of Qe. They have shown that Qe is equally probable to occur at any sample point within the range ±½ q. Further details on Qe can be found in [63-67].

By assuming that Qe is uncorrelated with the input signal and is a white noise process, Qe can be modeled as a simple sawtooth waveform [62, 64]. Following this assumption, Qe as a function of time is given by Equation 3.3.

Qe(t) = s·t,  for  -q/(2·s) ≤ t ≤ +q/(2·s)        (3.3)

where s is the waveform slope (cf. Figure 3.4). The MS (Mean Square) value of Qe(t) can be calculated by employing Equation 3.4.

MS(Qe(t)) = (s/q) · ∫ from -q/(2·s) to +q/(2·s) of (s·t)² dt        (3.4)

After simple integration and simplification, Equation 3.4 results in Equation 3.5.

MS(Qe(t)) = q² / 12        (3.5)

Finally the RMS (Root Mean Square) value of Qe(t) is given by Equation 3.6.


RMS(Qe(t)) = q / √12        (3.6)

Qe(t) produces harmonics which spread over the [-∞; +∞] Hz frequency range [63-67]. Normally, the spectral band of interest ranges between [0; Fs / 2], where Fs is the system sampling frequency. In the literature, the bandwidth [0; Fs / 2] is known as the in-band spectral width BWin [60, 61, 62]. The spectral harmonics extend well beyond BWin. However, due to the spectral periodization, all the higher-order harmonics are folded back into BWin and sum together to produce approximately an RMS(Qe(t)) equal to q/√12 [60, 62, 64].
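The q/√12 result of Equation 3.6 can be verified numerically: uniformly quantize a signal that exercises every code and measure the RMS error. The 8-bit step, the [-1, 1] range and the uniform test input are assumptions for the check.

```python
import numpy as np

rng = np.random.default_rng(1)
q = 2.0 / 2**8                          # 8-bit step over a [-1, 1] range (assumed)
x = rng.uniform(-1.0, 1.0, 1_000_000)   # amplitudes exercising every code
xq = q * np.round(x / q)                # uniform deterministic quantization
rms_error = np.sqrt(np.mean((x - xq) ** 2))
# rms_error comes out close to the theoretical q / sqrt(12)
```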

[Figure: sawtooth quantization error Qe(t) with slope s, ranging between -q/2 and +q/2 over each interval of width q/s.]

Figure 3.4. Quantization error as a function of time.

3.2 Performance Evaluation Parameters

In the literature there exist several criteria for measuring and comparing the performance of ADCs. They are mainly split into two categories: static and dynamic. Some examples of both categories are briefly reviewed in the following subsections.

3.2.1 Static Parameters

These parameters are more adequate for characterizing low-frequency application ADCs. In the 1960s, ADCs were mainly employed for industrial measurements and process control. As the targeted signals were of low frequency, the static parameters were primarily used for the characterization. Some examples of these parameters are the offset error, the gain error, the DNL (Differential Nonlinearity) and the INL (Integral Nonlinearity).


3.2.1.1 Offset Error

The offset error is also known as the zero-scale error. It indicates how well the actual transfer function matches the ideal one at a single point (cf. Figure 3.5). For an ideal data converter, an input voltage of ½ q will just barely cause an output code transition. For an ADC, the zero-scale voltage is applied to its input and is increased until the first transition occurs. This error can be positive or negative, depending upon whether the first transition point is higher or lower than the ideal one. The offset error is usually expressed in LSB or as a percentage of the full-scale voltage range. This error is constant and can be easily calibrated out by the usual design techniques.

[Figure: measured versus ideal ADC transfer function (output code against analog input), showing an offset error of +2 LSB.]

Figure 3.5. Offset error of an ADC.

3.2.1.2 Gain Error

The gain error of an ADC indicates how well the slope of the actual transfer function matches the slope of the ideal one (cf. Figure 3.6). The gain error is usually expressed in LSB or as a percentage of the full-scale voltage range. Calibration solutions exist both in hardware and in software [60].

[Figure: measured versus ideal ADC transfer function (output code against analog input), showing a gain error of 2.5 LSB.]

Figure 3.6. Gain error of an ADC.

Saeed Mian Qaisar Grenoble INP 22

Page 41: INSTITUT POLYTECHNIQUE DE GRENOBLEtima.univ-grenoble-alpes.fr/publications/files/th/2009/... · 2009-06-02 · INSTITUT POLYTECHNIQUE DE GRENOBLE N° attribué par la bibliothèque

Part-I Chapter 3 Analog to Digital Conversion

3.2.1.3 DNL (Differential Nonlinearity)

For an ADC, the analog threshold levels that trigger any two successive output codes should differ by one LSB; in such a case the DNL = 0. Any deviation from one LSB is defined as the DNL (cf. Figure 3.7). The process can be expressed formally by Equation 3.7.

DNLk = (Vk − Vk-1) − q        (3.7)

where Vk and Vk-1 are the input signal voltages which yield the kth and (k-1)th digital codes. For an ideal M-bit ADC, 2^M different digital codes are possible. If the DNL exceeds 1 LSB, the ADC can face the problem of missing codes. In this situation, some digital output codes will never be produced. The DNL is a static specification which relates to the SNR (Signal to Noise Ratio), a dynamic specification. However, noise performance cannot be predicted from the DNL specification, except to say that the SNR tends to become worse as the DNL departs from zero.
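Equation 3.7 maps directly onto a one-line computation over measured transition voltages. The step q and the transition values below are invented for the illustration.

```python
import numpy as np

q = 0.25                                        # quantization step (assumed)
V = np.array([0.00, 0.24, 0.51, 0.76, 1.02])    # measured transitions (assumed)
dnl = np.diff(V) - q                            # Equation 3.7, in volts
# Divide by q to express the DNL in LSB instead of volts.
```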

[Figure: measured versus ideal ADC transfer function (digital code against analog input), showing DNL errors of -0.5 LSB and +1 LSB.]

Figure 3.7. DNL error of an ADC.

3.2.1.4 INL (Integral Nonlinearity)

The INL is also called the relative accuracy. It describes the departure of the actual transfer function from the ideal one (cf. Figure 3.8). The INL is a measure of the straightness of the transfer function. After nullifying offset and gain errors, the straight line is either a best-fit straight line or a line drawn between the end points of the ideal transfer function [60]. The INL can be calculated by summing the DNLs up to the concerned quantization level; thus it can be greater than the differential nonlinearity. The process can be represented mathematically as follows.

INLm = Σ(k = 1 … m) DNLk        (3.8)
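Equation 3.8 is a running sum, which makes the computation a one-liner. The example DNL values are assumed.

```python
import numpy as np

# INL_m as the cumulative sum of the DNL errors up to level m
# (offset and gain errors assumed already nulled).
dnl = np.array([-0.01, 0.02, 0.00, 0.01])   # example DNLs in LSB (assumed)
inl = np.cumsum(dnl)                        # Equation 3.8 for every m at once
# inl = [-0.01, 0.01, 0.01, 0.02]
```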

The size and distribution of the DNL errors determine the converter integral linearity. The INL is a static specification and relates to the THD (Total Harmonic Distortion), a dynamic


specification. However, distortion performance cannot be predicted from the INL specification, except to say that the THD tends to become worse as the INL departs from zero.

[Figure: measured versus ideal ADC transfer function (digital code against ADC input), showing INL errors of +0.5 LSB and +1 LSB.]

Figure 3.8. INL error of an ADC.

3.2.2 Dynamic Parameters

In the early 1970s, with the advancement of microelectronics and DSP (Digital Signal Processing), new specifications were required in order to properly characterize converters for more sophisticated applications. In this context, various dynamic performance criteria were proposed, such as the SNR (Signal to Noise Ratio), the SFDR (Spurious Free Dynamic Range), the THD (Total Harmonic Distortion), the FOM (Figure of Merit), etc.

3.2.2.1 SNR (Signal to Noise Ratio)

The SNR compares the level of a desired signal to the level of noise. It is defined as the ratio of the RMS (Root Mean Square) value of the signal amplitude to the RMS value of the noise amplitude. It is usually measured in dB (decibels). Mathematically, it can be represented by Equation 3.9.

SNR(dB) = 20·log10( RMS(Signal) / RMS(Noise) )        (3.9)

In the case of an ideal ADC, Qe is the only error which occurs during the A/D conversion process (cf. Section 3.1). For an ideal ADC, the RMS value of Qe is given by Equation 3.6; substituting this value into Equation 3.9 and solving results in Equation 3.10.

SNR(dB) = 6.02·M + 4.77 + 20·log10( RMS(Signal) / Vmax )        (3.10)

Normally, the SNR is calculated by employing a monotone FS (full-scale) sinusoid at the ADC input and by using the sequence of samples obtained at the converter output. In this case, Equation 3.10 can be further simplified into Equation 3.11.


SNR(dB) = 6.02·M + 1.76        (3.11)

Equation 3.11 represents the theoretical SNR of an ideal M-bit ADC in the case of an FS monotone sinusoid as input. It is important to recall that RMS(Qe(t)) is calculated over the full BWin (cf. Section 3.1). Thus, Equation 3.11 gives the ADC theoretical SNR over BWin. By knowing the SNR in dB, the ADC theoretical resolution can be calculated as follows.

M = ( SNR(dB) − 1.76 ) / 6.02        (3.12)
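The 6.02·M + 1.76 dB rule of Equation 3.11 can be checked numerically by quantizing a full-scale sinusoid with an ideal M-bit quantizer and measuring the SNR with Equation 3.9. The resolution, test frequency and sample count are assumptions for the check.

```python
import numpy as np

M = 10
Vmax = 1.0
q = 2 * Vmax / 2**M                          # Equation 3.1
n = np.arange(100_000)
x = Vmax * np.sin(2 * np.pi * 0.01234 * n)   # full-scale monotone sinusoid
xq = np.clip(q * np.round(x / q), -Vmax, Vmax)   # ideal M-bit quantizer
noise = x - xq
snr_db = 20 * np.log10(np.sqrt(np.mean(x**2)) / np.sqrt(np.mean(noise**2)))
# snr_db lands close to the theoretical 6.02*M + 1.76 dB
```

The test frequency is chosen incommensurate with the quantization grid so that the error behaves like the white-noise model of Section 3.1.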

In addition to the quantization error, a practical ADC also introduces other errors during the A/D conversion process, like clock jitter, comparator ambiguity, etc. [60, 61]. These reduce the actual ADC resolution compared to the theoretical one. The practically achievable ADC resolution in terms of bits is known as its ENOB (Effective Number of Bits) and it can be calculated by employing Equation 3.13.

ENOB = ( SNRreal(dB) − 1.76 ) / 6.02        (3.13)

where SNRreal represents the practical ADC SNR. Equation 3.13 is true for an FS monotone sinusoidal input. A reduction in signal level will reduce SNRreal, which will finally result in a reduced ENOB. In this regard, a correction factor is introduced in Equation 3.13, which normalizes the ENOB value to FS regardless of the actual signal amplitude [60]. The process is shown by Equation 3.14.

ENOB = \frac{SNR_{real}(dB) - 1.76 + 20 \cdot \log_{10}\!\left(\frac{FS \ Amplitude}{Input \ Amplitude}\right)}{6.02}    (3.14)
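The full-scale normalization of Equation 3.14 can be sketched in a few lines. This is an illustrative helper (names are ad hoc); with a full-scale input the correction term vanishes and Equation 3.13 is recovered.

```python
import math

def enob(snr_real_db, fs_amplitude=1.0, input_amplitude=1.0):
    """Equation 3.14: ENOB normalized to full scale regardless of signal level."""
    correction = 20.0 * math.log10(fs_amplitude / input_amplitude)
    return (snr_real_db - 1.76 + correction) / 6.02

# A signal 6 dB below full scale loses ~6 dB of SNR; the correction restores the ENOB.
print(enob(74.0))                       # ~12 bits, full-scale input
print(enob(68.0, input_amplitude=0.5))  # ~12 bits after the FS correction
```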

In [61], Walden has shown the difference between the stated resolution and the ENOB for a given ADC (cf. Figure 3.9). It indicates the SNR degradation due to all error sources other than Qe. He shows that for recent ADCs, on average, this degradation is approximately 1.5 bits at a given sampling rate (cf. Figure 3.9).

Figure 3.9. Summary of the ENOB of the recent ADCs.


3.2.2.2 SFDR (Spurious Free Dynamic Range)

The SFDR is the ratio of the single-tone signal amplitude to the amplitude of the worst spurious component, regardless of where it falls in BWin. The worst spur may or may not be a harmonic of the original signal. The SFDR is an important specification in communication systems because it represents the smallest signal that can be distinguished from a large interfering signal, also known as a blocker. SFDR can be specified with respect to full scale (dBFS) or with respect to the actual signal amplitude (dBc). Following [60], the definition of SFDR is illustrated in Figure 3.10.

[Figure: amplitude (dB) versus frequency up to Fs/2, marking the full scale (FS) level, the fundamental component level and the worst spur level; the SFDR is indicated both in dBc (from the fundamental) and in dBFS (from full scale).]

Figure 3.10. SFDR error of an ADC.

According to Walden, the effective number of bits associated with the SFDR can be calculated by employing Equation 3.15. In [61], he shows the difference between the stated resolution and SFDRbits for several modern ADCs (cf. Figure 3.11). Although the average difference is less than ½ LSB, there is more scatter in this plot than for the SNR data shown in Figure 3.9. There are many reasons for such a wide scattering; for example, the design emphasis may render the SNR more important in some cases and the SFDR more important in others. Moreover, other factors reflect how well the design overcomes the different real-life non-idealities such as noise, time jitter, comparator ambiguity, transistor nonlinearity, etc.

SFDR_{bits} = \frac{SFDR(dBc)}{6.02}    (3.15)


Figure 3.11. Summary of the SFDRbits of the recent ADCs.

3.2.2.3 THD (Total Harmonic Distortion)

ADCs are nonlinear devices. Hence, they introduce additional content (harmonics) into the original signal spectrum. A way of measuring the extent of this distortion is the THD. The THD is the ratio of the summed power of the harmonics to the power of the fundamental component. Normally, the harmonics of concern are those lying in BWin. The process can be formally represented as follows.

THD = \frac{Harmonics \ Power}{Fundamental \ Component \ Power}    (3.16)

Another, more comprehensive term used for characterizing and comparing the performance of ADCs is the THD + N (Total Harmonic Distortion plus Noise). It is the ratio of the power of the harmonics plus all noise components to the power of the fundamental component. This can be represented mathematically by Equation 3.17. Normally, the bandwidth over which the harmonics and the noise components are measured is BWin.

THD + N = \frac{Harmonics \ Power + Noise \ Power}{Fundamental \ Component \ Power}    (3.17)
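As an illustration, a crude estimate in the spirit of Equation 3.17 can be computed from an FFT of the converter output: everything except DC and the fundamental is lumped together as harmonics-plus-noise power. The function below is a hypothetical sketch assuming coherent sampling (the fundamental falls exactly on an FFT bin) and a rectangular window.

```python
import numpy as np

def thd_plus_n(samples, fs, f0):
    """Equation 3.17: power of everything except DC and the fundamental,
    divided by the fundamental power. Assumes f0 lies on an FFT bin."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    k0 = int(np.argmin(np.abs(freqs - f0)))         # fundamental bin
    p_fund = spectrum[k0]
    p_rest = spectrum.sum() - p_fund - spectrum[0]  # exclude DC as well
    return p_rest / p_fund

# A fundamental plus a -20 dB third harmonic gives THD+N of about 0.01 (1 %).
t = np.arange(1024) / 1024.0
x = np.sin(2 * np.pi * 8 * t) + 0.1 * np.sin(2 * np.pi * 24 * t)
print(thd_plus_n(x, fs=1024.0, f0=8.0))
```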

3.2.2.4 FoM (Figure of Merit)

The FoM is used to quantify the overall ADC performance. It relates the ADC performance in terms of the dissipated power Pdiss, the SNR and the SFDR to the input signal frequency fsig and the sampling frequency Fs. According to [61], the general definition of the FoM is given as follows.


FoM = \frac{2^{ENOB} \cdot 2 \cdot ERBW}{P_{diss}}    (3.18)

Where ENOB is the effective number of bits (cf. Equation 3.13), Pdiss is the mean power dissipated by the ADC in watts, and ERBW is the effective resolution bandwidth. The ERBW is measured as the frequency band from DC to the value of fsig at which the SNR has decreased by 3 dB below its low-frequency value. If ERBW ≥ Fs/2, then the studied ADC is a Nyquist converter, which is the design goal of many ADCs [61]. Note that the characterization of an ADC includes the highest value of Fs for which the Nyquist operation is sustained.

In order to improve the effectiveness of the FoM, the ADC area S can also be taken into account. The modified FoM can then be defined by Equation 3.19.

FoM = \frac{2^{ENOB} \cdot 2 \cdot ERBW}{P_{diss} \cdot S}    (3.19)
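For illustration, both variants of the figure of merit can be evaluated with one small helper. The function and its argument names are ad hoc; the area-normalized form of Equation 3.19 is returned only when a die area is supplied.

```python
def fom(enob_bits, erbw_hz, p_diss_w, area=None):
    """Equation 3.18, or Equation 3.19 when the ADC area S is given."""
    value = (2.0 ** enob_bits) * 2.0 * erbw_hz / p_diss_w
    if area is not None:
        value /= area
    return value

# e.g. a converter with 10 bits of ENOB, 1 MHz ERBW and 1 mW dissipation:
print(fom(10, 1e6, 1e-3))   # ~2.05e12
```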

3.3 The ADC Architectures

Though a range of sampling and quantization processes are available for the A/D conversion process [4-8], most of the existing A/D conversion schemes rely on uniform sampling and deterministic quantization. ADCs placed in this category are known as the classical ADCs. The classical A/D conversion is a universally accepted process and several architectures have been suggested for its realization [68-95]. The classical ADCs can be split into two main classes: the Nyquist and the oversampling converters [37, 68], which are briefly reviewed in the following subsections.

3.3.1 Nyquist Rate ADCs

Nyquist rate converters sample the input signal at the conversion rate [37]. In order to relax the anti-aliasing filter constraints, these converters usually acquire the input analog signal at a frequency which is 3 to 10 times FNyq. These converters can be realized in a variety of architectures [68-85]. Some usual architectures are briefly discussed in the following subsections.

3.3.1.1 Flash ADC

Flash analog-to-digital converters are also known as parallel ADCs. They are among the fastest ways to convert an analog signal into a digital one [70-72]. They employ parallel architectures and are realized by cascading high speed comparators. The principle is shown in Figure 3.12.


[Figure: a resistive ladder of 2^M resistors (R/2, R, ..., R, R/2) between Vmax and Vmin generates the reference voltages Vref1 ... Vref(2^M - 1); a bank of comparators compares the analog input Vin against each tap, and a 2^M-to-M encoder produces the M-bit digital output.]

Figure 3.12. Flash ADC architecture.

Figure 3.12 shows a typical flash ADC block diagram. For an M-bit converter, the circuit employs 2^M - 1 comparators. A resistive divider with 2^M resistors provides the reference voltages. The reference voltage of each comparator is one LSB greater than that of the comparator immediately below it.

When an analog input is applied to the comparator bank, all comparators whose reference voltage is below the input signal level output a logic 1. The comparators whose reference voltage is above the input signal level output a logic 0. The obtained result is referred to as the thermometer code, which is supplied to an encoding block that finally delivers an M-bit digital output word [68].
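To make the thermometer-code idea concrete, here is a small behavioural model. It is an illustrative sketch only, with idealized comparators and the half-LSB tap offsets implied by the R/2 end resistors; all names are ad hoc.

```python
def flash_adc(vin, vmin, vmax, m_bits):
    """Ideal M-bit flash ADC: 2^M - 1 comparators against a resistive-ladder
    reference; the encoder simply counts the 1s of the thermometer code."""
    lsb = (vmax - vmin) / 2 ** m_bits
    # With R/2 end resistors, the k-th comparator threshold sits at Vmin + (k + 1/2) LSB.
    thresholds = [vmin + (k + 0.5) * lsb for k in range(2 ** m_bits - 1)]
    thermometer = [vin > th for th in thresholds]
    return sum(thermometer)

print(flash_adc(0.5, 0.0, 1.0, 3))   # mid-scale input -> code 4 of 0..7
```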

Flash ADCs are sampling ADCs and thus usually do not require sample and hold amplifiers, which contributes to their high operating speed. They are suitable for applications requiring very large bandwidths. However, for M bits of resolution one needs to employ 2^M - 1 comparators running in parallel. For larger M, the amount of chip area and power required by the comparators and the encoding logic becomes very large. Moreover, for a given converter dynamic range, as M gets larger the LSB gets smaller (cf. Equation 3.1), which in turn requires higher accuracy comparators with very small offsets. In such a situation, ordinary comparators can lead to the error known as the broken thermometer code. One can overcome this error by employing offset-cancelling comparators or by designing a sophisticated encoding logic, but all these solutions result in an increased chip area and power consumption. With recent technological advances, it is now possible to implement flash ADCs up to 12 bits of resolution.


Following the above discussion, it is clear that flash ADCs are typically suitable for low-to-medium resolution and very high speed applications [70-72]. They can handle higher frequency signals, which cannot be addressed by the other classical architectures [70-72]. Examples include satellite communication, radar processing, sampling oscilloscopes, high-density disk drives, etc.

3.3.1.2 SAR (Successive Approximation Register) ADC

SAR is the most popular architecture for realizing the A/D conversion, because it provides a considerably quick conversion time at a moderate circuit complexity. The basic SAR architecture is shown in Figure 3.13.

[Figure: the analog input Vin feeds an S/H; a comparator compares the S/H output with the DAC output; the comparator decision drives the successive approximation register under the timing and control logic, the register drives the DAC, and the digital output is taken from the register.]

Figure 3.13. SAR ADC architecture.

In order to process fast signal variations, the SAR ADCs employ an S/H (Sample and Hold) at the input to keep the signal constant during the conversion cycle [68]. At the start of a conversion, the SAR converter sets its successive approximation register so that all bits except the MSB (Most Significant Bit) are at logic 0. That sets the DAC (D/A converter) output to the middle of the scale. The comparator determines whether the S/H output is greater or less than the DAC output, and as a result the MSB of the conversion is stored in the successive approximation register as a 1 or a 0. The DAC is then set either to 1/4 scale or 3/4 scale, depending on the MSB value. The comparator then makes the decision for the second bit of the conversion. The result (1 or 0) is stored in the register, and the process continues until all of the bit values have been determined.
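The bit-by-bit search described above can be sketched in a few lines. This is an illustrative model with an ideal DAC and comparator; the names are ad hoc.

```python
def sar_adc(vin, vref, m_bits):
    """Successive approximation: one comparator decision per bit, MSB first."""
    code = 0
    for bit in range(m_bits - 1, -1, -1):
        trial = code | (1 << bit)              # tentatively set this bit
        dac_out = trial * vref / 2 ** m_bits   # ideal DAC output for the trial code
        if vin >= dac_out:                     # comparator decision
            code = trial                       # keep the bit, otherwise drop it
    return code

print(sar_adc(0.5, 1.0, 8))   # -> 128, the mid-scale code of an 8-bit range
```

The loop is a binary search: after M comparator decisions the code is the largest value whose DAC output does not exceed Vin.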

The advantage of a successive approximation ADC over a flash ADC is that it requires fewer components for a given resolution. This results in a smaller die size, a lower implementation cost and a lower power consumption compared to a flash ADC of the same resolution. With recent technologies it is thus possible to realize SAR converters up to 16 bits of resolution, which can operate in the medium frequency regions.

On the other hand, the main drawback of the successive approximation ADCs has been the relatively slow conversion speed compared to the flash architecture. This speed limitation can be compensated to a certain extent by employing a time-interleaved successive approximation algorithm, but in consequence it results in an increased implementation cost and power consumption. Recent works [73-75] have shown that with the employment of time interleaving the SAR ADCs can achieve an operating speed in the range of several MHz, which is


still well below the operating region of the flash ADCs, which operate in the hundreds of MHz range.

This shows that the SAR ADCs are suitable for medium speed and medium resolution applications. They are mainly popular for data acquisition applications, especially when multiple channels require input multiplexing. Some examples are digital video, wideband radio receivers, computer networks, etc.

3.3.1.3 Integrating ADC

Integrating ADCs are popular for providing high resolution A/D conversion with simple circuitry. They are well suited for digitizing low bandwidth signals and can be realized in single rail or multi rail modes [76, 77].

The simplest realization of an integrating ADC is the single slope architecture (cf. Figure 3.14). Here, an unknown input voltage is integrated and its value is compared against a known reference value. The time needed to integrate and trip the comparator is proportional to the unknown voltage (cf. Figure 3.15). In this case, the known reference voltage must be stable and accurate to guarantee the measurement accuracy.

[Figure: the input Vin drives an RC integrator producing VINT, which a comparator checks against Vref; the comparator output drives the digital output logic.]

Figure 3.14. Single slope integrating ADC architecture.

[Figure: voltage versus time; VINT ramps linearly until it reaches VREF at time TINT, with TINT proportional to Vin.]

Figure 3.15. Single slope integrating ADC time voltage plot.


One drawback of this approach is that the accuracy also depends on the tolerances of the integrator R and C values. Thus, in a production environment, slight differences in each component value change the conversion result and make measurement repeatability quite difficult to attain. To overcome this sensitivity to the component values, the dual slope integrating architecture can be employed. The key advantage of this architecture over the single slope one is that the final conversion result is insensitive to errors in the component values [68, 76, 77].

high linearity. The conversion speed of these ADCs is quite low. For example for an M-bit dual

slope converter, the worst conversion speed occurs when Vin equals Vref. In this case 2M clock

cycles are required to integrate and similarly 2M cycles are required to disintegrate. Thus, the total

number of required clock cycles to perform a conversion is 2M+1. Thus, the higher resolution

requires the higher number of clock cycles. This tradeoff between conversion time and resolution

is inherent in this implementation. It is possible to speed up the conversion time for a given

resolution with moderate circuit changes. Unfortunately, such an improvement requires higher

accuracy components. In other words, the speed up techniques require larger budget [68].
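The worst-case conversion timing described above can be checked with a toy model of the dual slope converter. This is an illustrative sketch under idealized assumptions; a real converter integrates charge on the capacitor, but the R·C factor is common to both phases and cancels, which is why it is omitted here.

```python
def dual_slope_counts(vin, vref, m_bits):
    """Integrate Vin for a fixed 2^M clock cycles, then count the cycles needed
    to ramp back down with slope Vref per cycle. count / 2^M equals Vin / Vref,
    independently of the R and C values (they cancel between the two phases)."""
    t_up = 2 ** m_bits
    level = vin * t_up        # integrator output after the run-up phase
    count = 0
    while level > 0:          # run-down phase against the reference
        level -= vref
        count += 1
    return count

print(dual_slope_counts(1.0, 1.0, 8))   # worst case: 256 cycles down, 512 in total
```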

One application domain that has traditionally made use of the integrating converters is measurement instruments, such as digital multimeters.

3.3.1.4 Some Other Realizations

The A/D conversion schemes are not limited to the architectures discussed above. There are many other possibilities like subranging [78, 79], interpolating [80, 81], algorithmic [80, 81], pipeline [84, 85], etc. In [68], Johns and Martin have split these architectures into three main categories, depending upon their achievable operating speed and resolution (cf. Table 3.1).

Each architecture has its own pros and cons. Therefore, selecting a proper architecture for a particular application is a challenging task. Even though the various architectures have specifications with a good deal of overlap, the targeted application is the key to choosing the best suited architecture among the available range.

Low to Medium Speed,        Medium Speed,              High Speed,
High Resolution             Medium Resolution          Low to Medium Resolution
-------------------------   ------------------------   ------------------------
Integrating                 Successive Approximation   Flash
                            Algorithmic                Subranging
                                                       Interpolating
                                                       Folding
                                                       Pipeline
                                                       Time Interleaved

Table 3.1. Different ADC architectures.


3.3.2 Oversampling ADCs

The oversampling converters sample the input signal at a significantly higher rate than the conversion rate [60]. Oversampling is employed to reduce the impact of the quantization error in the band of interest, which, along with a post-decimation process, results in an improved converter SNR. The oversampling ADCs can be split into two categories, the classical oversampling and the sigma-delta ADCs. The main features of both categories are described in the following subsections.

3.3.2.1 The Oversampling Effect on the ADC SNR

The process of sampling the signal at a frequency higher than FNyq is referred to as OS (Over Sampling). The ratio by which the signal is oversampled can be calculated by employing Equation 3.20.

OSR = \frac{F_s}{F_{Nyq}}    (3.20)

Equation 3.14 shows the dependency of the ADC SNR on the input signal amplitude. The SNR is also a function of the input signal bandwidth fmax and the sampling frequency Fs [37, 60, 61]. Recall that fmax = FNyq/2 [3]. In fact, for a similar quantization scheme and converter resolution, the total noise power remains the same for both the Nyquist and the oversampling approaches [60, 62, 64]. But in the oversampling case, due to the increased number of samples taken per unit of time, the quantization error is distributed over a larger frequency range. This results in a reduced quantization noise within BWin. The process is illustrated in Figure 3.16.

[Figure: quantization noise power spectral density PQe(f) plotted from 0 to FS/2; the Nyquist-rate noise is confined below FNyq/2, while the oversampled noise spreads up to FS/2 at a lower density.]

Figure 3.16. Quantization noise power spectral density; the filled area corresponds to the Nyquist rate converter and the unfilled area corresponds to the oversampled converter.

Figure 3.16 shows that for Nyquist rate sampling, all the quantization noise power falls within BWin. However, in the oversampled case, the same noise power is spread over a larger bandwidth, which is equal to the sampling frequency Fs. It follows that only a fraction of the total noise power falls in BWin. Hence, the noise outside BWin can be strongly attenuated with a digital low pass filter following the ADC. This yields a drastic SNR improvement [37, 60, 95]. After low pass filtering, the signal can be down sampled to around the Nyquist rate without affecting the system SNR. The combined operation of low pass filtering and down sampling is known as decimation.


By assuming that Qe is uncorrelated with the input signal and behaves as a white noise process, Qe can be modeled as a simple sawtooth waveform [62, 64]. Under this assumption, OS results in a processing gain, which is given by Equation 3.21.

PG_{OSR} = 10 \cdot \log_{10}(OSR)    (3.21)

The impact of PGOSR on the ADC SNR can be described by Equation 3.22.

SNR(dB) = 6.02 \cdot M + 1.76 + PG_{OSR}    (3.22)

Equation 3.22 shows that every doubling of the OSR improves the SNR by 3 dB, increasing the ADC resolution by half a bit [60].
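Equations 3.21 and 3.22 can be checked numerically. This is a simple sketch; the function name is ad hoc.

```python
import math

def snr_oversampled_db(m_bits, osr):
    """Equation 3.22 with the processing gain of Equation 3.21 substituted in."""
    return 6.02 * m_bits + 1.76 + 10.0 * math.log10(osr)

# Each doubling of the OSR buys ~3 dB of SNR, i.e. half a bit of resolution:
gain = snr_oversampled_db(12, 2) - snr_oversampled_db(12, 1)
print(gain)   # ~3.01 dB
```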

3.3.2.2 Classical Oversampling ADCs

The classical oversampling converters differ from the Nyquist rate ones mainly in two aspects: the anti-aliasing filtering and the quantization process. In Nyquist rate converters, the anti-aliasing has to take place before the ADC and thus requires a sophisticated and complex analog filter. On the other hand, due to the higher OSR of the oversampling ADCs, the anti-aliasing can be performed in multiple stages [37]. This relaxes the requirement on the front-end analog anti-aliasing filter, and the later filtering stages can be done digitally [37, 74]. The process is illustrated in Figure 3.17.

[Figure: analog anti-aliasing filter, sampler at period Ts (x[n] = x(nTs)), M-bit quantizer adding Qe[n] to give xQ[n], followed by a digital decimator (low pass filter and down sampler by D) producing y[n]; the chain moves from analog processing through discrete time processing to digital processing.]

Figure 3.17. Classical oversampling ADC System.

In this case, the analog anti-aliasing filter can have a transition band between the input signal bandwidth fmax and Fs/2, but it should provide very good attenuation beyond Fs/2. Nevertheless, a price has to be paid in the digital domain, since the digital filter must attenuate the remaining quantization noise power beyond fmax as much as possible. Another advantage of such an arrangement is that, in the process of filtering the out-of-band quantization noise, any other noise which existed in the transition band of the analog anti-aliasing filter prior to sampling is attenuated further.


The higher OSR also distributes the quantization noise over a large frequency range [62, 64]. Later on, the sample sequence is down sampled to produce a new sample sequence closer to the Nyquist rate. This down sampling, along with an appropriate low pass digital filtering, results in a reduced in-band quantization noise. Equation 3.22 shows that, theoretically, every doubling of the OSR improves the SNR by 3 dB; in practice, however, there is a limit on the SNR improvement, imposed by the accuracy of the different circuit components at increased operating frequencies [60, 61].

Note that in this scheme there is a trade-off between the ADC speed and resolution. The higher resolution is obtained at the expense of oversampling, which results in an increased system activity and hence an increased power consumption. Moreover, the analog circuit complexity has been traded for digital circuit complexity. Thus, for a targeted application the different ADC parameters should be chosen tactically in order to achieve an efficient solution.

3.3.2.3 Sigma-Delta ADCs

The sigma-delta ADC is an oversampling ADC which makes smart use of the oversampling process in order to achieve a higher effective resolution [86-91]. The major advantage of the sigma-delta converters over the classical oversampling ones is their noise shaping ability. Due to this feature, they are also known as noise shaping converters. The noise shaping leads to a drastic reduction of the in-band quantization noise compared to the classical counterparts [75, 76].

The sigma-delta converters consist of a sigma-delta modulator and a digital decimator. The sigma-delta modulator can be implemented either in the continuous or in the discrete time domain [88, 92]. In Figure 3.18, a discrete time implementation of the sigma-delta modulator is presented. In its simplest implementation, the sigma-delta converter is a single-bit first order modulator. Nevertheless, multi-bit and higher order modulator implementations can be realized at the expense of an increased circuit complexity [88, 91]. The modulator section samples the input at a higher rate and produces a binary output. The average value of the binary output tracks the analog input. The digital decimator finds the average value by running the output of the sigma-delta modulator through a digital low pass filter and a downsampler [86, 91].

[Figure: first order sigma-delta modulator: the input x[n] minus the DAC output xa[n] gives the error e[n]; the integrator output xi[n] is quantized (adding Qe[n]) into xQ[n], which feeds both the DAC in the feedback path and a digital decimator (low pass filter and down sampler by D) producing y[n].]

Figure 3.18. First order sigma-delta ADC System.


The oversampling reduces the quantization noise power in BWin. It does so by spreading a fixed quantization noise power over a bandwidth much larger than BWin [62, 64]. The noise shaping or sigma-delta modulation further attenuates this noise in BWin [86-91]. This process can be viewed as pushing the quantization noise power from BWin towards other frequencies. The modulator output can then be low-pass filtered to attenuate the out-of-band quantization noise and finally downsampled to around the Nyquist rate.

Figure 3.18 shows that in this case the signal that is quantized is not the input x[n] but a filtered version of the difference between the input and an analog representation xa[n] of the quantized output xQ[n]. The filter is often called the feed-forward loop filter. It is a discrete time integrator whose transfer function in the z-domain can be expressed as z^{-1}/(1 - z^{-1}). By employing the linearized model, the quantizer is replaced with a sampled noise source Qe[n] (cf. Figure 3.18) [88]. If the DAC is ideal, it is replaced by a unity gain transfer function. The modulator output XQ(z) is then given by Equation 3.23.

XQ(z) = \left[ X(z) - XQ(z) \right] \cdot \frac{z^{-1}}{1 - z^{-1}} + QE(z)    (3.23)

Solving and simplifying Equation 3.23 results in Equation 3.24.

XQ(z) = z^{-1} \cdot X(z) + QE(z) \cdot (1 - z^{-1})    (3.24)

Equation 3.24 shows that the output is just a delayed version of the signal plus quantization noise that has been shaped by a first order z-domain differentiator, i.e. a high pass filter. The corresponding time domain version of the modulator output is given by Equation 3.25.

xQ[n] = x[n-1] + Qe[n] - Qe[n-1] = x[n-1] + Me[n]    (3.25)

Where Me is the modulation noise, which is the first order difference of the quantization error. Its power spectral density can be expressed as follows [37].

S_{Me}(f) = S_{Qe}(f) \cdot \left| 1 - e^{-j \cdot 2\pi \cdot f \cdot T_S} \right|^2    (3.26)

Where TS is the sampling period of the quantizer and SQe(f) is the quantization error power spectral density. By assuming that the quantization error is white [62, 64], the noise energy in the signal band can be calculated as follows.

P_{Me} = \int_{-F_{Nyq}/2}^{F_{Nyq}/2} S_{Me}(f) \, df = \frac{q^2}{12} \cdot \frac{\pi^2}{3} \cdot \frac{1}{OSR^3}    (3.27)

Recall that OSR stands for the oversampling ratio. In the case of a single-tone sinusoidal input, the above discussion leads to the following SNR expression [95].

SNR_{dB} = 6.02 \cdot M + 1.76 - 5.17 + 30 \cdot \log_{10}(OSR)    (3.28)


Equation 3.28 shows that each doubling of the OSR reduces the in-band noise by 9 dB and thus provides an extra resolution of 1.5 bits. This is three times the resolution gain obtained with the classical oversampling converters for each OSR doubling.
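The behaviour of the first order loop of Figure 3.18 can be reproduced with a few lines of simulation. This is an illustrative single-bit model with a ±1 quantizer (names are ad hoc); it demonstrates that the local average of the bitstream tracks the input, as stated earlier.

```python
import numpy as np

def sigma_delta_first_order(x):
    """Single-bit first order sigma-delta modulator: integrate the difference
    between the input and the fed-back output, then quantize to +/-1."""
    out = np.empty_like(x)
    integ = 0.0
    for n, xn in enumerate(x):
        integ += xn - (out[n - 1] if n > 0 else 0.0)   # accumulate the error e[n]
        out[n] = 1.0 if integ >= 0.0 else -1.0         # 1-bit quantizer
    return out

bits = sigma_delta_first_order(np.full(10000, 0.3))
print(bits.mean())   # the averaged (decimated) bitstream sits near the DC input 0.3
```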

Following Equations 3.24 and 3.25, the noise shaping phenomenon can be looked upon as subtracting an estimate of the quantization noise from it [37, 95]. Hence, using a better estimate, the resolution can be further improved. This is the key to attaining a higher resolution in the case of higher order sigma-delta converters. For example, in the case of the second order sigma-delta configuration, the modulation noise becomes the second difference of the quantization error and is given by the following equation.

Me[n] = Qe[n] - 2 \cdot Qe[n-1] + Qe[n-2]    (3.29)

In this case, the SNR expression becomes the following.

SNR_{dB} = 6.02 \cdot M + 1.76 - 12.9 + 50 \cdot \log_{10}(OSR)    (3.30)

It shows that each doubling of the OSR improves the SNR by 15 dB and thus provides an extra resolution of 2.5 bits. In a similar way, the analysis can be carried out for the higher order modulators as well. A general result that relates the noise energy in BWin to the modulation order K is given below [89].

P_{Me} = \frac{q^2}{12} \cdot \frac{\pi^{2K}}{2K + 1} \cdot \frac{1}{OSR^{2K+1}}    (3.31)

Following it, the SNR expression as a function of the modulation order and the OSR is given below.

SNR_{dB} = 10 \cdot \log_{10}\!\left[ \frac{3 \cdot (2K+1)}{2 \cdot \pi^{2K}} \cdot 2^{2M} \cdot OSR^{2K+1} \right]    (3.32)
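As a cross-check, this general expression can be evaluated directly; for a first order modulator it should agree with the earlier first-order SNR result within rounding. The sketch below is illustrative only and its function name is ad hoc.

```python
import math

def sigma_delta_snr_db(k_order, m_bits, osr):
    """Equation 3.32: in-band SNR of a K-th order modulator with an M-bit quantizer."""
    ratio = (3.0 * (2 * k_order + 1) / (2.0 * math.pi ** (2 * k_order))
             * 2.0 ** (2 * m_bits) * osr ** (2 * k_order + 1))
    return 10.0 * math.log10(ratio)

# For K = 1 this matches 6.02*M + 1.76 - 5.17 + 30*log10(OSR) within rounding.
print(sigma_delta_snr_db(1, 1, 64))
```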

The above discussion shows that the reduction in the in-band quantization noise becomes more and more significant as the modulator order is increased. In fact, with an increased order the number of feedback loops increases, and thus more previous error samples are involved in the noise cancellation process. This results in a reduced overall error and an improved resolution. The typical magnitude spectrum of Me is plotted in Figure 3.19, showing the noise shaping characteristics of the sigma-delta converters.

[Figure: magnitude spectrum of the modulation noise Me(f) from 0 to FNyq/2 for 1st, 2nd and 3rd order modulators; increasing the order pushes more of the noise towards high frequencies.]

Figure 3.19. Noise spectrum of the sigma-delta converter.


The sigma-delta ADCs are used almost exclusively in applications that require high resolution output, such as high fidelity audio or industrial measurements. To achieve the required high resolution, the analog signal must be sampled at rates much higher than the Nyquist rate. In general, the complexity of the circuit design is proportional to the sampling rate [60, 88]. For this reason, sigma-delta ADCs have traditionally been used for sampling low frequency signals, i.e. signals with relatively low Nyquist rates, such as speech in mobile phones.

For a high resolution conversion, the sampling frequency must still be much greater than the signal bandwidth, though not as high as needed by the classical oversampled converters [88]. In the sigma-delta ADCs, the hardware has to operate at the oversampled rate, which results in an increased circuit complexity. Although the circuit runs at a higher rate, the final achievable conversion speed is lower, which is the penalty paid in the interest of attaining a higher resolution.

The sigma-delta converters also suffer from the tone problem, which presents itself as a pattern noise in the time domain [37]. A solution to this problem is the dithering techniques, which randomize the noise with an external additive signal [94]. However, this solution comes at the cost of an added hardware complexity and an increased noise floor.

Theoretically speaking, it is interesting to increase the modulator order in order to achieve a higher resolution. However, in practice there are difficulties in the implementation of sigma-delta modulators whose order exceeds two. Mainly, these higher order modulators can only be conditionally stable. For this reason, the circuits are much more sensitive to component values compared to the low order modulators [88]. In order to avoid this problem, as an alternative solution, several low order modulators can be cascaded to achieve a performance level comparable to that of the higher order ones [88]. However, such a configuration suffers from a residual noise, which appears due to the mismatch between the different modulator stages [93]. This loses the main attraction of the scheme, namely the insensitivity to circuit parameter variations. This error can also be reduced by achieving a high component matching, but at an increased hardware cost.

3.4 ADCs Performance

In [61, 95], the authors have evaluated the performance of recent state-of-the-art ADCs. The focus of this section is not to reproduce the details but to emphasize the main trends in ADC performance during recent years. In fact, there is an inherent trade-off between different ADC parameters such as speed, resolution, power consumption, area, etc. [37, 61, 95]. For some recent ADCs, the trade-off between the conversion speed and the achievable resolution is shown in Figure 3.20.

It shows that architectures like integrating and sigma-delta achieve high resolution conversion, i.e. more than 15 bits, while operating in the low-to-medium speed regime. At the other edge, the flash architecture achieves high conversion speed, i.e. more than 10 MHz, while offering low-to-medium resolution, i.e. 8-12 bits. All other architectures represent a compromise between speed and resolution and fall into the moderate-to-high speed and resolution ranges.


Figure 3.20. ADC architectures cover different ranges of sample rate and resolution.

In [37], Sayiner has illustrated the relationship between the resolution, the chip area and the power consumption for a range of available ADCs (cf. Figure 3.21). He has shown that as the desired resolution increases, the power consumption and/or the chip area required to achieve it also become larger. Indeed, higher resolution can be achieved in two ways. The first approach is to operate at a higher sampling rate, as in oversampling and sigma-delta ADCs, which obviously results in increased power consumption. The second approach is to perform multi-step or parallel conversion, as in subranging, pipeline and flash converters, which consequently results in a larger chip area.

[Plot: Power × Area (mW·sq.mm), 0 to 4000, versus Resolution (bits), 6 to 20]

Figure 3.21. Resolution vs. power and chip area for various ADC architectures.

In [95], Allier has studied the performance of a range of state-of-the-art ADCs in terms of the FoM (cf. Equations 3.18 and 3.19). In this regard, he has employed several ADC parameters such as supply voltage, circuit core area, average dissipated power, theoretical resolution, ENOB, ERBW, etc. For the cases where the ADC area information is not available, he has employed the


relationship 3.18 for computing the FoMs. He has shown that the FoM of the most modern ADCs lies in the range of 10^11 to 10^12. In the other case, when the area information is available, he has calculated the results by employing Equation 3.19, and has stated that the high performance ADCs have a FoM greater than 10^18.

3.5 Conclusion

The ADC is a fundamental component of all DSP systems. A large range of ADCs exists; the most common among them have been discussed. The quantization error which occurs in the classical ADCs has been described, and the ADC performance evaluation parameters have also been discussed. Pros and cons of several ADC architectures have been presented. Finally, the performance trends of the state-of-the-art ADCs have been reviewed.

For the Nyquist rate converters, each sample is quantized at the full converter resolution. Thus, the resolution of such converters is limited by the technology in which they are fabricated. This hardware resolution limitation has been addressed to a certain extent by employing oversampling approaches. However, this higher accuracy conversion comes with added circuit complexity and increased power consumption.

While providing interesting results, the classical ADCs also suffer from their time invariant nature. In fact, they are blind to the input signal's local variations and acquire it at a fixed rate, chosen to fulfill the Nyquist sampling criterion. As most real life signals are time varying in nature, the classical converters lead to useless activity and power consumption. This drawback can be overcome by employing smart ADCs, able to sense the input signal variations and to adapt their acquisition rate accordingly. The idea is promising and is currently being explored by several researchers. Some valuable contributions to this approach are presented in Chapter 4.


Part-I Chapter 4 Level Crossing Analog to Digital Conversion

Chapter 4

LEVEL CROSSING ANALOG TO DIGITAL CONVERSION

The signal driven A/D conversion process is well suited for acquiring non-stationary signals [8, 37, 95]. Signal driven acquisition means that a sample is acquired only when the measured signal fulfills predefined conditions. Often these conditions are the crossings of predetermined reference levels. Such an arrangement is known as level crossing A/D conversion [37, 38, 43, 95, 97, 98]. It adapts the acquisition rate by following the input signal's local variations and therefore reduces the data to be processed compared to the classical A/D converters. Thus, it makes an efficient use of the system resources such as memory, power, transmission bandwidth, etc. [44-51, 145, 146].
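As a toy illustration of the principle (a sketch for this text, not taken from the cited works; the test signal, level grid and interpolation are assumptions for the example), a level crossing sampler can be simulated in a few lines. Samples are produced only where the signal is active, so a bursty input yields far fewer samples than uniform sampling would.

```python
import numpy as np

def level_crossing_sample(x, t, levels):
    """Emit a (time, level) pair whenever the signal crosses one of the
    reference levels; the crossing instant is linearly interpolated."""
    samples = []
    for i in range(1, len(x)):
        lo, hi = sorted((x[i - 1], x[i]))
        for L in levels:
            if lo < L <= hi:
                tc = t[i - 1] + (t[i] - t[i - 1]) * (L - x[i - 1]) / (x[i] - x[i - 1])
                samples.append((tc, L))
    return samples

t = np.linspace(0.0, 1.0, 20_000)
# bursty test input: a 10 Hz tone in the first half, nearly flat afterwards
x = np.where(t < 0.5, np.sin(2 * np.pi * 10 * t), 0.01)
levels = np.linspace(-1.0, 1.0, 15)      # 2^4 - 1 levels, i.e. M = 4 bits
samples = level_crossing_sample(x, t, levels)
```

In this sketch every sample falls in the active first half of the record, while the flat tail generates no activity at all.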

Mobile applications like distributed sensor networks, hand-held electronics, human body implants, etc. have tight resource budgets and necessitate smart systems [97]. In this context, several LCSS based efficient solutions have been proposed [26, 29-31, 33-53]. The focus of this chapter is to briefly describe the main features of the LCADCs (LCSS based A/D converters). The theory associated with the LCADCs is quite different from that of the classical converters; its main concepts are therefore reviewed, and the LCADC SNR (Signal to Noise Ratio) expression is derived. Asynchronous design is a natural choice for implementing the LCADCs [43, 97, 98]; in this context, the asynchronous circuit principle is briefly described. Some successful LCADC realizations are also studied, and a comparison of the LCADCs with the oversampling converters is made.

4.1 The Level Crossing A/D Conversion

Almost all real life signals are time varying in nature. The spectral contents of these signals vary

with time, which is a direct consequence of the signal generation process [42].

The classical ADCs are based on the Nyquist architectures. They do not exploit the input signal variations. Indeed, they sample the signal at a fixed rate without taking into account the intrinsic signal nature. Moreover, they are highly constrained by the Shannon theory, especially in the case of low activity sporadic signals like electrocardiogram, phonocardiogram and seismic signals. This leads to capturing and processing a large number of samples carrying no relevant information, and to a useless increase of the system activity and power consumption.

The power efficiency can be enhanced by smartly adapting the system processing load according to the signal's local variations [39, 40, 44-52]. To this end, threshold crossing sampling is a good candidate [8, 21-26]. Among the various threshold crossing sampling schemes, the LCSS is the one most often employed for digitizing low activity sporadic signals [8, 26, 29, 37, 44-52]. It adapts the sampling rate by following the input signal's local characteristics [39, 40]. Hence, it


drastically reduces the activity of the post processing chain, because it only captures the relevant information [44-53]. Another interest of the LCSS is the simplicity of the electronic circuits needed for its implementation [8, 37, 95]; in its simplest realization, the threshold crossing detection requires only one comparator [8]. Inspired by these interesting features of the LCSS, LCADCs have been developed [37, 38, 43, 95, 97, 98]. The main features of various LCADC implementations are described in the following sections.

4.2 Principle of The LCADC

During a classical A/D conversion process, ideally the sampling instants are exactly known, whereas the sample amplitudes are quantized at the ADC resolution (cf. Chapter 3). This error is characterized by the SNR (Signal to Noise Ratio) [60, 61, 69]. The theoretical SNR of an ideal ADC is expressed by Equation 3.10.

The A/D conversion process which occurs in the LCADCs is dual in nature. Ideally, in this case, the sample amplitudes are exactly known, while the sampling instants are quantized at the timer resolution [37, 38, 43, 95, 97, 98].

In practice, a timer is employed to record the sampling instants. The time quantization occurs due to the timer's finite resolution. If T_timer is the timer step, then the time quantization error Δt can take any value in 0 ≤ Δt < T_timer [13, 95]. It can be formally expressed as follows.

Δt ∈ [0; T_timer)    (4.1)

According to [37, 95], knowing the instantaneous slope of the analog input signal, the effect of Δt can be translated into the amplitude error Δv by using the following relation.

Δv = (dx(t)/dt) · Δt    (4.2)

It shows that the time quantization also introduces an error into the sample amplitude. Assuming that Δt is uncorrelated with the input signal, it can be modeled as a white noise. Let (tn, xn) be the time-amplitude pair of the nth level crossing sample. If Δtn is the time quantization error occurring for tn, it can randomly take any value between 0 and T_timer (cf. Expression 4.1). Thus, the quantized version of tn is obtained by employing Equation 4.3.

tqn = tn + Δtn    (4.3)

The difference between tn and its quantized version tqn is clear from Figure 4.1. If Δvn is the corresponding amplitude error, then the erroneous sample amplitude is computed by employing the following expression.

xqn = xn + Δvn    (4.4)
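The error model above can be checked numerically. The sketch below (with an assumed timer step and an assumed local slope, chosen only for illustration) draws uniform timing errors Δt and confirms that the mean of Δt² approaches T_timer²/3, the uniform-distribution noise power used later in the derivation.

```python
import numpy as np

rng = np.random.default_rng(0)
T_timer = 1e-6                           # assumed timer step (s)
dt = rng.uniform(0.0, T_timer, 200_000)  # Eq 4.1: time quantization errors
P_dt = np.mean(dt ** 2)                  # measured noise power of the dt errors
P_dt_uniform = T_timer ** 2 / 3          # uniform-distribution prediction

# Eq 4.2: each timing error maps to an amplitude error via the local slope;
# here the slope of a unit-amplitude 1 kHz sinusoid at a zero crossing is assumed
slope = 2 * np.pi * 1e3
dv = slope * dt                          # amplitude errors added to xn (Eq 4.4)
```

With 200 000 draws, the measured power P_dt agrees with T_timer²/3 to within a few tenths of a percent.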


[Plot: x(t) versus t, showing two level crossing samples (t_{n-1}, x_{n-1}) and (t_n, x_n), their quantized instants tq_{n-1} and tq_n on the T_Timer grid (T_Timer, 2·T_Timer, ..., m·T_Timer), and the resulting amplitude errors Δv_{n-1} and Δv_n]

Figure 4.1. Time quantization error of the LCSS.

If we assume that in Equation 4.2, dx(t)/dt and Δt are two independent random variables, the quantization noise power can be computed as follows.

P(Δv) = P(dx(t)/dt) · P(Δt)    (4.5)

Again assuming that dx(t)/dt has zero mean and that Δt is uniformly distributed in the interval [0; T_timer] results in the following.

P(Δt) = (1/T_timer) · ∫₀^{T_timer} t² dt    (4.6)

A simple solution of the above integral results in:

P(Δt) = T_timer² / 3

Following this, Equation 4.5 can be simplified as follows.

P(Δv) = P(dx(t)/dt) · T_timer² / 3    (4.7)

By employing the relation given by Equation 3.9, the SNR in this case can be computed as follows [37, 38, 43, 95].

SNR_dB = 10·log₁₀(3 · Px / Px′) − 20·log₁₀(T_timer)    (4.8)


Here, Px and Px′ are the powers of x(t) and of its derivative, respectively. It shows that in this case the SNR does not depend on the number of quantization levels, but on the characteristics of the input signal x(t) and on the timer period T_timer. The first term of Equation 4.8 can be calculated from the input signal's statistical properties. For a specific signal it is therefore fixed, and the SNR depends only on T_timer. Thus, for a chosen LCADC implementation (a fixed number of quantization levels), the SNR can be tuned externally by controlling T_timer. For example, the SNR of an ideal LCADC improves by 6 dB for each halving of T_timer, which corresponds to a 1-bit increment of the effective resolution [37, 95].
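To make the tuning concrete, here is a small sketch (the 1 kHz unit-amplitude sinusoid and the timer periods are assumed values for illustration) evaluating Equation 4.8 and showing that halving T_timer adds about 6.02 dB.

```python
import math

def lcadc_snr_db(P_x, P_xp, T_timer):
    """Theoretical LCADC SNR of Eq 4.8: it depends on the signal statistics
    (P_x and the derivative power P_x') and on the timer period, not on the
    number of quantization levels."""
    return 10 * math.log10(3 * P_x / P_xp) - 20 * math.log10(T_timer)

# unit-amplitude 1 kHz sinusoid: P_x = A^2/2, P_x' = (2*pi*f)^2 * A^2/2
f_sig = 1e3
P_x = 0.5
P_xp = (2 * math.pi * f_sig) ** 2 * 0.5

snr1 = lcadc_snr_db(P_x, P_xp, 1e-6)    # T_timer = 1 microsecond
snr2 = lcadc_snr_db(P_x, P_xp, 0.5e-6)  # halved timer period
gain = snr2 - snr1                      # expected: 20*log10(2) = 6.02 dB
```

The gain term comes entirely from the −20·log₁₀(T_timer) part of the formula, i.e. one effective bit per halving of the timer period.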

Theoretically, the LCADC SNR can be improved as much as required by reducing T_timer. In practice, however, there is a limit imposed by the analog block accuracy [37, 95]. Indeed, the analog blocks determine the precision of the threshold levels. If these levels are known only with an uncertainty Δa, then this error must be added to the quantization noise in Equation 4.5, which degrades the SNR [37, 95].

In [43], Allier has summarized the main characteristics of both classical and level crossing A/D

conversion processes, which are quoted in Table 4.1.

Phenomenon            Classical A/D Conversion    Level Crossing A/D Conversion
Conversion trigger    Clock                       Level crossing
Amplitude             Quantized                   Exact value
Time                  Exact value                 Quantized
SNR dependency        Number of bits              Timer period
Converter output      Sample amplitude            Sample time-amplitude pair

Table 4.1. Characteristics of the classical and the level crossing A/D conversion.

4.2.1 The LCADC SNR For Different Signals

Equation 4.8 can be employed to calculate the theoretical SNR of the LCADC for a specific

signal. The cases of pure sinusoid, speech and audio signals are discussed in the following

subsections.

4.2.1.1 Monotone Sinusoid

Let x(t) be a pure sinusoid of amplitude A and frequency fsig, expressed by Equation 4.9.

x(t) = A·sin(2π·fsig·t)    (4.9)

In this case, the signal derivative is given as follows.


x′(t) = dx(t)/dt = 2π·fsig·A·cos(2π·fsig·t)    (4.10)

Hence, Px and Px′ are given as follows.

Px = A² / 2    (4.11)

Px′ = 4π² · fsig² · A² / 2    (4.12)

Putting these values of Px and Px′ into Equation 4.8 results in the following relation.

SNR_dB = 10·log₁₀( 3 / (4π² · fsig² · T_timer²) )    (4.13)

In simplified form, Equation 4.13 can be written as follows.

SNR_dB = −11.19 − 20·log₁₀(fsig · T_timer)    (4.14)

Equation 4.14 shows that for a monotone sinusoid of given amplitude, the SNR is related to the ratio fsig / F_timer, where F_timer = 1 / T_timer is the timer frequency. Doubling F_timer halves the quantization noise per sample, which is equivalent to a one-bit increase in the effective resolution [37, 95]. Note that an increase in F_timer does not correspond to an increase in the LCADC sampling frequency, but only to an increase in its timer resolution. Normally, the LCADC average sampling frequency remains much lower than the chosen F_timer [37, 38, 43, 95]. Hence, most of the LCADC functional blocks operate at a lower frequency and only a few of them need to be designed to operate at F_timer.

Let us compare this with the classical oversampling techniques, where doubling the oversampling ratio increases the resolution by only half a bit (valid under the white noise assumption [60, 61]). Thus, for a fixed number of quantization levels, obtaining an extra bit of resolution requires quadrupling the sampling rate. This fourfold requirement on the sampling frequency is due to the lack of symmetry between the amplitude and the time resolution in the classical oversampling techniques [99].
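The contrast can be stated numerically; the back-of-the-envelope sketch below uses only the standard 6.02 dB-per-bit rule (nothing here is specific to a particular converter design).

```python
import math

# Resolution gained per doubling, under the white noise assumption.
# LCADC: halving T_timer (doubling the timer rate) gives 20*log10(2) = 6.02 dB,
# from the -20*log10(T_timer) term of Eq 4.8, i.e. one full effective bit.
lcadc_gain_db = 20 * math.log10(2)
# Classical oversampling: doubling the oversampling ratio gives
# 10*log10(2) = 3.01 dB, i.e. half a bit; one extra bit costs a 4x rate increase.
oversampling_gain_db = 10 * math.log10(2)

db_per_bit = 20 * math.log10(2)               # 6.02 dB per effective bit
bits_lcadc = lcadc_gain_db / db_per_bit       # -> 1.0 bit per doubling
bits_oversampling = oversampling_gain_db / db_per_bit  # -> 0.5 bit per doubling
```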

4.2.1.2 Signal with Constant Spectral Density

Let x(t) be an input analog signal with a constant spectral density within the signal band [-fmax/2;

fmax/2]. Then in this case Px and Px’ can be calculated by employing the following Equations.

0 0-. )) 2

2

max

22max

max.)(.)(

f

fx ffXdffXP (4.15)


Px′ = ∫_{−fmax/2}^{+fmax/2} |TF(dx(t)/dt)|² df = ∫_{−fmax/2}^{+fmax/2} |i·2π·f · TF(x(t))|² df    (4.16)

Here, TF denotes the Fourier transform. In simplified form, Equation 4.16 can be written as follows.

Px′ = (π²/3) · fmax² · Px    (4.17)

By employing Equations 4.15 and 4.17, the SNR relation in this case becomes:

SNR_dB = 10·log₁₀( 3 / (4π² · (fmax / (2·√3))² · T_timer²) )    (4.18)

In the case of the LCADC, for fixed threshold levels and F_timer, the system resolution varies as a function of the input signal frequency. For the lower frequencies within the signal bandwidth the resolution is higher, and vice versa (cf. Equation 4.14). The question that arises here is how to specify the system resolution for the entire signal band. The answer can be found by inspecting the relationship between Equations 4.13 and 4.18. It is clear that Equation 4.18 is equivalent to Equation 4.13 for fsig = fmax/(2·√3). In other words, the system resolution for a signal with given fmax and a uniform spectral density can be specified by employing a sinusoid of frequency fmax/(2·√3). A similar approach can be extended to other signals with a given bandwidth and spectral distribution.
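The equivalence can be verified numerically; the sketch below (with hypothetical fmax and T_timer values) evaluates Equations 4.13 and 4.18 and checks that they coincide at fsig = fmax/(2·√3).

```python
import math

def snr_sine_db(f_sig, T_timer):
    """Eq 4.13: LCADC SNR for a pure sinusoid of frequency f_sig."""
    return 10 * math.log10(3 / (4 * math.pi**2 * f_sig**2 * T_timer**2))

def snr_flat_db(f_max, T_timer):
    """Eq 4.18: LCADC SNR for a flat-spectrum signal of bandwidth f_max,
    obtained by inserting P_x' = (pi^2/3) * f_max^2 * P_x (Eq 4.17) into Eq 4.8."""
    P_xp_over_P_x = (math.pi**2 / 3) * f_max**2
    return 10 * math.log10(3 / P_xp_over_P_x) - 20 * math.log10(T_timer)

f_max, T_timer = 4e3, 1e-6              # assumed example values
f_eq = f_max / (2 * math.sqrt(3))       # equivalent sinusoid frequency
diff = abs(snr_sine_db(f_eq, T_timer) - snr_flat_db(f_max, T_timer))
```

Both routes reduce to 10·log₁₀(9/(π²·fmax²·T_timer²)), so the difference is only floating-point rounding.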

4.2.1.3 Speech Signal

Typically, the speech spectrum is a combination of a flat spectrum up to 500 Hz and a spectrum with a negative slope of 10 dB/octave between 500 Hz and 4 kHz (cf. Figure 4.2). If Px is the signal power, then the power of the signal derivative can be found as follows [37].

Px′ = 1.28·10⁷ · Px    (4.19)

This results in the following SNR relationship.

SNR_dB = 10·log₁₀( 3 / (1.28·10⁷ · T_timer²) ) = −66.3 − 20·log₁₀(T_timer)    (4.20)

Although the speech signal has a bandwidth of up to 4 kHz, most of its energy lies at the lower frequencies. Since the LCADC represents the lower frequencies with a higher resolution, such converters are particularly advantageous for speech-like signals.


[Plot: speech spectrum |X(f)| versus f, flat up to 500 Hz, then rolling off at −10 dB/oct down to −28.5 dB at 4 kHz]

Figure 4.2. Speech Spectrum.

4.2.1.4 Audio Signal

The audio signal spectrum is the combination of a flat spectrum up to 500 Hz and a spectrum with a negative slope of 10 dB/octave between 500 Hz and 20 kHz (cf. Figure 4.3). If Px is the signal power, then the power of the signal derivative can be found as follows [37].

Px′ = 1.69·10⁷ · Px    (4.21)

This results in the following SNR relationship.

SNR_dB = 10·log₁₀( 3 / (1.69·10⁷ · T_timer²) ) = −67.5 − 20·log₁₀(T_timer)    (4.22)

Although the audio signal has a much larger bandwidth than the speech signal, for a given signal power and F_timer there is not a big difference between the SNR of an audio signal and that of a speech signal (cf. Equations 4.20 and 4.22). This is because, similarly to speech, most of the audio signal energy also lies at the lower frequencies. Since the lower frequencies are represented with a higher resolution in the LCADC, this results in an overall quantization noise reduction for audio signals.

[Plot: audio spectrum |X(f)| versus f, flat up to 500 Hz, then rolling off at −10 dB/oct down to −28.6 dB at 20 kHz]

Figure 4.3. Audio Spectrum.


4.2.2 The Sampling Criterion and The Tracking Condition

The LCADCs deliver non-equidistantly spaced samples [37, 38, 43, 95, 97, 98]. Conditions for proper reconstruction of non-uniformly sampled signals have been discussed in Chapter 2 (cf. Section 2.7). In [5, 27, 28], the authors have shown that a bandlimited signal can be ideally reconstructed from its non-uniformly spaced samples provided there is a sufficient number of samples, i.e. the average sampling rate satisfies the Nyquist criterion. In the case of the LCADC, the number of samples is directly influenced by its resolution M [37, 38, 43, 95]. For an M-bit LCADC, the average sampling frequency for an input analog signal can be calculated by exploiting its statistical characteristics. An appropriate value of M can then be chosen in order to respect the reconstruction criterion [5, 27, 28].

Let τ be the LCADC processing delay for one sample. For proper signal capture, x(t) must satisfy the tracking condition [37, 95], given by Expression 4.23.

|dx(t)/dt| ≤ q / τ    (4.23)

Here, q is the LCADC quantum. An M-bit LCADC has 2^M − 1 threshold/quantization levels, uniformly distributed over the input signal amplitude dynamics. In this case, q is defined by Equation 4.24. Note that here M is the LCADC hardware resolution, i.e. the number of bits physically implemented in the circuit. Unlike the classical ADC, the LCADC effective resolution is independent of M; it depends instead on the characteristics of x(t) and on F_timer (cf. Equation 4.8).

q = 2·Vmax / (2^M − 1)    (4.24)

Here, 2Vmax represents the LCADC amplitude dynamic. The left-hand side of Expression 4.23 is the slope of x(t). The upper bound on the slope of a band limited signal is given by Bernstein's inequality [100], Expression 4.25.

|dx(t)/dt| ≤ 2π · fmax · A_x(t)    (4.25)

In Expression 4.25, fmax is the bandwidth and A_x(t) is the amplitude dynamic of x(t). Thus, in order to respect the reconstruction criterion [27, 28] and the tracking condition [37, 95], a band pass filter with pass-band [fmin; fmax] is employed at the LCADC input. The process is illustrated in Figure 4.4.

[Diagram: analog signal y(t) → B.P.F [Fmin; Fmax] → filtered analog signal x(t) → LCADC → non-uniformly sampled signal (xn, tn)]

Figure 4.4. Band limiting the input signal to ensure the tracking condition and the sampling criterion.
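Combining the tracking condition (4.23), the quantum (4.24) and Bernstein's bound (4.25) gives a quick sizing rule. The sketch below (with illustrative M, Vmax and τ values, not a specific design) computes the largest input bandwidth the converter can track.

```python
import math

# Assumed illustrative values (not from a specific design):
M = 4            # hardware resolution (bits)
V_max = 1.0      # half the amplitude dynamic: input spans [-V_max, +V_max]
tau = 45e-9      # per-sample processing delay (s)

q = 2 * V_max / (2**M - 1)   # Eq 4.24: quantum
A = V_max                    # amplitude dynamic of x(t), taken at full scale
# Combining the tracking condition (4.23) with Bernstein's bound (4.25):
#   2*pi*f_max*A <= q/tau   =>   f_max <= q / (2*pi*A*tau)
f_max_limit = q / (2 * math.pi * A * tau)
```

With these assumed numbers, q = 2/15 V and the tracking limit lands near 470 kHz; a sharper resolution (larger M) or a longer loop delay τ shrinks the trackable bandwidth accordingly.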


4.3 The Level Crossing A/D Conversion Realization

Different realizations of LCADCs have been proposed in the literature [37, 38, 43, 95, 97, 98]. The main aspects of these implementations are briefly reviewed in the following subsections. Depending upon the circuit design, these implementations fall mainly into two categories: the synchronous and the asynchronous approaches. The synchronous circuit principle is mature and common; most existing digital circuits are designed and fabricated in the synchronous way. Being a less usual concept, the principle of asynchronous circuits is briefly described in Annex-I.

4.3.1 Synchronous Level Crossing A/D Conversion

In [37, 38], Sayiner has presented a synchronous realization of the level crossing A/D conversion. The basic idea behind this approach is to use a simple analog circuit to acquire the input signal at a high rate, followed by a more complex digital signal processing block to generate a high resolution output.

The block diagram of the proposed A/D conversion chain is shown in Figure 4.5. It consists of a front-end level crossing ADC, which compares the band limited input analog signal with predefined threshold levels in order to detect the level crossings. This comparison is done at the timer frequency F_timer. Every time a level crossing is detected, a new sample is generated. The non-uniformly sampled signal obtained at the converter output is fed to an interpolator, which generates a uniform sample sequence at a rate Fout. Finally, the uniform sample sequence is decimated to the desired conversion rate, chosen to be the Nyquist rate [37, 38].

[Diagram: band limited analog signal → Level Crossing ADC (clocked at F_Timer) → non-uniformly sampled signal → Interpolator → uniformly sampled signal at Fout → Decimator → decimated signal at F_Nyq]

Figure 4.5. Level crossing ADC block diagram.

The accuracy of the level crossing ADC depends on the threshold levels accuracy and on the time quantization step. The analog comparators used for the level crossing detection define the threshold levels accuracy. The accuracy of the time stamps is determined by F_timer: the higher F_timer, the more precise the time measurements, and vice versa. F_timer is limited by the speed of the analog circuitry and by power consumption considerations.

In his proposed system, Sayiner employed a 2nd-order polynomial interpolator. He argued that, for an appropriate number of threshold levels and timer resolution, the 2nd-order interpolator is enough to achieve results with an acceptable accuracy. Fout is the rate at which the interpolator delivers the uniformly sampled data. Fout is limited by the digital processing block as well as by accuracy considerations, which here point towards the employed interpolator order. If a complex interpolation technique is employed to achieve higher accuracy, then Fout is likely to decrease due to the increasing number of computations required. As Fout decreases, the oversampling ratio decreases, reducing the improvement in the in-band error power (cf. Equation 3.22). It is possible to employ a complex interpolation technique without compromising speed, at the expense of increased chip area and power consumption. An alternative solution is to employ poly-phase structures so that the samples that would be eliminated during the decimation process are not generated at the interpolator output, thus directly producing the interpolated signal at F_Nyq.


The decimator employed in this case is similar to the one used in the classical oversampling or sigma-delta converters (cf. Section 3.3.2). It is well known that post decimation increases the resolution by filtering out the noise outside the signal band. The gain in resolution is directly proportional to the oversampling ratio, which is the ratio between Fout and F_Nyq.

Sayiner studied the performance of the proposed scheme for the speech application. As mentioned earlier, in LCADCs the resolution varies for different frequencies within the signal band. For a constant spectral density signal, the system resolution can be characterized by employing a monotone sinusoid (cf. Section 4.2.1.2). Sayiner made the same assumption for the speech signal and employed a single tone of 568 Hz for this purpose. He designed the system targeting an overall SNR of 86 dB, and showed that for F_timer = 4.65 MHz, OSR = 16, high-precision analog blocks (threshold levels known with an equivalent resolution of 14 bits) and a 2nd-order polynomial interpolator, it is possible to achieve an SNR of 83.5 dB, which is quite close to the targeted value.

Targeting the speech application, Sayiner proposed a LCADC designed in the synchronous way. He employed a single comparator, which detects the input signal changes that are at least as large as one quantization step q. The architecture is shown in Figure 4.6.

[Diagram: the input signal feeds a clocked comparator whose reference is switched between Vr, Vr − q/2 and Vr + q/2; the comparator drives the level crossing detection logic, which updates a 6-bit level counter (fed back through a DAC to the comparator input) and a 10-bit time counter; both counters share the clock and a reset R]

Figure 4.6. Level crossing ADC architecture.

The comparator operates at F_timer. The DAC (Digital to Analog Converter) output is continuously adjusted so that the input signal to the comparator always stays in the range [Vr − q/2; Vr + q/2], where Vr is the center of the converter dynamic range. The reference voltage of the comparator is thus switched between Vr, Vr − q/2 and Vr + q/2, enabling the comparator and the detection logic to identify a crossing of any of these three levels. When the input signal crosses the level Vr + q/2 upwards, the DAC output is decreased by q, forcing the comparator input to drop below Vr + q/2. Similarly, if it crosses Vr − q/2 downwards, the DAC output is increased by q, bringing the comparator input above Vr − q/2. Therefore, the level counter register holds the correct input signal representation at any given time. The time counter also runs at F_timer; it stamps the level crossing time instants.
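A behavioural model of this tracking loop can verify that the comparator input stays within one quantum band. The sketch below is a simplified abstraction, not the actual circuit: the reference is centred on Vr = 0, the DAC feedback is modelled as a running estimate, and the 3 Hz test tone, grid density and 6-bit quantum are assumed values.

```python
import numpy as np

q = 2.0 / (2**6 - 1)            # 6-bit quantum over an assumed [-1, 1] V dynamic
t = np.linspace(0.0, 1.0, 50_000)  # dense grid standing in for the F_timer clock
x = np.sin(2 * np.pi * 3 * t)   # slow band limited input, easy to track

dac = 0.0                       # DAC feedback estimate (centred on Vr = 0 here)
level = 0                       # level counter content
crossings = []                  # (instant, level) pairs: the converter output
residues = []                   # comparator input magnitude after each tick
for ti, xi in zip(t, x):
    if xi - dac > q / 2:        # upward crossing of Vr + q/2: estimate rises by q
        dac += q
        level += 1
        crossings.append((ti, level))
    elif xi - dac < -q / 2:     # downward crossing of Vr - q/2: estimate drops by q
        dac -= q
        level -= 1
        crossings.append((ti, level))
    residues.append(abs(xi - dac))
```

Because the signal moves by much less than q between clock ticks, a single correction per tick keeps the residue within half a quantum, which is exactly the condition the feedback DAC is there to enforce.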

The interpolator and the decimator were not built into the proposed chip, but were carried out

externally. The electrical characteristics of the proposed level crossing ADC implementation are

summarized in Table 4.2.


Synchronous hardware resolution     10 bits
Level crossing ADC resolution       6 bits
Timer                               10 bits, 2.32 MHz
Technology                          0.9 µm
Power supply                        Vdd = 5 V
Loop delay                          45 ns
Effective Number of Bits (ENOB)     8.8 bits
Power consumption                   38 mW
DNL                                 −0.4 / +0.5 LSB
INL                                 −2.2 / +1.2 LSB
SNR                                 54.8 dB
fsig                                568 Hz

Table 4.2. Electrical Characteristics of the level crossing ADC.

4.3.2 Asynchronous Level Crossing A/D Conversion

Asynchronous design is the natural choice for implementing the level crossing A/D conversion, as it is well adapted to its inherent signal driven properties [43, 97, 98, 101, 102, 110]. In recent years, several asynchronous level crossing A/D converters have been designed. Some successful implementations are discussed in the following subsections [43, 97, 98].

4.3.2.1 AADC (Asynchronous Analog to Digital Converter)

In [43], Allier has presented an interesting asynchronous design of the level crossing A/D conversion, named the AADC. The AADC architecture is shown in Figure 4.7.

[Figure 4.7 block diagram: a difference quantifier compares Vin with Vref and issues inc/dec commands to an up/down counter; a DAC converts the counter output Vnum = {xn} back into Vref; a timer provides dtn; the stages communicate through local request (Req.) / acknowledgement (Acq.) handshake signals.]

Figure 4.7. The AADC architecture.


It is composed of a difference quantifier, a state variable modeling the inner state (an up/down binary counter) and a DAC which processes the digital signal to make it compatible with the input voltage Vin. The converter resolution M and its dynamic range 2Vmax are known. They set the quantization step q (cf. Equation 4.24).

The digital output Vnum is converted by the DAC into Vref, which is compatible with Vin. The analog comparator compares Vref with Vin. If the difference between them is greater than ½.q, the state variable is incremented (inc=1); if it is lower than -½.q, it is decremented (dec=1). In all other cases, nothing happens (inc=dec=0). The output signal Vnum remains constant, showing that there is no activity at the input.

The converter output is composed of time-amplitude pairs (xn, dtn), where xn is the digital value of the sample amplitude. If N = 2^M - 1 is the number of threshold levels within 2Vmax, then Vnum = {xn} such that n ≤ N. dtn is the time elapsed since the previous captured sample xn-1, given by the timer.

The proposed design is asynchronous: the key point of this converter is that information transfer is locally managed with bidirectional control signalling (cf. Figure 4.7). Each data signal is associated with two control signals: a request (Req) and an acknowledgement (Acq). On the availability of valid data, the first stage sends a request (Req.=1) to the second stage. The second stage sends an acknowledgment (Acq.=1) to the first stage when the data is processed, indicating that it is ready to process the next data.

Allier has presented a complete design flow to calculate the different AADC parameters for a given application. As it is not a conventional approach, it is quoted here in order to keep the document self-contained. The design flow inputs are the desired ENOB and the analog signal properties: the Power Spectral Density (PSD), the bandwidth fmax, the amplitude dynamics Δx(t) and the probability density p(x). These allow a proper choice of the AADC parameters: the resolution M, the loop delay τ, the timer period Ttimer and the inner quantum qin (adopted for the design of all the analog blocks).

In the case of LCADCs the average sampling frequency varies as a function of M (cf. Section 4.2.2). Thus, M is chosen in order to ensure the proper reconstruction of the non-uniformly sampled signal obtained at the AADC output [27, 28]. According to the tracking condition (cf. Equation 4.23) and the Bernstein inequality (cf. Equation 4.25) the loop delay τ must verify τ ≤ τmax. Here, τmax is defined as follows.

τmax = 1 / (2.π.fmax.(2^M - 1)) (4.26)

There exists a tradeoff between τ and the chosen current or voltage quantum qin implemented for the analog blocks. When τmax is fixed, qin can be tuned in order to make τ reach its upper bound. Moreover, qin must ensure that all the analog blocks have the desired ENOB precision. Then, Monte-Carlo simulations can be used to study the influence of the process variations on the implemented circuit and to determine the lowest value of qin respecting the fixed maximum loop delay τmax.


Equation 4.8 shows that the LCADC SNR can be controlled by varying Ttimer. For a chosen Ttimer the obtained SNR can be employed in the following relation to compute the ENOB. Hence, for a desired ENOB the appropriate value of Ttimer can be calculated.

ENOB = (SNRdB - 1.76) / 6.02 (4.27)
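Equation 4.27 can be cross-checked against the figures of Table 4.2 (a minimal sketch; the function names are assumptions):

```python
def enob_from_snr(snr_db):
    # Equation 4.27: ENOB = (SNR_dB - 1.76) / 6.02
    return (snr_db - 1.76) / 6.02

def snr_from_enob(enob):
    # Classical inverse relation: SNR_dB = 6.02 * ENOB + 1.76
    return 6.02 * enob + 1.76

# Table 4.2 is self-consistent: SNR = 54.8 dB gives ENOB ≈ 8.8 bits.
enob = enob_from_snr(54.8)
```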

The design flow methodology is summed up in Figure 4.8.

[Figure 4.8 flow diagram: from the analog signal properties (PSD, fmax, Δx(t), p(x)) and the targeted ENOB, the reconstruction condition and analog considerations fix M; the Bernstein theorem |dx(t)/dt| ≤ 2.π.fmax.Δx(t) and the tracking condition |dx(t)/dt| ≤ q/τ fix τmax and qin; the classical SNR relation SNRdB = 6.02.ENOB + 1.76 together with the LCADC SNR relation fixes Ttimer.]

Figure 4.8. The AADC design flow for a targeted application.

Allier has employed the AADC for acquiring a speech signal. x(t) is band limited up to 4 kHz. Its amplitude dynamics Δx(t) is set to match 5% to 95% of the AADC amplitude range 2Vmax. The SNR relation for the speech signal has already been discussed (cf. Equation 4.20). Allier has argued that for M = 4, the average sampling frequency is 8.2 kHz [43]. Hence, the reconstruction criterion is fulfilled (cf. Section 4.2.2). Therefore, M = 4 is chosen. The tracking condition (cf. Equation 4.23) gives the maximum loop delay: τmax = 2.65 µs. An ENOB higher than 8 is targeted. The inner quantum qin is a current quantum, which is set to qin = 3.2 µA. An 18-bit timer running at up to 36 MHz is employed. The electrical characteristics of the implemented circuit for these chosen parameters are summarized in Table 4.3.
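These numbers can be verified against Equation 4.26 (a quick sketch; the function name is an assumption):

```python
import math

def max_loop_delay(f_max, M):
    # Equation 4.26: tau_max = 1 / (2*pi*f_max*(2**M - 1))
    return 1.0 / (2.0 * math.pi * f_max * (2 ** M - 1))

# Speech example: f_max = 4 kHz and M = 4 give tau_max ≈ 2.65 us,
# matching the value quoted in the text.
tau = max_loop_delay(4e3, 4)
```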

For these electrical characteristics, Allier has calculated the AADC FoM (figure of merit) by using Equation 3.19. He has shown that for the studied case, the AADC FoM is improved by one order of magnitude compared to high performance classical ADCs. It is shown that in this design, the exploitation of the signal statistical characteristics leads to a low value of M, while achieving a higher ENOB. The circuit hardware complexity is much lower than in the classical case. Hence,


together with the silicon area reduction, such an LCADC can significantly decrease the electrical activity, the power consumption and the electromagnetic emissions [43].

Hardware resolution                     M = 4 bits
Timer                                   18 bits, Ftimer up to 36 MHz
ENOB                                    8 to 12 bits
Technology                              0.18 µm CMOS
Power supply                            Vdd = 1.8 V
Current quantum                         qin = 3.2 µA
Loop delay                              τ = 93 ns
AADC bandwidth                          fmax = 114 kHz
AADC total static power consumption     Pmin = 0.898 mW, Pmax = 1.603 mW
AADC total dynamic power consumption    Pmean = 1.716 mW @ AADC maximum speed
Timer consumption                       Ptimer = 0.015 mW (for ENOB = 10 bits)
Analog area                             Sanalog = 220 µm × 68 µm
Digital area                            Sdigital = 160 µm × 80 µm

Table 4.3. Electrical Characteristics of the AADC.

4.3.2.2 LCF-ADC (Level Crossing Flash A/D Converter)

Akopyan has presented an asynchronous flash-type design of the level crossing A/D conversion, named the LCF-ADC [97]. The architecture of a two-bit LCF-ADC is presented in Figure 4.9.

[Figure 4.9 block diagram: a capacitive divider (capacitors c and c/2 between Vdd and Vss) sets the reference levels for V_in; each level drives a comparator followed by a trigger and statisizer and an asynchronous processing element; a token scheme enforces mutual exclusion, and a MERGE block combines the exclusive outputs into a single channel.]

Figure 4.9. A two-bit LCF-ADC architecture.

A capacitive divider is used to set the reference voltages. The analog comparators detect whether

the input signal is above or below the set threshold. Once a level is crossed by the input signal,


the comparator changes its output. If the signal was above and crossed downwards, the output of the comparator changes from high to low, and vice versa. As soon as the digital trigger identifies the change in the comparator output, it checks the states of the previous and next asynchronous processing elements; if all the variables indicate that the crossing has occurred and no conditions were violated, the trigger sends a request to the asynchronous processing element. As soon as the request becomes active, the asynchronous element checks whether it has permission to output the value corresponding to the crossing, and as soon as the permission is granted, it outputs a 0 or a 1. If the level was crossed by the input signal from below, the value 1 is output; if the input signal crossed the level from above, the value 0 is output. Mutual exclusion between asynchronous elements is maintained by using a token-based scheme, implemented in the asynchronous digital part. Each level crossing direction is transmitted by the asynchronous element to the output merge, which combines all its exclusive inputs into a single output channel.

During periods of inactivity in the input signal, none of the levels are crossed and thus the outputs of the analog comparators remain constant. The triggers discussed above do not send any requests to the asynchronous processing elements. The asynchronous elements stay in the idle mode until the level corresponding to the element is crossed. The only power consumed by the asynchronous elements is the leakage power. In order to minimize the power consumption of the analog comparators, the following approach is employed. Since the input signal is always between two levels, only the comparators corresponding to those two levels are enabled. Thus at any moment, power is supplied to only two comparators. This is very different from a conventional flash ADC, where all the comparators must be turned on during the A/D conversion process.

In the case of the LCF-ADC the upper bound on the input signal bandwidth is set by the maximum throughput of the asynchronous circuitry Omax [99]; the relation is given below.

BW ≤ Omax / (No. of level crossings per cycle) (4.29)

Akopyan has simulated a 4-bit LCF-ADC in a TSMC 0.18 µm process. The transistors were sized for minimum power consumption. The system performance is studied by varying the signal bandwidth up to 5 MHz. The simulation results are summarized in Table 4.4.

BW        Power Consumption (µW)
1 kHz     34.41
100 kHz   42.48
114 kHz   43.57
160 kHz   46.84
1 MHz     114.14
5 MHz     437.81

Table 4.4. LCF-ADC simulation data.


A comparison of the LCF-ADC is made with the AADC. It is argued that for the same input bandwidth of 114 kHz the LCF-ADC power consumption is lower than that of the AADC (cf. Tables 4.3, 4.4). The LCF-ADC follows the flash topology, whereas the AADC design is based on the successive approximation topology. Taking advantage of the flash topology, the LCF-ADC can achieve up to 5 MHz bandwidth, which is much higher than that of the AADC, i.e. 114 kHz [43]. This higher bandwidth of the LCF-ADC is achieved at the cost of an increased number of comparators and chip area compared to the AADC, for a targeted effective resolution.

4.3.2.3 Microprocessor Based Level Crossing A/D Converter

Baums has presented a general purpose µP (microprocessor) based system for the level crossing A/D conversion [98]. The proposed system is presented below. Figure 4.10 shows a level-crossing sampling unit, which includes a comparator and a source of reference levels. Two level-crossing sampling units are employed in the data acquisition system. One unit tracks changes on the rising signal slope and the other tracks changes on the falling signal slope [98].

In Figure 4.10, the source of reference levels is a DAC, and the connection of the signals to the comparator inputs represents a level crossing sampling unit for the rising input signal slope. The connections for the falling input signal slope are reversed.

[Figure 4.10 diagram: a comparator receives the input x(t) on its positive input and the reference voltage Vref from a source of reference levels on its negative input, and delivers the output.]

Figure 4.10. Level crossing sampling unit structure.

A complete system architecture, comprising a microprocessor, a timer and two level crossing sampling units, is shown in Figure 4.11.

The µP controls the level crossing sampling and processes the sampled data. The µP is connected with both units via data and control buses. The following tasks are performed: crossing, setting of reference levels, and time measurement. The crossing task includes the processing of the comparator output status. The setting of reference levels includes the reference level calculation and the loading of the calculated values into the DACs of both units. The time measurement includes the reading of the timer values and saving them for further processing.

The execution of any task takes a certain time. Let Tc, Tr and Tm be the crossing, reference level setting and measurement time periods respectively. Expressing these time periods as numbers of µP cycles C, R and M gives the total number of µP cycles Kp, which are necessary to perform the level crossing A/D conversion.

Kp = C + R + M (4.28)


The implementation of an asynchronous data conversion system using a single µP proved that the architecture depicted in Figure 4.11 has limited performance, resulting in a high value of Kp.

[Figure 4.11 diagram: the input x(t) feeds two level-crossing sampling units; a single µP is connected to both units and to the timer.]

Figure 4.11. Architecture of the level crossing A/D converter using one µP.

Kp can be reduced by performing the tasks in parallel. In this context a system with two µPs is employed (cf. Figure 4.12).

[Figure 4.12 diagram: the input x(t) feeds two level-crossing sampling units; µP1 and µP2 share the units and the timer.]

Figure 4.12. Architecture of the level crossing A/D converter using two µPs.

µP1 performs the tasks of crossing and setting the reference levels. µP2 performs the task of time measurement. In this case Kp can be calculated as follows.


" # "$ %MCRCK p ! ,max #(4.29)

In order to achieve power efficiency, the µP power control features (power scheduling and dynamic voltage scaling) are employed. Thus the µP can be in one of three states: running, waiting or sleeping. The delay overheads due to these state transitions are considered [98].

The system is implemented by employing TI (Texas Instruments) low-power MSP430 micro-controllers. Their 16-bit RISC processors with a 16 MHz clock are able to provide a reasonable Kp value. The MSP430 micro-controller has advanced power control with 3 energy saving modes. The prototype system throughput is analyzed through the relationship between the number of threshold levels N, the input signal bandwidth fmax and the maximum loop delay τmax. τmax determines the time interval available to prepare for the next sampling cycle. It is calculated as follows.

τmax = 1 / (2.N.fmax) (4.30)

τmax can be converted into a number of µP cycles KC by employing the following relation [98].

KC = τmax . F (4.31)

Here, F is the frequency of the µP clock. The proposed system can properly perform the level crossing conversion if the input signal frequency fsig and τmax satisfy the following condition.

KP ≤ KC (4.32)

The proposed system is able to process an input signal with bandwidth fmax = 8 kHz, for N = 7. The limiting factors of the system performance are the cycle budget KC imposed by the maximum loop delay and the time delay for switching from the power-down mode to the running mode.
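The feasibility condition of Equations 4.30-4.32 can be sketched numerically (the function name and the Kp value in the example are assumptions for illustration):

```python
def lc_conversion_feasible(n_levels, f_max, f_clock, k_p):
    """Check K_P <= K_C (Eq. 4.32), where tau_max = 1/(2*N*f_max) (Eq. 4.30)
    is the worst-case time between crossings and K_C = tau_max * F (Eq. 4.31)
    is the cycle budget it leaves to the microprocessor."""
    tau_max = 1.0 / (2.0 * n_levels * f_max)
    k_c = tau_max * f_clock
    return k_p <= k_c, k_c

# Prototype figures: N = 7, f_max = 8 kHz and a 16 MHz clock give
# K_C ≈ 143 cycles; the tasks must fit in that budget (k_p = 100 is
# a made-up value, not from the thesis).
ok, k_c = lc_conversion_feasible(7, 8e3, 16e6, k_p=100)
```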

The usage of a general-purpose µP in the system brings more flexibility to the implementation of LCSS based data processing systems. At the same time such an approach limits the converter hardware resolution (number of threshold levels) and the input signal bandwidth, due to the high number of required µP cycles and the time delay for switching between the µP states.

4.4 LCADCs Comparison with Other Oversampling Converters

For an appropriate choice of M, the level crossing ADCs locally oversample the relevant signal parts with respect to their local bandwidths [44-52, 145, 146]. Hence, they can also be categorized as oversampling converters [37].

A key point when comparing the LCADCs with other oversampling ADCs is how much resolution improvement is achieved with the extra available bandwidth. In the classical case the


doubling of the OSR will increase the resolution by half a bit. On the other hand, in the case of Sigma-Delta converters the resolution is improved by K + ½ bits (K is the modulator order) for each doubling of the OSR (cf. Section 3.2.2). In the case of LCADCs, for each doubling of the timer frequency Ftimer the resolution is improved by one bit (cf. Equation 4.8).
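The three rules above can be placed side by side in a small sketch (a hypothetical helper, not from the thesis):

```python
def extra_bits_per_osr_doubling(converter, modulator_order=1):
    """Resolution improvement per doubling of the OSR (or of Ftimer for the
    LCADC), as quoted in the text for each converter family."""
    if converter == "nyquist":
        return 0.5                        # classical oversampling
    if converter == "sigma-delta":
        return modulator_order + 0.5      # K + 1/2 bits, K = modulator order
    if converter == "lcadc":
        return 1.0                        # one bit per doubling of Ftimer
    raise ValueError(f"unknown converter: {converter}")

# Two doublings (4x OSR): 1 bit classically, 2 bits for the LCADC,
# and 5 bits for a second-order Sigma-Delta modulator.
gain = 2 * extra_bits_per_osr_doubling("sigma-delta", modulator_order=2)
```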

The resolution improvement is better in the case of Sigma-Delta converters because of their noise shaping feature. Noise shaping requires a feedback loop in Sigma-Delta converters and brings instability problems into the design. In higher order modulators the resolution improvement is much higher, but at the same time the stability problem is much more critical [86-91]. In cascade Sigma-Delta modulators the transfer functions of the stages have to match precisely, otherwise residual noise results [89]. These problems with the Sigma-Delta modulators impose stringent requirements on the analog circuitry, in terms of component matching and sensitivity to circuit parameter variations [86, 88].

In the LCADCs [38, 43, 97, 98] there is no noise shaping, as the previous samples and errors have no effect on the next sample. In this case, the analog circuitry consists of comparators and perhaps buffers [38, 43, 98, 99]. The converter performance is less sensitive to component matching and circuit parameter variations compared to the Sigma-Delta case [37]. Moreover, it should be noted that there is no direct correspondence between the OSR in the Sigma-Delta converters and Ftimer in the LCADCs. When the OSR in a Sigma-Delta converter is doubled, all its blocks (modulator + digital decimator) operate at double the rate. In the case of LCADCs [38, 43, 97] the doubling of Ftimer only affects the digital timer, while the other blocks can continue at the previous rate. This results in a significant difference between the two architectures in terms of the power consumption versus OSR and effective resolution trade-off. In Sigma-Delta converters, increasing the OSR by a given factor calls for an increase in the power consumption by approximately the same factor. However, the same can be achieved with a smaller increase in the power consumption by using the LCADCs [38, 43].

4.5 Conclusion

The LCADCs are good candidates for mobile applications like distributed sensor networks, hand-held electronics, human body implants, etc. They outperform the classical ADCs in terms of power consumption, electromagnetic emission, processing noise, circuit complexity and area [38, 43, 97, 98]. The level crossing A/D conversion principle has been reviewed. The LCADC performance can be characterized in terms of the achieved ENOB; in this context, the LCADC SNR expression has been developed. The asynchronous implementation exploits well the interesting features of the level crossing A/D conversion. The main concepts of asynchronous circuits have been reviewed. Different LCADC realizations have been studied. Finally, the LCADCs have been compared with other oversampling converters.

For a given hardware resolution and timer frequency, the level crossing A/D converters can adapt their bandwidth in accordance with the input signal frequency variations. Of course this bandwidth adaptation is not unlimited and is achievable within a predefined frequency range. This interesting feature of the LCADCs can be employed for developing smart adaptive rate digital signal processing systems. The idea will be described in the following parts of the thesis.

The level crossing A/D conversion can lead to some major benefits for the integrated circuit implementation. It is shown that the number of quantization levels required to achieve the


targeted ENOB is lower in the case of LCADCs compared to the classical ones. This translates into a significantly reduced chip area and power consumption for the LCADCs compared to their classical counterparts [38, 43, 97].

Another significant advantage of the level crossing A/D conversion scheme is its configurability.

By changing the timer frequency, one can obtain different resolutions from a single integrated

circuit. This allows the same hardware to be used in different applications, where the input signal frequency band and the power consumption requirements vary.

The LCADCs can present a problem for DC signals or very small amplitude variations, which are less than q. As in these cases no level crossing occurs, no sample is captured. One solution to this problem is that if no sample occurs during a time-out period Tout then a sample is generated artificially. It can be done by employing a step dither slightly greater than q or by simply repeating the last sample amplitude after each Tout. In other words, in this situation the LCADC sampling rate becomes uniform and is equal to 1 / Tout. Hence, it makes the LCADC resolution similar to that of a classical ADC with the same number of quantization levels and the sampling frequency Fs = 1 / Tout.

The studied LCADC realizations employ a uniform distribution of the quantization levels [38, 43, 97, 98]. Non-uniformly spaced quantization levels could be advantageous when dealing with large dynamic range signals with a non-uniform probability distribution. Moreover, it might be interesting to adapt the quantization levels by following the input signal local variations. For example, it would be desirable to have more quantization levels in the region where the probability of finding the signal is larger. One possibility is to devise a metric that would incorporate the most recent statistics of the input signal and vary the quantization levels accordingly. The means of efficiently developing such a metric is an area of future research.


Part-II Chapter 5 Activity Selection

Chapter 5

ACTIVITY SELECTION

The contribution of this thesis starts from this chapter. As previously discussed, the motivation of this work is to enhance the signal processing tools in order to achieve efficient mobile systems. In the previous chapters it is argued that, due to its signal driven nature, the LCSS is a good choice for mobile applications [26, 29-31, 33-37]. In this context, LCADCs are employed [37, 38, 43, 97, 98] for the data acquisition in the proposed case. The data obtained with the LCADC is non-uniformly spaced in time; therefore it cannot be processed or analyzed by employing the classical techniques [1, 2].

Windowing is a basic operation required for finite time data acquisition in order to meet the practical system implementation requirements [1, 2, 114]. The process of windowing the level crossing sampled signal is not mature in the existing literature. In this chapter two novel techniques for windowing the relevant (active) parts of the non-uniformly sampled signal obtained with the LCADC are presented. Following their activity selection feature, they are named the activity selection algorithms [44-51]. The appealing features of the proposed techniques are described. The performance of the proposed techniques compared to the classical windowing process is also illustrated with the help of a study example.

5.1 The Windowing Process

Practically, digital signal processing and analysis have to be performed on a finite set of data. This is because a real world system has limited resources like memory, processing speed, etc. This resource limitation poses an upper bound on the length of the data set, i.e. the maximum number of samples which the system can process at once [1, 2, 114]. In order to capture a finite frame of data, time windowing functions are employed [1, 2, 115]. The process is depicted in Figure 5.1.

[Figure 5.1 flow diagram: analog signal y(t) → B.P.F [Fmin; Fmax] → filtered analog signal x(t) → ADC → uniformly sampled signal (xn) → window function (wn) → windowed signal (xwn).]

Figure 5.1. The sequence of a classical windowing process.

The windowed version of a sampled signal xn is obtained by picking an N-sample segment centred on τ; the process is clear from Equation 5.1.


xwn = xn . wn,  τ - L/2 ≤ n ≤ τ + L/2 (5.1)

Here, xwn is the windowed version of xn, L is the effective length in seconds and τ is the central time of the window function wn. If Fs is the sampling frequency then N can be computed as follows.

N = L . Fs (5.2)

In the case of LCADC the sampling frequency is not fixed and it is piloted by the input signal

itself [39, 44-51]. According to the Bernstein inequality for fixed amplitude dynamics a high

frequency signal part has a higher slope and vice versa [100]. For a fixed resolution, the LCADC

adapts its sampling frequency in proportion to the input signal slope. A higher slope signal part

will be sampled at a higher rate and vice versa [44-51]. The phenomenon is depicted on Figure

5.2.

[Figure 5.2 plot: amplitude versus time of a non-uniformly sampled signal obtained with the LCADC; the samples cluster on the high-slope parts of the waveform.]

Figure 5.2. Non-uniformly sampled signal obtained with the LCADC.

For a defined amplitude dynamics Δx(t), the input signal maximum slope occurs for a sinusoid of frequency fmax (cf. Equation 4.25). Here, fmax is the input signal bandwidth. The process is mathematically expressed as follows.

max |dx(t)/dt| = 2.π.Δx(t).fmax (5.3)

In the proposed case x(t) is adapted to match the LCADC amplitude range 2Vmax. This allows exploiting the complete LCADC resolution. Hence a chosen LCADC resolution M induces its maximum and minimum sampling frequencies, defined by Equations 5.4 and 5.5 respectively [45-47].

Fsmax = 2.fmax.(2^M - 1) (5.4)

Fsmin = 2.fmin.(2^M - 1) (5.5)

Here, fmax and fmin are the bandwidth and fundamental frequencies of x(t) respectively, and Fsmax and Fsmin are the LCADC maximum and minimum sampling frequencies respectively. The non-uniformly sampled signal obtained with the LCADC can be used directly for further non-uniform digital processing [13, 40, 52, 53]. Such an approach is sufficient for experimental purposes, while dealing with a finite length of recorded data. However, a practical system realization necessitates the finite time partitioning of the acquired data [1, 2, 114]. For this purpose two methods are proposed for windowing the LCADC output.
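Equations 5.4 and 5.5 can be sketched numerically (the function name and the f_min value in the example are assumptions):

```python
def lcadc_sampling_bounds(f_min, f_max, M):
    # Equations 5.4 and 5.5: Fs = 2*f*(2**M - 1) at the two band edges
    n_levels = 2 ** M - 1
    return 2.0 * f_min * n_levels, 2.0 * f_max * n_levels

# With M = 4 and a 100 Hz - 4 kHz band (f_min assumed for illustration),
# the LCADC sampling frequency stays between 3 kHz and 120 kHz.
fs_min, fs_max = lcadc_sampling_bounds(100.0, 4e3, 4)
```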

The principle is to employ the non-uniformity of the sampling process, which yields information on the signal local features, to select only the relevant signal parts. Furthermore, the characteristics of each selected signal part are analyzed in order to extract its local parameters. These extracted parameters can later be employed to adapt the proposed system parameters and activity accordingly, which leads towards an efficient solution compared to the classical approach [44-47, 50]. This selection and local feature extraction process is named the activity selection. The sequence of the activity selection process is shown in Figure 5.3.

[Figure 5.3 flow diagram: analog signal y(t) → B.P.F [Fmin; Fmax] → filtered analog signal x(t) → LCADC → non-uniformly sampled signal (xn, tn) → activity selection (win) → selected signal (xS, tS) plus local parameters.]

Figure 5.3. The sequence of an activity selection process.

The principle of the proposed solutions is to employ the time distances between consecutive sampling instants, which can be defined as follows [26].

dtn = tn - tn-1 (5.6)

Here, tn is the current sampling instant, tn-1 is the previous one and dtn is the time delay between the current and the previous sampling instants. From Figure 5.2 it can be noticed that the variable dtn is a function of the input signal time variations. For a high-slope signal part the values of dtn are smaller, and vice versa. By employing the values of dtn, two novel activity selection algorithms are devised. Details of the proposed algorithms are given in the following subsections.

5.1.1 ASA (Activity Selection Algorithm)

The ASA selects the relevant parts of the non-uniformly acquired data obtained with the LCADC. This selection process corresponds to an adaptive-length rectangular windowing. It defines a series of selected windows within the whole signal length. The ability to select activity is extremely important for reducing the proposed system processing activity and consequently its power consumption [44-51]. Indeed, in the proposed case, no processing is performed during idle

Saeed Mian Qaisar Grenoble INP 63


Part-II Chapter 5 Activity Selection

signal parts, which is one of the reasons for the achieved computational gain compared to the classical case. The ASA is defined as follows.

While (dtn ≤ T0/2 and Li ≤ Lref)
    Li = Li + dtn;
    Ni = Ni + 1;
end

Here, dtn is given by Equation 5.6. T0 = 1/fmin is the fundamental period of the bandlimited signal x(t). T0 and dtn are used to detect parts of the non-uniformly sampled signal with activity. If the measured time delay dtn is greater than T0/2, then x(t) is considered to be idle. The condition dtn ≤ T0/2 is chosen to ensure the Nyquist sampling criterion for fmin.

Lref is the reference window length in seconds. Its choice depends on the input signal characteristics and the system resources. The upper bound on Lref is posed by the maximum number of samples that the system can treat at once, whereas the lower bound on Lref is posed by the condition Lref ≥ T0, which should be respected in order to achieve a proper spectral representation [44, 115, 119].

Li represents the length in seconds of the ith selected window Wi. Lref poses the upper bound on Li. Ni represents the number of non-uniform samples lying in Wi, which lies on the jth active part of the non-uniformly sampled signal. Here, i and j both belong to the set of natural numbers ℕ*. The jth signal activity can be longer than Lref; in this case, it is split into more than one selected window [44-47, 50, 51].

The above described loop repeats for each selected window occurring during the observation length of x(t). Every time before starting the next loop, i is incremented and Ni and Li are initialized to zero.

The maximum number of samples Nmax which can take place within a chosen Lref can be calculated by employing the following relation.

Nmax = Lref.Fsmax (5.7)
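To make the loop concrete, the following Python sketch (our own illustration, not the thesis implementation; function and variable names are assumptions) groups a list of sampling instants into ASA selected windows:

```python
def asa_windows(t, T0, Lref):
    """ASA sketch: group non-uniform sampling instants t (in seconds,
    ascending) into selected windows. A gap dtn > T0/2 marks an idle
    signal part; a window is also closed when its length Li would
    exceed the reference length Lref, splitting a long activity."""
    windows = []                            # list of (Li, Ni) pairs
    Li, Ni = 0.0, 0
    for n in range(1, len(t)):
        dtn = t[n] - t[n - 1]               # Equation 5.6
        if dtn > T0 / 2:                    # idle gap: close the window
            if Ni > 0:
                windows.append((Li, Ni))
            Li, Ni = 0.0, 0
        else:
            if Li + dtn > Lref:             # window full: split activity
                windows.append((Li, Ni))
                Li, Ni = 0.0, 0
            Li += dtn
            Ni += 1
    if Ni > 0:
        windows.append((Li, Ni))            # close the last window
    return windows
```

Each returned pair (Li, Ni) corresponds to one selected window Wi; idle gaps contribute no windows at all, which is where the computational gain over classical fixed-length windowing comes from.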

5.1.2 EASA (Enhanced Activity Selection Algorithm)

The EASA is a modified version of the ASA [48, 49]. The main difference between the ASA and the EASA is the choice of the upper bound on the selected window length: for the ASA the time length in seconds, and for the EASA the number of samples, is chosen as the upper bound. The EASA is defined as follows.

While (dtn ≤ T0/2 and Ni ≤ Nref)
    Ni = Ni + 1;
end


Similar to the ASA, in this case the active parts of the non-uniformly sampled signal are again detected by employing T0 and dtn. Ni represents the number of non-uniform samples lying in the ith selected window Wi. Nref poses the upper bound on Ni. The choice of Nref depends on the x(t) characteristics and on the system parameters [48, 49].

The above described loop repeats for each selected window, which occurs during the observation

length of x(t). Every time before starting the next loop, i is incremented and Ni is initialized to

zero.

If Li is the length in seconds of Wi, then for a proper spectral representation the condition Li ≥ T0 should be respected [44, 115-117]. In order to satisfy this condition for the worst case, which occurs for Fsmax (cf. Equation 5.4), Nref is calculated for an appropriately chosen window length LC. LC has to satisfy the condition LC ≥ T0. The process of calculating Nref is given by Equation 5.8.

Nref = LC.Fsmax (5.8)

Please note that, similar to Lref in the case of the ASA, the choice of LC has the same constraints: the input signal fundamental period T0 and the system resources. LC poses an implicit bound on Nref (cf. Equation 5.8). If LC varies within the range [Lmin = T0; Lmax], then the minimum and maximum possible values of Nref can be computed by placing the corresponding values of LC in Equation 5.8.

For Nref given by Equation 5.8, the condition Li ≥ T0 holds for all selected windows except for the case when the actual length of the jth activity is less than T0. Moreover, when the jth signal activity is longer than LC, it is split into more than one selected window.
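A matching Python sketch for the EASA (again our own illustration; names are assumptions) differs from the ASA sketch only in bounding each window by a sample count Nref:

```python
def easa_windows(t, T0, Nref):
    """EASA sketch: same activity detection as the ASA (a gap
    dtn > T0/2 is idle), but a selected window is closed once it
    holds Nref samples rather than once it spans Lref seconds."""
    windows = []                            # sample counts Ni per window
    Ni = 0
    for n in range(1, len(t)):
        dtn = t[n] - t[n - 1]               # Equation 5.6
        if dtn > T0 / 2:                    # idle gap: close the window
            if Ni > 0:
                windows.append(Ni)
            Ni = 0
        else:
            Ni += 1
            if Ni == Nref:                  # window full: split activity
                windows.append(Ni)
                Ni = 0
    if Ni > 0:
        windows.append(Ni)                  # close the last window
    return windows
```

Since low-frequency activities produce level crossings more slowly, accumulating Nref samples there takes longer, which is how the EASA window length follows the input signal frequency contents.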

5.2 Features of the ASA and the EASA

The ASA and the EASA share some interesting features which are not available in the classical windowing process [44-51]. One of them is the extraction of the sampling frequency value for each selected window [44, 48]. Since, for a fixed M, the LCADC sampling frequency is correlated to the x(t) local variations, each selected window obtained with the ASA or the EASA can have a specific sampling frequency [44-51]. Let Fsi represent the average sampling frequency for Wi; it can be extracted by employing the following equations.

Li = tmaxi − tmini (5.9)

Fsi = Ni / Li (5.10)

Where, tmaxi and tmini are the final and the initial times of Wi respectively. It is clear that Fsi can be specific to each selected window, depending upon Li and the slope of the x(t) part lying within this window [44, 48]. Please note that the upper and lower bounds on Fsi are posed by Fsmax and Fsmin respectively.
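Equations 5.9 and 5.10 amount to a one-line computation per window; a minimal sketch (ours, with assumed names):

```python
def average_sampling_frequency(t_window):
    """Equations 5.9-5.10: average sampling frequency Fsi of one
    selected window Wi from its sampling instants (in seconds)."""
    Li = t_window[-1] - t_window[0]     # Li = tmaxi - tmini  (Eq. 5.9)
    Ni = len(t_window)                  # samples lying in Wi
    return Ni / Li                      # Fsi = Ni / Li       (Eq. 5.10)
```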

The other common features of the ASA and the EASA are that they can select only the relevant parts of the level crossing sampled signal. Moreover, they can also correlate the selected window length to the signal activity lying in it [44-51].


To illustrate these features, the input signal shown in the top part of Figure 5.4 is employed. Its total duration is 30 seconds and it consists of three active parts. The summary of the x(t) activities is given in Table 5.1.

Activity   Signal Component       Length (Sec)
1st        0.9.sin(2.pi.30.t)     3
2nd        0.9.sin(2.pi.100.t)    0.5
3rd        0.9.sin(2.pi.500.t)    1.6

Table 5.1. Summary of the input signal activities.

Table 5.1 shows that x(t) is band limited between 30 and 500 Hz. Hence, T0 = 1/30 seconds in this case. In this example x(t) is sampled by employing a 3-bit resolution LCADC. Thus, Fsmax and Fsmin become 7 kHz and 0.42 kHz respectively (cf. Equations 5.4, 5.5). 2Vmax = 1.8 V is chosen; thus, by following the relation q = (2.Vmax)/(2^M − 1), a quantum of 0.2571 V is obtained in this case.
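These numbers can be checked directly from Equations 5.4 and 5.5 and the quantum relation; a quick verification in Python (variable names are ours):

```python
M = 3                      # LCADC resolution in bits
fmax, fmin = 500, 30       # band edges of x(t) in Hz (cf. Table 5.1)

Fsmax = 2 * fmax * (2**M - 1)    # Equation 5.4: 2.500.7 = 7000 Hz
Fsmin = 2 * fmin * (2**M - 1)    # Equation 5.5: 2.30.7  = 420 Hz
q = 1.8 / (2**M - 1)             # quantum 2Vmax/(2^M - 1), about 0.2571 V
```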

Figure 5.4. Input signal (top), selected signal obtained with the ASA (middle) and the selected signal

obtained with the EASA (bottom).

The ASA is first applied in order to select the non-uniformly sampled signal obtained at the LCADC output. In this example the reference window length Lref = 1 second is chosen; it satisfies the boundary conditions discussed in Section 5.1.1. The given Lref delivers Nmax = 7000 samples in this case (cf. Equation 5.7). The ASA delivers six selected windows for the whole x(t) span of 30 seconds. The first three selected windows correspond to the first activity, the fourth corresponds to


the second, and the remaining two correspond to the third activity. The selected signal obtained in this case is shown in the middle part of Figure 5.4. The first three and the last two selected windows are not distinguishable in the middle part of Figure 5.4, because they lie consecutively on the 1st and the 3rd activities respectively. In order to make them distinguishable, lines are drawn on the boundaries of each selected window. The summary of the selected windows parameters is given in Table 5.2.

Selected Window   Li (Sec)   Ni (Samples)   Fsi (kHz)
W1                1.00       422            0.42
W2                0.99       421            0.42
W3                0.99       417            0.42
W4                0.49       700            1.4
W5                1.00       7002           7.0
W6                0.59       4198           7.0

Table 5.2. Summary of the selected windows parameters obtained with the ASA.

Secondly, the EASA is applied for selecting the non-uniformly sampled signal obtained at the LCADC output. For this example the value Nref = 4096 is chosen, which leads to LC ≈ 0.6 seconds (cf. Equation 5.8). Hence it satisfies the boundary conditions discussed in Section 5.1.2. The EASA delivers five selected windows at its output. The first two selected windows correspond to the first two activities and the remaining ones correspond to the third activity. The EASA output is shown in the bottom part of Figure 5.4. Again in this case, the last three selected windows are not distinguishable in the bottom part of Figure 5.4, because they lie consecutively on the third activity. In order to visualize their locations, lines are drawn on the boundaries of each selected window. The selected windows parameters are summarized in Table 5.3.

Selected Window   Li (Sec)   Ni (Samples)   Fsi (kHz)
W1                2.99       1260           0.42
W2                0.49       700            1.4
W3                0.58       4098           7.0
W4                0.58       4097           7.0
W5                0.43       3005           7.0

Table 5.3. Summary of the selected windows parameters obtained with the EASA.


Tables 5.2 and 5.3 illustrate the interesting features of the ASA and the EASA. Li represents their dynamic feature, which is to correlate the selected window length with the signal activity lying in it. Fsi shows how the sampling frequency of each selected window is adapted by following the local variations of x(t). This smart feature of sampling frequency adaptation is achieved due to the joint benefits of the LCADC and the activity selection. Ni shows that, for the chosen LCADC resolution, the relevant signal parts are locally over-sampled in time with respect to their local bandwidths. This oversampling of the relevant signal parts can later be exploited to gain in terms of the system processing quality [44-51].

These results have to be compared with the corresponding classical case. In the classical case, the windowing process is not able to select only the interesting parts of the sampled signal. Furthermore, the window length remains static and cannot adapt to the input signal activity lying in it. For the studied example, a window length of 1 second leads to thirty 1-second windows for the whole x(t) span. Moreover, the input signal is bandlimited up to 500 Hz (cf. Table 5.1); thus, if Fs = 1.5 kHz is chosen in the classical case in order to fulfill the Nyquist sampling criterion, then the total x(t) span is sampled at 1.5 kHz, regardless of the x(t) time variations. This time-invariant nature of the classical sampling and windowing operations causes the system to process more than the relevant information part in x(t).

The above discussion describes the smart features of the ASA and the EASA. One major distinction between them is the way in which they respond to variations of the input signal frequency contents. The ASA is not sensitive to the jth activity frequency contents. The phenomenon is especially clear when the length of the jth activity is larger than Lref. Such a situation occurs for the 1st and the 3rd input signal activities, which represent the lower and the higher frequency parts of x(t) respectively (cf. Table 5.1). The ASA has treated both activities in a similar way and has split them into m selected windows accordingly (cf. Table 5.2). Here, m is given as follows.

m = Ceil( (jth activity length) / Lref ) (5.11)

Here, the Ceil function rounds the result to the nearest integer towards positive infinity. On the other hand, the EASA adapts the selected window length by following the variations of the input signal frequency contents. The process is clear by again considering the 1st and the 3rd input signal activities along with the results summarized in Table 5.3. The EASA provides larger selected windows for the lower frequency components, and vice versa. This frequency sensitivity feature of the EASA is very interesting, especially for applications like time-frequency analysis, instantaneous frequency estimation, etc. Based upon the EASA, a smart adaptive-resolution time-frequency analysis technique is devised. Its complete description will be given in Chapter 8.
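Equation 5.11 can be checked against Table 5.2 (a small illustration of ours):

```python
import math

def num_selected_windows(activity_length, Lref):
    """Equation 5.11: number m of ASA selected windows into which
    the j-th activity of the given length (seconds) is split."""
    return math.ceil(activity_length / Lref)
```

With Lref = 1 s, the 3 s first activity yields m = 3 windows (W1-W3 in Table 5.2) and the 1.6 s third activity yields m = 2 windows (W5, W6).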

5.3 Conclusion

Windowing is a necessary operation for a real system implementation. Two novel techniques have been presented for smartly windowing the non-uniformly sampled signal obtained with LCADCs. Combined with the LCSS, the proposed activity selection methods attain some interesting results which are not achievable with the classical windowing operations. These


interesting features of the proposed techniques have been demonstrated by employing an

illustrative example. It is shown that a tactful employment of these techniques can lead towards

efficient solutions compared to the classical approach [44-51]. This point will be further

elaborated in the upcoming thesis chapters.

The windowing process can introduce truncation (discontinuity) into the captured data frame. This data truncation can result in spectral leakage error [115-117]. The proposed activity selection techniques can provide an efficient reduction of the spectral leakage phenomenon. This smart feature of the proposed approach will be described in the next chapter, dealing with the spectral analysis of non-uniformly sampled signals.


Part-II Chapter 6 Spectral Analysis of the Non-Uniformly Sampled Signal

Chapter 6

SPECTRAL ANALYSIS OF THE NON-UNIFORMLY SAMPLED

SIGNAL

Analyzing the processed data obtained at the system output is a key to its performance characterization [60-62]. The frequency domain transformation is frequently employed for this purpose [41, 116]. In this context, the spectral analysis concept is briefly reviewed. The phenomenon of spectral leakage is also described.

The tendency towards non-uniform sampling is increasing in many recent applications [8, 10-13, 21-23, 28-53]. The domain of analyzing non-uniformly sampled signals is evolving; many valuable contributions have been presented in this regard, a few examples being [16, 20, 28, 119, 120, 124-126]. The GDFT (General Discrete Fourier Transform) and Lomb's algorithm are the most commonly employed tools for analyzing non-uniformly sampled signals [20, 119]. Their main features are briefly described.

Smart mobile systems are the focus of this thesis work. Being a signal-driven process, level crossing sampling is favourable for mobile applications [26, 29-53]. In this context, the signal acquisition is performed by employing level crossing ADCs in the proposed solutions [44-51, 145, 146]. In order to properly analyze the level crossing sampled signal, an efficient solution is proposed. It is achieved by smartly combining features of both the non-uniform and the uniform signal processing tools. A detailed description of the proposed approach is given, and its smart features are illustrated with the help of an example. It is compared with the GDFT, Lomb's algorithm and the classical approach in terms of spectral quality and computational complexity. Moreover, the proposed technique's processing error is also quantified.

6.1 Spectral Analysis

Spectral analysis, also known as frequency domain analysis, is a basic tool employed in various fields for extracting details from the input data. These details can later be employed for characterizing a system performance, analyzing a patient's health, finding the modulation characteristics of a communication signal, etc.

Usually the time to frequency domain transformation is accomplished by applying the FT (Fourier transform). According to Fourier's theory, any time domain electrical phenomenon is made up of one or more sine waves of appropriate frequency, amplitude and phase [123]. In fact the FT decomposes a time series into a spectrum of different sinusoids that can be evaluated independently. The process can be formally expressed as follows.


! "#

#$

$% dtetxfX tfj ..)( ...2. (6.1)

Equation 6.1 shows that the FT is applied to a continuous time signal with limits of integration set to infinity. On the ideal FT plot the signal components appear as impulses of appropriate frequencies and amplitudes [41, 115, 116].

Real world systems have two limitations: firstly they cannot process continuous data, and secondly they cannot handle infinite integrals. Therefore, the sampling and windowing operations are involved in a practical spectral analyzer.

A sampled signal xs(t) can be expressed by the following equation (cf. Chapter 2).

xs(t) = x(t).SF(t) (6.2)

Where, SF(t) is the sampling function. In the classical case SF(t) is defined as follows.

SF(t) = Σ_{n=−∞}^{+∞} δ(t − n.Ts) (6.3)

Here, Ts is the sampling period, which remains unique in the classical case [3]. Following Equation 6.2, in this case xs(t) can be expressed as follows.

xs(t) = x(t).Σ_{n=−∞}^{+∞} δ(t − n.Ts) (6.4)

The sampled signal is then windowed in order to achieve a finite time length data set. The process of windowing xs(t) in order to achieve its windowed version xwn is detailed in Chapter 5. Finally, the frequency domain transformation is achieved by computing a discrete version of the FT, named the DFT (Discrete Fourier Transform), of xwn [1, 2, 115, 116]. The process can be formally presented as follows.

! " $#

&#N

n

nfj

n exwN

fX1

..2...1

(6.5)

It is well known that sampling in the time domain causes a periodization in the frequency domain [1, 2, 123]. As the DFT is applied on a sampled version of x(t), it results in a periodic spectrum. Therefore, peaks of the signal components appear on the spectrum at the fundamental frequencies f0 and at the frequencies Fp ± f0. Here, Fp is the spectrum periodic frequency and it is equal to the sampling frequency Fs. If Fs is less than two times the maximum frequency component fmax of the incoming signal, then the obtained spectrum suffers from an aliasing problem [1, 2, 41, 115-117, 123]. It shows that xs(t) contains complete information about all spectral components in the analog signal x(t) up to the Nyquist frequency, and scrambled or aliased information about any signal components at frequencies higher than the Nyquist rate [3, 20, 41, 115, 116]. The sampling theorem thus defines both the attractiveness and the limitation of analysing an evenly spaced data


set. A common error which occurs on the DFT plot is the spectral leakage [115-117]. The spectral

leakage phenomenon is described in the following section.

6.1.1 Spectral Leakage

The phenomenon of spectral leakage occurs when the DFT is applied to a finite length captured frame of the incoming signal which consists of a fractional number of cycles [41, 115-117]. In fact the DFT assumes that the windowed data set is one complete period of a periodic signal. For the DFT, both the time and the frequency domains have circular topologies; thus, the two end points of the time waveform are interpreted as though they were connected together [116]. Because of its finite time length, the windowing can result in a fractional number of input signal cycles. Hence, the DFT assumption becomes false in this case, and therefore it delivers a spectrum which depicts a signal with different characteristics than the original one [115-117].

Leakage results in the signal energy smearing out over a wide frequency range on the DFT plot when it should be in a narrow frequency range. This phenomenon translates into false amplitude and frequency values of the signal components on the obtained spectrum. The amount of spectral leakage depends upon the amount of truncation (discontinuity) between the two edges of the windowed waveform. The larger the truncation, the larger the leakage, and vice versa. This phenomenon is further illustrated by Figure 6.1, taken from the National Instruments tutorial [117].

Figure 6.1. Capturing integral number of cycles (top-left), capturing fractional number of cycles (bottom-

left), spectrum of the first captured block (top-right) and spectrum of the second captured block (bottom-

right).

In order to minimize this effect, an appropriate cosine (smoothing) window function is usually applied to the measured signal in the time domain [1, 2, 115]. A cosine window is shaped in such a way that it is exactly zero at the beginning and end of the data frame and has a smooth shape in between. This function, when multiplied with the captured data frame, makes the waveform


endpoints set to zero. Hence, it reduces the discontinuity between the truncated waveform edges, which results in a reduction of the spectral leakage. While reducing the spectral leakage, the windowing also alters the amplitude information of the signal spectrum peaks. This effect is normally compensated by appropriately weighting the windowed segment [115, 116, 123]. The process of smoothing the truncation effect is depicted by employing a cosine window function on the discontinuous data set shown in the bottom-left part of Figure 6.1. The resultant signal and its corresponding spectrum are shown in Figure 6.2.

Figure 6.2. Smooth signal obtained with the cosine window function (left) and spectrum of the cosine

windowed signal (right).
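The leakage-reduction effect of a cosine window can be reproduced numerically. The sketch below (our own illustration, using a Hann window and a direct DFT; all names are assumptions) captures a fractional number of cycles and compares the energy leaked far from the tone with and without the window:

```python
import cmath, math

def dft_mag(x):
    """Magnitude spectrum of a frame x via a direct DFT."""
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N))) / N for k in range(N)]

N = 64
f0 = 10.5                    # fractional number of cycles -> truncation
frame = [math.sin(2 * math.pi * f0 * n / N) for n in range(N)]
hann = [0.5 - 0.5 * math.cos(2 * math.pi * n / (N - 1)) for n in range(N)]

rect_spec = dft_mag(frame)                                 # no smoothing
hann_spec = dft_mag([v * w for v, w in zip(frame, hann)])  # cosine window

# Energy leaked well away from the tone (bins 20..44):
leak_rect = sum(m * m for m in rect_spec[20:45])
leak_hann = sum(m * m for m in hann_spec[20:45])
```

leak_hann comes out far below leak_rect, at the price of a slightly widened main lobe and an amplitude scaling that must be compensated, as noted above.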

6.1.2 Spectral Analysis of the Non-Uniformly Sampled Signal

Sometimes there are situations where the analysed data set is not evenly spaced. This non-uniformity may occur naturally in the sampling process. A common case is when an instrumental drop-out occurs, so that data is obtained only on a non-consecutive integral subset of the instants in Equation 6.4. Another case occurs in observational sciences like astronomy, where the observer cannot completely control the time of the observations but has to simply accept a certain dictated set of instants nTs. Moreover, sometimes the non-uniformity is deliberately introduced into the sampling process. It leads to certain advantages which are not attainable with the classical sampling process [14-26, 29-40, 42-53].

Standard methods of estimating the spectrum of non-uniformly sampled signals require oversampling at uniform intervals and the placement of useless samples. Techniques like the GDFT (General Discrete Fourier Transform), Lomb's algorithm, etc. have also been developed [20, 119]. They are able to perform frequency domain analysis directly on the non-uniformly sampled signals. At first glance it seems attractive that by using these tools one can radically reduce the computational requirements of the signal analysis process, by avoiding the insertion of useless sample points as required in the standard methods. But these tools suffer from a problem of spectral noise [20, 44]. Before proceeding further, let us first briefly describe the algorithms behind the GDFT and Lomb's periodogram.

6.1.2.1 GDFT (General Discrete Fourier Transform)

The GDFT is simply an extension of the DFT which can be employed for the spectral analysis of non-uniformly sampled signals [20]. It is defined by the following equation.


! " $#

&#N

n

tfj

nnexw

NfX

1

...2...

1 (6.6)

Here, xwn and tn respectively represent the amplitude and the time instant of the nth non-uniform sample lying in the windowed segment under consideration. In [20], Bagshaw has employed the GDFT along with the APRSS (Additive Pseudo Random Sampling Scheme) for analysing wideband signals. He has shown that, in the case of an additive pseudo-randomly sampled signal, the spectrum obtained with the GDFT can be periodic under certain conditions. If FP represents the GDFT periodic frequency, then Equation 6.6 can be written as follows.

! " ! " $$&

#

&&&

#

(& ##(1

0

221

0

2....

1 N

n

tFjftj

n

N

n

tFfj

nPnPnnP eexwexw

NFfX

(6.7)

It is obvious that X(f + FP) = X(f) if and only if e^(−j.2.pi.FP.tn) = 1. In other words, FP should be calculated in such a way that for all tn the product FP.tn belongs to ℕ, the set of natural numbers. Depending upon the input signal characteristics, the set of possible sampling periods should be chosen in such a way that they lead towards the desired FP (cf. Section 2.3.4). According to [20], in order to have an alias-free spectral analysis, the input signal must be band limited to half of FP. Formally the condition is expressed as follows.

fmax ≤ FP/2 (6.8)

Bagshaw has argued that while analysing a pseudo-randomly sampled signal with the GDFT, the aliases do not appear in the form of frequency-shifted replicas of the original signal but spread in the form of broadband noise whose level is not constant. He has also shown that this noise level can be reduced by increasing the effective window length in seconds (cf. Equation 5.2).

In order to explore these ideas a simulation is performed. In this case, the input signal x(t) is a combination of two sinusoids, both of amplitude 0.5 V and of frequencies 20 Hz and 80 Hz respectively. This signal is sampled by employing the APRSS (cf. Section 2.3.4). In order to keep the case simple, P = 2, τ1 = 1/100 seconds and τ2 = 1/125 seconds are chosen. In this case FP becomes 500 Hz (cf. Equation 2.18). The simulation results are plotted on Figure 6.3.

Figure 6.3 demonstrates one major advantage of the APRSS, which is to achieve an increased spectrum periodicity while sampling at lower frequencies [20]. On the other hand, in the case of classical sampling, to achieve an alias-free spectral analysis the sampling frequency FS must be greater than or equal to two times the input signal bandwidth fmax [3]. For this example fmax = 80 Hz. Therefore, even if x(t) is sampled at 125 Hz (the maximum frequency for the APRSS case), the Nyquist sampling criterion is violated. Hence, aliasing is expected on the spectrum. This theoretical conclusion is shown in the top-left part of Figure 6.3, where the 80 Hz component is aliased into the in-band spectrum. Recall that the in-band spectrum lies between [0; FP/2] Hz. For this example, in the classical case, FP = FS = 125 Hz and in the APRSS case, FP = 500 Hz. For ease of observation, the corresponding upper edges of the in-band spectra are marked as dashed lines on different parts of Figure 6.3.


Figure 6.3. Spectra obtained with the GDFT. The case of uniform sampling (top-left), the case of APRSS

and 125 samples (top-right), the case of APRSS and 375 samples (bottom-left) and the case of APRSS and

750 samples (bottom-right).

Note that when the GDFT is employed for analyzing a uniformly sampled signal it specifies itself as the DFT. In this case, tn in Equation 6.6 becomes n.Ts, which is n.(1/125) seconds for this example. Except for the top-left part of Figure 6.3, the remaining parts concern the APRSS case. One can observe the spectrum periodization phenomenon on these parts. They show the input sinusoids' fundamental and periodic peaks at f0 Hz and at 500 ± f0 Hz respectively. Figure 6.3 shows that in the case of the APRSS the aliasing is not direct, but spreads in the form of a wideband noise on the obtained spectra. The level of this noise is not constant and it varies for different lengths of the observed data sets. For larger data sets the noise level is lower, and vice versa. In order to have the wideband noise level tend towards zero, the window function length has to tend towards infinity, which is practically impossible [1, 2, 114]. Therefore, even for reasonably large window lengths there is always a certain level of spectral noise. It can be dangerous especially for low amplitude signal components, because in such a situation the noise can result in a total signal loss.
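The behaviour described above can be reproduced with a few lines of Python (our own sketch of Equation 6.6 and of the APRSS; variable names and the seed are assumptions):

```python
import cmath, math, random

def gdft_mag(xw, t, freqs):
    """|X(f)| of Equation 6.6 for a non-uniformly sampled frame:
    samples xw[n] taken at instants t[n], evaluated at freqs (Hz)."""
    N = len(xw)
    return [abs(sum(x * cmath.exp(-2j * math.pi * f * u)
                    for x, u in zip(xw, t))) / N for f in freqs]

# APRSS: each sampling step drawn pseudo-randomly from {1/100, 1/125} s
random.seed(1)
t, now = [], 0.0
for _ in range(750):
    t.append(now)
    now += random.choice([1 / 100, 1 / 125])

# Two-tone test signal: 0.5 V at 20 Hz plus 0.5 V at 80 Hz
xw = [0.5 * math.sin(2 * math.pi * 20 * u)
      + 0.5 * math.sin(2 * math.pi * 80 * u) for u in t]

m20, m55, m80 = gdft_mag(xw, t, [20.0, 55.0, 80.0])
```

Both tones stand clearly above the broadband noise floor (probed here at 55 Hz, where no component lies); shortening the frame raises that floor, as Figure 6.3 illustrates.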

6.1.2.2 Lomb’s Algorithm

In 1976 Lomb presented a new method for the spectral analysis of non-uniformly sampled signals [119]. This work was based in part on earlier works presented in 1963 by Barning [124] and in 1971 by Vanicek [125]. Afterwards this work was further elaborated in 1982 by Scargle [126]. The algorithm behind Lomb's periodogram is described as follows. If N is the total number of samples in the windowed data set, then their mean and variance can be computed by employing the following equations.


\bar{x} = \frac{1}{N} \sum_{n=1}^{N} x_n \qquad (6.9)

\sigma^2 = \frac{1}{N-1} \sum_{n=1}^{N} \left( x_n - \bar{x} \right)^2 \qquad (6.10)

Now Lomb's periodogram is defined as follows.

! "! "

! "

! "

! "111

2

111

3

4

111

5

111

6

7

&

889

:

;;<

=&*

+

,-.

/ &

(&

889

:

;;<

=&*

+

,-.

/ &

#

$

$

$

$&

#

&

#

&

#

&

#

1

0

2

21

0

_

1

0

2

21

0

_

2

sin

sin

cos

cos

.2

1N

n

n

n

N

n

n

N

n

n

n

N

n

n

t

txx

t

txx

P

>?

>?

>?

>?

0? (6.11)

Here, ω = 2πf and the parameter τ is an offset, proposed by Lomb, which is defined as follows.

\tau = \frac{1}{2\omega} \arctan \left( \frac{\sum_{n=1}^{N} \sin 2\omega t_n}{\sum_{n=1}^{N} \cos 2\omega t_n} \right) \qquad (6.12)

Lomb has shown that this choice of τ makes P(ω) completely independent of shifting all tn by any constant, i.e. from one windowed segment to another. Moreover, it makes the computation of P(ω) identical to the estimation of the harmonic content of a data set at a given frequency ω by linear least-squares fitting to the model [119]. This gives some insight into why the method can give results superior to the DFT based methods: it weights the data on a per-point basis instead of on a per-time-interval basis, while non-uniform sampling can render the latter seriously erroneous.
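A direct transcription of Equations 6.9 to 6.12 can be sketched as follows; the function name and the frequency grid are illustrative, not part of the original formulation.

```python
import numpy as np

def lomb_periodogram(t, x, freqs):
    """Sketch of Lomb's normalized periodogram (Equations 6.9-6.12)."""
    t = np.asarray(t, dtype=float)
    x = np.asarray(x, dtype=float)
    xd = x - x.mean()                 # subtract the mean, Equation 6.9
    var = x.var(ddof=1)               # variance with 1/(N-1), Equation 6.10
    P = np.empty(len(freqs))
    for k, f in enumerate(freqs):
        w = 2.0 * np.pi * f
        # Offset tau (Equation 6.12): makes P(w) invariant to time shifts.
        tau = np.arctan2(np.sum(np.sin(2 * w * t)),
                         np.sum(np.cos(2 * w * t))) / (2 * w)
        c = np.cos(w * (t - tau))
        s = np.sin(w * (t - tau))
        # Equation 6.11: squared projections normalized by their norms.
        P[k] = ((xd @ c) ** 2 / (c @ c) + (xd @ s) ** 2 / (s @ s)) / (2 * var)
    return P
```

Applied to a non-uniformly sampled sinusoid, the highest peak of P is expected at the sinusoid frequency.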

In most cases, the spectrum obtained with Lomb's periodogram is the sum of periodic signal components and an independent wideband noise. The question is how to determine the presence or absence of such a periodic component. One quantitative answer is to determine how significant a peak in the spectrum P(ω) is. From a number of investigated examples, Lomb concluded that even with a low SNR there is a fairly large probability that the highest peak in the function P(ω) corresponds to the frequency of a sinusoidal component of the signal. Thus, if there is a periodic component with frequency ωp, then the periodogram P(ω) will be large near ωp. This is an indication of the existence of a periodic signal and can be used for its detection.

In order to explore the performance of Lomb's algorithm, a simulation is performed. In this case, the same input signal x(t) and the same APRSS parameters are employed as in the case of the GDFT (cf. Section 6.1.2.1). Lomb's algorithm is applied on 125 data points and the result is plotted in Figure 6.4.


Figure 6.4. Power spectrum obtained with Lomb's periodogram, for the case of the APRSS with 125 samples and a highest analyzed frequency of 650 Hz.

Figure 6.4 shows the input signal peaks at the fundamental frequencies 20 Hz and 80 Hz and at the periodic frequencies 500 ± 20 Hz and 500 ± 80 Hz respectively. Similar to the case of the GDFT, the wideband aliasing noise also exists in this case.

Comparing this spectrum with the one shown in the top-right part of Figure 6.3, it is evident that the noise on the spectrum obtained with the GDFT is of higher amplitude. In the GDFT case, the highest noise peak is about 50% of the input signal peaks, whereas in the case of Lomb's algorithm the highest noise peak is about 25% of the input signal peaks. It follows that the noise level on the spectrum obtained with Lomb's algorithm is about one half of that obtained with the GDFT.

This shows that for the same non-uniform data set, Lomb's periodogram provides better results than the GDFT. The reason is that while estimating the harmonic content of a data set, Lomb's algorithm weights the data on a per-point basis while the GDFT weights it on a per-time-interval basis [119, 126]. The approach of the GDFT results in an increased spectral noise when dealing with non-uniformly sampled signals. However, Lomb's algorithm achieves this higher performance over the GDFT at a considerable price in terms of computational complexity (cf. Equations 6.6, 6.11).

6.2 Spectral Analysis of the Level Crossing Sampled Signal

The spectral analysis of the level crossing sampled signal can also be performed by employing the GDFT or Lomb's algorithm [20, 119, 120, 126]. The main hitch in the employment of these techniques is that they suffer from the problem of wideband spectral noise (cf. Figures 6.3 and 6.4). In fact, this noise causes erroneous results, which is undesired in all cases and is especially dangerous for the low amplitude signal components [44].


Techniques exist in the literature that deal specifically with the spectral analysis of level crossing sampled signals; two interesting examples are [40, 53]. In [53], Aeschlimann has presented a method based upon transforming the level crossing sampled signal from the digital to the analog domain and then performing the FT on it. Hence, this approach is not applicable in digital analyzers. The technique presented by Greitans computes the level crossing sampled signal spectrum in the digital domain [40]. It is mainly a combination of the short time Fourier transform and the Wigner-Ville distribution. This technique provides interesting results but requires an extensive computational load.

In an attempt to achieve an efficient spectral analysis of level crossing sampled signals, a new technique is proposed [44]. It digitally processes the level crossing sampled signal and provides results of appropriate quality at a lower processing load. Details of the proposed technique are given in the following.

6.2.1 Spectral Analysis Based on Activity Selection and Resampling

The proposed spectral analysis technique mainly consists of three steps. The first step is to select the relevant (active) parts of the level crossing sampled signal. It is achieved by employing the ASA (Activity Selection Algorithm) (cf. Chapter 5). The second step is to uniformly resample the data lying in each selected window. It can be done by employing an appropriate interpolation method; its choice is application dependent. Finally, the third step is to compute the FFT (Fast Fourier Transform) of each resampled block in order to obtain its spectrum. The block diagram of the proposed system is shown in Figure 6.5.

[Figure 6.5 shows the processing chain: the band-limited analog signal x(t) is digitized by the LCADC (a comparator-driven AADC with its EASA), the non-uniformly sampled signal (xn, tn) is selected by the ASA, the selected signal (xs, ts) is resampled at the resampling frequency Frsi, windowed according to the window decision Di produced by the window selector from the local parameters of Wi and the reference parameters, and finally transformed by the FFT into X(fi).]

Figure 6.5. Block diagram of the proposed spectral analysis system. '____' represents the signal flow, '……' represents the window shape decision flow and '-----' represents the parameters flow at the different system stages.

Figure 6.5 shows that in the proposed case, the level crossing sampled signal obtained at the LCADC output is selected by employing the ASA. The characteristics of each selected part are analyzed and are later employed to adapt the proposed system parameters, such as the resampling frequency and the window function shape, accordingly. The details of this realization are given in the following subsections.
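The three steps of the proposed chain can be sketched for a single ASA-selected window as follows. This is a minimal sketch under assumed names (the reference frequency f_ref, a nearest-neighbour resampler and a plain FFT stage), not the actual implementation of the system.

```python
import numpy as np

def analyze_selected_window(tn, xn, f_ref):
    """Sketch of the proposed chain for one ASA-selected window (tn, xn):
    local rate estimation, nearest-neighbour uniform resampling and FFT."""
    tn = np.asarray(tn, dtype=float)
    xn = np.asarray(xn, dtype=float)
    fs_i = len(tn) / (tn[-1] - tn[0])           # local sampling frequency
    frs_i = f_ref if fs_i > f_ref else fs_i     # resampling frequency choice
    trn = np.arange(tn[0], tn[-1], 1.0 / frs_i) # uniform resampling instants
    idx = np.clip(np.searchsorted(tn, trn), 1, len(tn) - 1)
    # Nearest neighbour: pick the closer of the two bracketing samples.
    xrn = np.where(trn - tn[idx - 1] <= tn[idx] - trn, xn[idx - 1], xn[idx])
    return np.fft.fft(xrn)
```

The adaptive choice of the resampling frequency and the interpolation method are detailed in the following subsections.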


6.2.1.1 Adaptive Rate Sampling

Each selected window obtained at the ASA output can have a specific sampling frequency (cf. Section 5.2). If for the ith selected window Wi the sampling frequency is Fsi, then it can be computed by employing the following equations.

L_i = t_{max_i} - t_{min_i} \qquad (6.13)

Fs_i = \frac{N_i}{L_i} \qquad (6.14)

Here, Li is the length of Wi in seconds and is computed as the difference between the final time stamp tmaxi and the initial time stamp tmini of Wi. Ni is the number of samples lying in Wi.
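Equations 6.13 and 6.14 amount to the following short computation; the function name is illustrative.

```python
def window_sampling_frequency(timestamps):
    """Fsi = Ni / Li, with Li = tmax - tmin (Equations 6.13 and 6.14)."""
    Li = timestamps[-1] - timestamps[0]   # window length in seconds
    Ni = len(timestamps)                  # number of samples in the window
    return Ni / Li
```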

In the proposed system, a reference sampling frequency Fref is chosen such that it remains greater than, and closest to, the Nyquist sampling frequency FNyq. The condition is formally expressed as follows.

F_{ref} \geq F_{Nyq} = 2 \cdot f_{max} \qquad (6.15)

Here, fmax is the input signal bandwidth. The selected signal is resampled uniformly before proceeding towards further processing. The resampling frequency Frsi for Wi is chosen depending upon the values of Fsi and Fref. Once the resampling is done, there are Nri samples in Wi. The choice of Frsi is crucial and its selection procedure is detailed as follows.

For the case Fsi > Fref, Frsi is chosen as Frsi = Fref. It is done in order to resample the selected data lying in Wi closer to the Nyquist frequency. It reduces the computational load of the proposed system in two ways: firstly, by avoiding unnecessary interpolations during the data resampling process and secondly, by avoiding the spectral computation of unnecessary samples [44].

For the case Fsi ≤ Fref, Frsi is chosen as Frsi = Fsi. In this case, it appears that the data lying in Wi may be resampled at a frequency that is less than the Nyquist frequency of x(t). According to [37, 45, 46], if the local signal amplitude Δx(t) is of the order of the maximal range 2Vmax, then for a suitable choice of the LCADC resolution M (application dependent) the signal crosses enough consecutive thresholds. Therefore, it is locally oversampled in time with respect to its local bandwidth. Hence, there is no aliasing problem when the low frequency signal parts are locally oversampled with respect to their local bandwidths. This statement is further illustrated with the results summarized in Table 6.2.
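The two cases above reduce to a simple selection rule, sketched here with illustrative names; the values can be checked against Table 6.2.

```python
def choose_resampling_frequency(fs_i, f_ref):
    """Frsi = Fref when the local rate exceeds Fref, else Frsi = Fsi."""
    return f_ref if fs_i > f_ref else fs_i
```

For W4 of the illustrative example (Fs4 = 4291 Hz, Fref = 1500 Hz) this yields Frs4 = 1500 Hz, while for W1 (Fs1 = 49 Hz) it keeps Frs1 = 49 Hz.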

Due to the resampling process there will be an additional error. Nevertheless, prior to this transformation, one can take advantage of the inherent over-sampling of the relevant signal parts in the system [44-51]. Hence, it adds to the accuracy of the post-resampling process [37, 38]. The NNRI (nearest neighbour resampling interpolation) is employed for the data resampling purpose. The NNRI algorithm and the reasons for its choice are detailed as follows.


6.2.1.2 NNRI (Nearest Neighbour Resampling Interpolation)

The nearest neighbour resampling interpolation is also known as proximal interpolation. In this case, the amplitude of a resampled observation xrn corresponding to a time instant trn is set according to the following algorithm.

If (dtr1n ≤ dtr2n)
    xrn = xn-1
else
    xrn = xn
end

The process is further depicted in Figure 6.6.

[Figure 6.6 depicts a resampling instant trn lying between the level crossing samples (xn-1, tn-1) and (xn, tn), separated from them by the distances dtr1n and dtr2n respectively; dtn is the interval between tn-1 and tn.]

Figure 6.6. Description of the NNRI process.

In Figure 6.6, (xn, tn) and (xn-1, tn-1) are the time-amplitude pairs of the nth and the (n-1)th level crossing samples respectively. The values of dtr1n and dtr2n are given as follows.

dtr1_n = tr_n - t_{n-1} \qquad (6.16)

dtr2_n = t_n - tr_n \qquad (6.17)

Now, if dtr1n is less than or equal to dtr2n, xrn is chosen equal to xn-1. Otherwise, xrn is chosen equal to xn. In other words, xrn is chosen equal to the amplitude of the level crossing sample that lies nearest to it.
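A vectorized sketch of the NNRI, under assumed names and with a uniform grid derived from the resampling frequency Frsi:

```python
import numpy as np

def nnri_resample(tn, xn, frs):
    """Nearest neighbour resampling (Equations 6.16, 6.17 and the rule above).

    tn, xn : ascending non-uniform instants and their amplitudes.
    frs    : uniform resampling frequency in Hz.
    """
    tn = np.asarray(tn, dtype=float)
    xn = np.asarray(xn, dtype=float)
    trn = np.arange(tn[0], tn[-1], 1.0 / frs)        # uniform instants
    idx = np.clip(np.searchsorted(tn, trn), 1, len(tn) - 1)
    dtr1 = trn - tn[idx - 1]                         # Equation 6.16
    dtr2 = tn[idx] - trn                             # Equation 6.17
    xrn = np.where(dtr1 <= dtr2, xn[idx - 1], xn[idx])
    return trn, xrn
```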

The interpolation process changes the properties of the resampled signal compared to the original one. The error depends on the interpolation technique used for the resampling purpose. In [121, 122], Waele and Broersen have categorized the interpolation methods as simple and complex ones. The simple interpolation methods use only one non-uniform sample for one resampled observation, such as the Sample & Hold (S&H) and the NNRI. Whereas, the complex interpolation methods, such as the linear and the cubic spline, use more than one non-uniform sample for each resampled observation.


Since the simple interpolation methods use only one non-uniform sample for each resampled observation, they are efficient in terms of computational complexity. Moreover, they provide an unbiased estimate of the original signal variance [121, 122]. For this reason they are also known as robust interpolation methods.

It is obvious that the complex interpolation methods have a higher computational complexity compared to the simple ones. Another disadvantage of the complex methods is that the variance can be estimated erroneously. The variance of a resampled signal obtained with the linear interpolation is lower than the original signal variance. This can be understood by considering the fact that the linear interpolation is a weighted average of two non-uniform observations. Similarly, the cubic spline interpolation results in a resampled signal of higher variance compared to the original one [121]. As we are looking for computationally efficient solutions, the above discussed facts justify paying more attention to the simple interpolation methods.

In the case of the S&H interpolation, the value of a resampled observation is set equal to the non-uniform sample prior to it. Whereas, in the case of the NNRI, the value of a resampled observation is set equal to the nearest non-uniform sample (cf. Figure 6.6). As we are resampling the data uniformly, the interval Trn between two consecutive resampled data points is constant. The value of Trn is different from the various intervals Tnj between the non-uniformly spaced samples used for the resampling. The phenomenon is shown in Figure 6.7. Here, j = 1 for the NNRI and j = 2 for the S&H interpolation respectively. This difference between the values of Trn and Tnj causes a deviation between the properties of the original and resampled signals. The higher the difference, the greater the deviation, and vice versa [121, 122].


Figure 6.7. Trn is the interval between two resampled data points, Tn1 is the interval between the non-

uniform samples used for the NNRI and Tn2 is the interval between the non-uniform samples used for the

S&H interpolation. ‘o’ symbol is used for the non-uniform data and ‘+’ symbol is used for the resampled

data.

Note that in Figure 6.7, the signal is resampled by employing the NNRI. According to [121, 122], the mean square deviation between Trn and Tn1 is smaller than that between Trn and Tn2. The phenomenon is further clarified by the simulation results presented in Table 6.5.

The above discussion shows that among the simple interpolation methods, the NNRI performs better than the S&H. It keeps the properties of the resampled signal closer to those of the original one compared to the S&H. This is the reason for our inclination towards the NNRI.


6.2.1.3 Adaptive Shape Windowing

In Figure 6.5, the window selector block implements the condition given by Expression 6.18. Its output is the window decision Di, which drives the switch state for Wi.

! " *+

,-.

/ A&#) &

2

01

1

TttTdandLLIf i

end

ii

ref

i (6.18)

In Expression 6.18, Li is the length of Wi in seconds. Lref is the reference window length in seconds, which is chosen according to the input signal characteristics and the system resources (cf. Section 5.1.1). Lref poses the upper bound on Li. t1i and tendi-1 represent the first and the last sampling instants of the ith and the (i-1)th selected windows respectively.

Jointly, the ASA and the window selector provide an efficient spectral leakage reduction. Usually, an appropriate smoothing (cosine) window function is employed to reduce the signal truncation (cf. Section 6.1.1). For the proposed case, as long as condition 6.18 is true, the leakage problem is resolved by avoiding the signal truncation [44]. As no signal truncation occurs, no cosine window is required. In this case, Di is set to 1, which drives the switch to state 1 in Figure 6.5. Otherwise, an appropriate cosine window is employed to reduce the signal truncation. In this case, Di is set to 0, which drives the switch to state 0 in Figure 6.5.
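The window decision can be sketched as follows. The condition encoded here, that no truncation occurred when the window is shorter than Lref and is separated from the previous window by more than the time interval T, is an assumed reading of condition 6.18; the function and parameter names are illustrative.

```python
def window_decision(L_i, L_ref, t1_i, t_end_prev, T):
    """Di = 1 (rectangular window): the activity ended before reaching Lref
    and the window is not contiguous with the previous one, so no
    truncation occurred.  Di = 0: a cosine (Hanning) window is applied.

    NOTE: this encodes an assumed reading of condition 6.18."""
    no_truncation = (L_i < L_ref) and (t1_i - t_end_prev > T)
    return 1 if no_truncation else 0
```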

6.2.2 Illustrative Example

In order to illustrate the proposed spectral analysis technique and to compare its performance with the GDFT and Lomb's algorithm, an input signal x(t), shown at the top of Figure 6.8, is employed. Its total duration is 20 seconds and it consists of three active parts. The summary of the x(t) activities is given in Table 6.1.

Activity   Signal Component                              Length (Sec)
1st        0.6.sin(2.pi.2.t) + 0.3.sin(2.pi.8.t)         2.0
2nd        0.45.sin(2.pi.3.t) + 0.45.sin(2.pi.70.t)      0.5
3rd        0.6.sin(2.pi.6.t) + 0.3.sin(2.pi.500.t)       0.25

Table 6.1. Summary of the input signal activities.

Table 6.1 shows that x(t) is band limited between 2 and 500 Hz. In this example, x(t) is sampled by employing a 3-bit resolution AADC [43, 95]. Thus, Fsmax and Fsmin become 7 kHz and 28 Hz respectively (cf. Equations 5.4 and 5.5). Fref = 1.5 kHz is chosen, which satisfies the criterion given in Section 6.2.1.1. The AADC amplitude range 2Vmax = 1.8 V is chosen; thus, q becomes 0.2571 V in this case (cf. Equation 4.24).

For this example, the reference window length Lref = 1 second is chosen. It satisfies the criteria discussed in Section 5.1.1. The given Lref delivers Nmax = 14000 samples in this case (cf. Equation 5.7). The ASA delivers four selected windows for the whole x(t) span of 20 seconds, which are


shown in the bottom part of Figure 6.8. The first two selected windows correspond to the first activity, and the others correspond to the second and the third activities respectively. The first two selected windows are not distinguishable because they lie consecutively on the first activity. Therefore, in order to make them distinguishable, lines are plotted on the boundaries of these windows (cf. Figure 6.8). The selected windows parameters are displayed in Table 6.2.

[Figure 6.8: two panels sharing a 0-20 second time axis and a -1 to 1 amplitude axis.]

Figure 6.8. The input signal (top) and the selected signal (bottom).

Selected Window   Li (Sec)   Ni (Samples)   Fsi (Hz)   Fref (Hz)   Frsi (Hz)   Nri (Samples)
W1                1.01       50             49         1500        49          50
W2                0.96       46             48         1500        48          46
W3                0.49       258            518        1500        518         258
W4                0.25       1072           4291       1500        1500        375

Table 6.2. Summary of the selected windows parameters.

Table 6.2 exhibits interesting features of the proposed spectral analysis technique. These are achieved due to the smart combination of the non-uniform and the uniform signal processing tools. Fsi represents the sampling frequency adaptation following the local variations of x(t). Ni shows that the relevant signal parts are locally oversampled in time with respect to their local bandwidths. Hence, M = 3 is sufficient in the studied example for satisfying the Nyquist sampling criterion locally for each selected window. Frsi shows the adaptation of the resampling frequency for Wi. It further adds to the computational gain of the proposed technique by avoiding unnecessary interpolations during the resampling process. Nri shows how the adjustment of Frsi avoids


the processing of useless samples during the spectral computation. Li exhibits the dynamic feature of the ASA, which is to correlate the window function length with the local variations of x(t).

The proposed technique also adapts the window shape (rectangular or cosine) for Wi. Condition 6.18 becomes false for W1 and W2; thus, a signal truncation can occur in this case. Therefore, Di is set to 0 and cosine (Hanning) windows are employed to reduce the truncation effect. On the other hand, condition 6.18 remains true for the third and the fourth selected windows; thus, Di is set to 1. As no signal truncation occurs, no cosine window is required in this case.

In the classical case, if the sampling frequency Fs is chosen equal to Fref in order to satisfy the Nyquist sampling criterion for x(t), then the whole signal is sampled at 1.5 kHz, regardless of its variations. Moreover, the windowing process is not able to select only the active parts of the sampled signal. In addition, the window function length and shape remain static and cannot adapt to the local variations of x(t). Therefore, to avoid the signal truncation, an appropriate cosine window function has to be employed for all data segments [117]. For this example, if L = 1 second is chosen, then it leads to 20 windows of 1 second length for the total x(t) span of 20 seconds. This static nature makes the classical system process unnecessary samples and thus causes an increased computational activity compared to the proposed case.

6.2.2.1 Comparison of the Proposed Technique with the GDFT and the Lomb's Algorithm

A comparison of the proposed spectral analysis technique with the GDFT and Lomb's algorithm is made in terms of the spectral quality and the computational complexity. It is done by employing the results of the above discussed example. The methodology employed for the comparison is shown in Figure 6.9.

[Figure 6.9 shows the LCADC and the ASA front end delivering the selected signal (xs, ts) to three parallel branches: the resampler/window/FFT chain of the proposed technique, the GDFT and Lomb's algorithm.]

Figure 6.9. Block diagram of the comparison methodology.

Figure 6.9 shows that in all cases the spectrum of the selected signal obtained with the ASA is computed. In the proposed case, the selected signal is first uniformly resampled and then windowed according to Di (cf. Section 6.2.1.3), before proceeding towards the FFT stage. On the other hand, the GDFT and Lomb's algorithm are directly employed on the selected signal.


Spectra of the first selected window obtained with the GDFT, Lomb's algorithm and the proposed technique are plotted in Figure 6.10. It shows that the quality of the spectrum obtained with the proposed technique is better than those obtained with the GDFT or Lomb's algorithm. For the proposed case, the spectrum peaks corresponding to the signal components at the fundamental frequencies f0 and at the periodic frequencies Fp ± f0 are visible in the bottom part of Figure 6.10. In this case, Fp is set to Frs1 and its value is 49 Hz. Here, Frs1 is the resampling frequency of W1, decided by employing the procedure detailed in Section 6.2.1.1. On the contrary, the spectra obtained with the GDFT and Lomb's algorithm display noise peaks. In the studied case, the amplitude of this noise is even higher than the second component peak of the analyzed signal. Thus, the required information is buried in the noise. Here, the second component corresponds to the sinusoid of 8 Hz frequency. The noise level on the spectrum obtained with Lomb's algorithm is lower than that obtained with the GDFT, but it will always cause problems in the proper signal analysis [44].

[Figure 6.10: three panels over a 0-50 Hz frequency axis and a 0-0.4 amplitude axis.]

Figure 6.10. Spectra obtained by applying the GDFT (top), the Lomb's algorithm (middle) and the proposed technique (bottom).

A computational complexity comparison of the proposed technique with the GDFT and Lomb's algorithm is also made. It is done by considering the number of operations executed to perform the algorithm. For simplicity, it is assumed that each operation, such as an addition, a comparison or a multiplication, has an equal computational cost.

Figure 6.9 shows that the comparison is applied on the selected data obtained with the ASA. In the proposed case, the data lying in each selected window is resampled with the NNRI. The process of performing the NNRI is detailed below.

For each interpolation instant trn, the interval of non-uniform samples [tn; tn+1] within which trn lies is determined. Then the distance of trn to each of tn and tn+1 is computed, and a comparison between the computed distances is performed to decide the smaller of the two. For Wi, the complexity of the first step is Ni+Nri comparisons and the complexity of the second step is 2Nri


additions and Nri comparisons. Hence, the total NNRI complexity for Wi becomes Ni+2Nri comparisons and 2Nri additions.

Depending upon the window decision Di, the window function shape is adapted for Wi (cf. Figures 6.5 and 6.9). If Di = 0, then a cosine window function is applied on the resampled data, which performs Nri multiplications. For Di = 0, the weighting of the windowed data is also required, which performs another Nri multiplications. Finally, the spectrum of the windowed signal is obtained by computing its FFT. According to [41, 116], the complexity of the FFT is Nri log2 Nri. Hence, the combined computational complexity C1 of the proposed technique for the ith selected window is given as follows.

C1_i = N_i + 4 \cdot Nr_i + \beta \cdot 2 \cdot Nr_i + Nr_i \cdot \log_2 Nr_i \qquad (6.19)

Here, β is a multiplying factor; its value is 0 for Di = 1 and 1 for Di = 0.

The GDFT computational complexity is (Ni)^2 [1, 2]. Here, Ni is the number of samples lying in Wi. In Lomb's algorithm there are not only additions and multiplications but also four trigonometric functions (cf. Equation 6.11). The operation count can easily reach several hundred times (Ni)^2 [120]. The processing gain of the proposed technique over the GDFT or Lomb's algorithm can be calculated by using the following equation.

G_c = \frac{\alpha \cdot \sum_{i=1}^{I} N_i^2}{\sum_{i=1}^{I} \left( N_i + 4 \cdot Nr_i + \beta \cdot 2 \cdot Nr_i + Nr_i \cdot \log_2 Nr_i \right)} \qquad (6.20)

In Equation 6.20, α is a multiplying factor. Its value is 1 for the GDFT and is in the range of hundreds for Lomb's algorithm. The parameter i = 1, …, I is the selected window index.
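Equations 6.19 and 6.20 can be checked numerically against the illustrative example; the window parameters below are taken from Table 6.2, and the function names are illustrative.

```python
import math

def c1(Ni, Nri, Di):
    """Operation count of the proposed technique for one window (Eq. 6.19)."""
    beta = 0 if Di == 1 else 1
    return Ni + 4 * Nri + beta * 2 * Nri + Nri * math.log2(Nri)

def gain(windows, alpha=1.0):
    """Processing gain over the GDFT (alpha = 1), Equation 6.20."""
    numerator = alpha * sum(Ni ** 2 for Ni, _, _ in windows)
    denominator = sum(c1(Ni, Nri, Di) for Ni, Nri, Di in windows)
    return numerator / denominator

# (Ni, Nri, Di) for W1..W4 of the illustrative example, cf. Table 6.2.
windows = [(50, 50, 0), (46, 46, 0), (258, 258, 1), (1072, 375, 1)]
```

With these parameters, gain([windows[3]]) evaluates to about 198.9 and gain(windows) to about 118, matching the W4 and collective entries of Table 6.3.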

The computational gain of the proposed technique compared to the GDFT and Lomb's algorithm is computed by employing the results of the illustrative example. It is done individually for each selected window and finally for the overall gain. The results are summarized in Table 6.3.

Selected Window                   Gc over GDFT/Lomb's Algorithm
W1                                α × 4.0
W2                                α × 3.6
W3                                α × 19.8
W4                                α × 198.9
Collective gain for all windows   α × 118.0

Table 6.3. Summary of the computational gain compared to the GDFT and the Lomb's algorithm.


Table 6.3 shows a gain of more than two orders of magnitude for the proposed technique over the GDFT and Lomb's algorithm, collectively for all selected windows. This gain is achieved due to the joint benefits of the non-uniform and the uniform signal processing tools. In combination, they provide a smart adaptation of the resampling frequency Frsi along with an efficient spectrum computation. It leads to a significant reduction of the total number of operations of the proposed technique compared to the GDFT and Lomb's algorithm.

6.2.2.2 Comparison of the Proposed Technique with the Classical Approach

A comparison of the proposed spectral analysis technique with the classical one, in terms of the spectrum quality and the computational complexity, is also made. The classical spectral analysis methodology is shown in Figure 6.11.

[Figure 6.11 shows the classical chain: the band-limited analog signal x(t) is digitized by an ADC, the uniformly sampled signal (xn) is multiplied by the window function (wn) and the windowed signal (xwn) is transformed by the FFT into X(f).]

Figure 6.11. Block diagram of the classical spectral analysis system.

In the classical case, the sampling frequency as well as the window function length and shape remain time invariant [1, 2]. Thus, they have to be chosen for the worst case. In order to make an appropriate comparison, the sampling frequency Fs and the effective window function length L in the classical case are chosen equal to Fref and Lref of the proposed case. This choice provides 20 windows, each of one second length and 1500 samples, in the classical case.

If the same signal part that lies in W1 lies completely within one of these 20 windows in the classical case, then its spectrum can be plotted as shown in Figure 6.12.

[Figure 6.12: a single panel over a 0-1500 Hz frequency axis and a 0-0.4 amplitude axis, titled "Spectrum Obtained in the Classical Case".]

Figure 6.12. Spectrum obtained in the classical case.


The input signal peaks at the fundamental and periodic frequencies are clear in Figure 6.12. This spectrum is quite comparable with the one shown at the bottom of Figure 6.10, which indicates that the results delivered by the proposed technique are of similar quality for the studied example.

By zooming into the spectrum shown at the bottom of Figure 6.10, we have found that the components corresponding to 2 Hz and 8 Hz lay at 1.9997 Hz and 7.9989 Hz respectively. Compared to the ideal spectrum, the relative error in this case is thus 0.03 % and 0.11 % respectively. This error is due to the minor leakage produced by the proposed approach. Such a minor leakage usually occurs in a practical system, because it can only process finite time length data sets. This limitation never allows the system to generate ideal spectral impulses corresponding to the input signal components [115-117].

The main observable difference between these two spectra is their corresponding periodic frequency FP. In the classical case, FP = FS remains unique for all windowed segments. On the contrary, in the proposed case, FPi = Frsi can be specific to Wi (cf. Table 6.2). This adaptive feature further adds to the proposed technique's computational efficiency (cf. Table 6.3).

Similar to Section 6.2.2.1, a complexity comparison is also made between the proposed technique and the classical one. In the classical case, if N is the number of samples laying in the windowed segment, then the cosine windowing operation performs N multiplications between wn and xn (cf. Equation 5.1). The windowed segment is weighted, which further requires N multiplications [123]. Finally, the spectrum of the windowed data is obtained by computing its FFT. As the FFT complexity is N.log2(N), the overall computational complexity C2 of the classical case becomes as follows.

C2 = A.(2.N + N.log2(N)) (6.21)

Here, A is the total number of windows occurring during the observation length of x(t).
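Equation 6.21 can be evaluated directly for the example above; a minimal sketch, assuming A = 20 windows of N = 1500 samples each as stated earlier:

```python
import math

def classical_complexity(A, N):
    """Operation count of the classical chain (Equation 6.21):
    N windowing multiplications plus N weighting multiplications,
    plus N.log2(N) for the FFT, repeated for each of the A windows."""
    return A * (2 * N + N * math.log2(N))

C2 = classical_complexity(A=20, N=1500)   # roughly 3.8e5 operations
```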

For the proposed approach, Fsi, Frsi and the window function length plus shape are not fixed; they are adapted for each Wi. To achieve this adaptation, the approach locally requires some extra operations for each selected window in comparison to the classical case. The ASA performs 2.Ni comparisons, Ni increments and Ni additions for Wi (cf. Section 5.1.1). The choice of Frsi and of the window function requires three comparisons. Later on, the following three steps are performed: the selected signal is resampled, a cosine window is employed on it if required, and finally its FFT is computed. The complexity C1 of these last three steps has already been deduced in Section 6.2.2.1 (cf. Equation 6.19). Hence, the overall complexity C3 of the proposed approach is given as follows.

C3 = 5.Ni + 4.Nri + Nri.log2(Nri) + 3 (6.22)

The computational gain Gc2 of the proposed technique compared to the classical one can be calculated by computing the ratio between C2 and C3. It is done by employing the results of the above discussed example for different time spans of x(t). The gains are summarized in Table 6.4.


Time Span (Sec.)     GC2 = C2 / C3
L1                   22.5
L2                   24.7
L3                    4.3
L4                    1.9
Total x(t) span      23.4

Table 6.4. Summary of the computational gain compared to the classical approach.

The computational gain of the proposed technique over the classical one is clear from the above results. In the case of W4, the resampling frequency is the same as in the classical case (cf. Table 6.2), yet a gain is still achieved by the proposed approach. This is only due to the fact that the ASA correlates the window length to the activity (0.25 second), while the classical case computes during the total duration of L = 1 second. Gains are of course much larger for the other windows, since the proposed technique takes benefit of processing fewer samples. When treating the whole x(t) span of 20 seconds, the proposed technique also takes advantage of the idle x(t) parts, which further improves its gain compared to the classical case.

This efficiency is achieved thanks to the joint benefits of the non-uniform and the uniform processing tools. The non-uniform tools employed in the proposed case are the LCSS, the AADC and the ASA. They adapt the sampling frequency Fsi, the resampling frequency Frsi and the window function length by following the local variations of x(t). This adaptation of the system parameters results in a drastic reduction of the number of samples to be processed in the proposed case. Moreover, on the uniform tools side, the FFT is employed, which provides an efficient spectral representation of the resampled data. Collectively, the LCSS, the AADC, the ASA and the FFT lead to a drastic reduction in the number of operations compared to the classical case (cf. Table 6.4).

6.2.2.3 Mean Square Deviation

According to Section 6.2.1.2, the mean square deviation between Trn and Tnj is smaller in the case of the NNRI than in the case of the S&H [121, 122]. This phenomenon is studied by employing the illustrative example. The obtained time deviations in both cases are summarized in Table 6.5.

Selected Window    Mean Square Deviation           Mean Square Deviation
                   Between Trn and Tn1 (NNRI)      Between Trn and Tn2 (S&H)
W1                 3.6e-5                          1.8e-4
W2                 3.6e-5                          1.8e-4
W3                 1.1e-5                          4.9e-5
W4                 9.9e-6                          2.6e-5

Table 6.5. Comparison of the mean square deviation between Trn and Tnj for the NNRI and the S&H, for each selected window.

Table 6.5 verifies the above statement. It follows that the NNRI keeps the resampled signal properties closer to the original ones, which justifies its employment instead of the S&H in the proposed system.
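This behaviour can also be checked numerically: given the non-uniform instants tn and a uniform resampling grid trn, the NNRI picks the nearest instant while the S&H keeps the most recent past one, so the NNRI time deviation can only be smaller or equal. A minimal sketch with illustrative instants (not those of Table 6.5):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative non-uniform (level-crossing) instants and a uniform grid
tn = np.sort(rng.uniform(0.0, 1.0, 200))
trn = np.linspace(0.05, 0.95, 64)

def msd(grid, picked):
    """Mean square deviation between the target and picked instants."""
    return np.mean((grid - picked) ** 2)

# NNRI: nearest non-uniform instant to each uniform instant trn
idx = np.clip(np.searchsorted(tn, trn), 1, len(tn) - 1)
left, right = tn[idx - 1], tn[idx]
nnri = np.where(trn - left <= right - trn, left, right)

# S&H: the most recent past instant (zero-order hold)
sh = tn[np.clip(np.searchsorted(tn, trn, side='right') - 1, 0, None)]

msd_nnri, msd_sh = msd(trn, nnri), msd(trn, sh)
```

Since the S&H candidate is always one of the two neighbours considered by the NNRI, msd_nnri never exceeds msd_sh, in line with Table 6.5.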


6.2.2.4 Resampling Error

Resampling is performed in the proposed technique, which changes the properties of the resampled signal with respect to the original one [121, 122]. This error mainly consists of two effects: the time-amplitude pair uncertainties and the interpolation error. The time-amplitude pair uncertainties occur due to the finite precision of the AADC timer and threshold levels [37, 38, 43]. These uncertainties accumulate in the interpolation process and result in the final error [38, 46]. Considering the combined error effect, the mean resampling error for Wi can be computed by employing the following Equation.

MREi = (1/Nri) . Σ_{n=1..Nri} |xon − xrn| (6.23)

Here, xrn is the nth resampled observation, interpolated with respect to the time instant trn, and xon is the original sample value which would be obtained by sampling x(t) at trn. In the studied example x(t) is analytically known, thus it is possible to compute its original sample values at any given time instant. This allows computing the resampling error introduced by the proposed technique by employing Equation 6.23. The mean resampling error is calculated for each selected window. The results are summarized in Table 6.6.

Selected Window    W1      W2      W3      W4
MREi (dB)          -21.8   -21.8   -20.5   -18.6

Table 6.6. Mean resampling error for each selected window.

Table 6.6 shows that the error introduced by the resampling process is quite minor. The maximum error occurs for W4 and it is -18.6 dB. For high precision applications, the resampling error can be further reduced by increasing the AADC resolution M and the interpolation order [38, 46, 127, 146]. Thus, an increased accuracy can be achieved at the cost of an increased computational load. Therefore, by making a suitable compromise between the accuracy level and the computational load, an appropriate solution can be devised for a given application.
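Equation 6.23 can be sketched as follows. The linear interpolation, the 20.log10 dB conversion and the 2 Hz test tone are illustrative assumptions; the equation itself only prescribes averaging the deviations between the interpolated values xrn and the true samples xon.

```python
import numpy as np

def mean_resampling_error_db(x, t_nonuniform, t_resample):
    """Mean resampling error for one window (cf. Equation 6.23):
    average deviation between the interpolated values xr_n and the
    true samples xo_n = x(tr_n). Linear interpolation and the
    20.log10 dB conversion are illustrative assumptions."""
    xr = np.interp(t_resample, t_nonuniform, x(t_nonuniform))
    xo = x(t_resample)
    mre = np.mean(np.abs(xo - xr))
    return 20 * np.log10(mre)

# Illustrative window: a 2 Hz tone observed at random instants
rng = np.random.default_rng(1)
tn = np.sort(rng.uniform(0.0, 1.0, 300))
trn = np.linspace(0.05, 0.95, 128)
err_db = mean_resampling_error_db(lambda t: np.sin(2 * np.pi * 2 * t), tn, trn)
```

With a dense set of observation instants the interpolation error stays far below unity, hence the negative dB figures of Table 6.6.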

These results also demonstrate an interesting feature of the AADC, which is to adapt its resolution along with the input signal frequency variations. For a fixed resolution M and timer frequency FTimer, the AADC represents the low frequency signal components with high precision and vice versa (cf. Section 4.2). This is the reason why the part of the resampling error introduced by the time-amplitude pair uncertainties is lower for the first two selected windows than for the third and the fourth ones. It results in an overall reduced resampling error for the first two selected windows compared to the remaining ones (cf. Table 6.6).

6.3 Conclusion

Spectral analysis is an elementary tool, employed in almost every modern field such as biomedicine, astronomy, telecommunications, electronics, etc. The basic concepts of spectral analysis and the spectral leakage phenomenon have been reviewed. The process of analyzing the non-uniformly


sampled signals is discussed. The GDFT and the Lomb's algorithm are the commonly employed techniques for this purpose. Their main concepts have also been discussed. It is shown that the Lomb's algorithm provides a better spectrum quality than the GDFT, but possesses a higher computational load.

A novel technique for the spectral analysis of level-crossing sampled signals has been devised. It adapts its sampling frequency Fsi for each selected window by following the local variations of x(t). Criteria to choose the appropriate reference frequency Fref and reference window length Lref have been developed. A complete methodology for choosing the resampling frequency Frsi and the window function length plus shape for the ith selected window has been demonstrated.

A comparison of the proposed technique with the GDFT and the Lomb’s algorithm is made. It is

shown that the proposed approach outperforms the GDFT and the Lomb’s algorithm in terms of

spectral quality and processing cost.

The proposed approach is interesting for time-varying signals. It is especially well adapted for signals which remain constant most of the time and vary sporadically. For such signals the proposed technique can achieve a drastic computational gain over the classical one, as demonstrated with the help of an illustrative example. Results show a 23.4 times gain of the proposed technique over the classical one. It is achieved thanks to the proposed technique's smart features, which are the activity selection, the local features extraction and the smart spectral representation. They result in a drastic reduction of the total number of operations, and consequently of the energy consumption, compared to the classical case.

The employment of the proposed technique for analysing the speech signal will be described in Chapter 10. The development of smart tools, which can further enhance the proposed technique's performance in terms of quality and processing efficiency, is left as future work.


Part-II Chapter 7 Signal Driven Adaptive Rate Filtering

Chapter 7

SIGNAL DRIVEN ADAPTIVE RATE FILTERING

In a variety of important applications, it is of interest to change the relative amplitudes of the frequency components in a signal or perhaps eliminate some frequency components entirely, a process referred to as filtering [134]. The classical filtering techniques are time-invariant. They process the input signal by employing a fixed order filter, which operates at a fixed sampling rate [1, 2, 123]. As most real signals are time-varying in nature, the time-invariant nature of classical filtering results in an increased processing activity [45-47]. This shortcoming of classical filtering can be resolved up to a certain extent by employing the multirate filtering techniques [128-132]. Following the multirate filtering principle, the adaptive rate filtering techniques are devised [45-47, 50, 51, 146]. They are based on the principle of activity selection and local features extraction, which enables them to adapt their sampling frequency and filter order by following the input signal local variations. This idea leads to a drastic gain in computational efficiency, and hence in processing power, of the proposed system compared to the classical approach.

The classical filtering process is briefly reviewed in Section 7.1. Section 7.2 describes the multirate filtering principle. A detailed description of the proposed techniques is given in Section 7.3. Section 7.4 describes the proposed techniques' features with the help of an illustrative example. The computational complexities of the proposed techniques are deduced and compared among themselves and with the classical one in Section 7.5. Section 7.6 deals with the processing error. In Section 7.7, methods to enhance the proposed techniques' performance are described. A system level architecture, common to the proposed filtering techniques, is proposed in Section 7.8. Section 7.9 finally concludes the chapter.

7.1 The Filtering Process

Filtering is an elementary operation, employed in almost every signal processing system. Its function is to reduce unwanted signal parts, such as random noise, or to enhance useful signal parts, such as the components laying within a certain frequency range. A good range of filters is available in the literature, like the Chebyshev, Butterworth, Bessel, windowed-sinc, moving average, etc. For a targeted application an appropriate filter should be tactfully chosen, in order to achieve the desired results.

Filters are mainly split into two classes: the analog and the digital. An analog filter uses electronic components such as resistors, capacitors and operational amplifiers to produce the required filtering effect. At all stages, the signal being filtered is an electrical voltage or current which is a direct analogue of the physical quantity. A digital filter employs a digital processor to perform numerical calculations on digitized signal values. The processor may be a general


purpose computer or a specialized DSP (Digital Signal Processor). In a digital filter the signal is represented by a sequence of numbers, rather than by a voltage or current.

There are still many applications where analog filters are used. This is not related to the actual filtering performance but to general features that can be achieved only with analog circuits. The first advantage is speed: digital is slow while analog is fast. The second inherent advantage of analog over digital is the amplitude and frequency dynamic range.

With the recent technological advancements, digital filters are rapidly replacing the analog ones. It is due to the fact that, for the same task, digital filters can achieve far superior results than their analog counterparts [123]. The main advantages of digital over analog filters are their programmability, robustness against temperature variations, lower achievable pass-band ripples, linear phase characteristics, sharper achievable roll-off and stop-band attenuation, potential to achieve a higher SNR, etc. [123].

Digital filters can be realized in a recursive or a non-recursive manner. Another frequently employed terminology for the recursive and the non-recursive filters is IIR (Infinite Impulse Response) and FIR (Finite Impulse Response) respectively. This chapter deals with the FIR filters. The main advantages of the FIR over the IIR filters are the linear phase and the unconditional stability [123, 134]. These characteristics make them preferable for certain applications like equalizers, multirate processing, wavelets, etc.

If hk is the FIR filter impulse response, then in the time domain the filtering is performed by convolving hk with the digitized input signal xn. The process can be formally represented as follows.

yn = hn * xn = Σ_{k=0..P} hk . xn-k (7.1)

Here, yn is the filtered signal, k indexes the filter coefficients and P is the filter order, i.e. the number of previous input samples used to calculate the current output. As the convolution in the time domain becomes a multiplication in the frequency domain [115], the filtering operation in the frequency domain can be expressed as follows.

Y(f) = H(f) . X(f) (7.2)

Here, Y(f) and X(f) are obtained by computing the DFT of yn and xn respectively. H(f) is the FIR filter frequency response and is obtained by computing the DFT of hk. Either hk, through the time domain convolution, or H(f), through the frequency domain multiplication, shapes the input signal spectrum in a multiplicative fashion. It performs filtering by emphasizing some frequency components and by attenuating some others.
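Equation 7.1 translates directly into code. The following is a minimal sketch of direct-form FIR filtering; the 5-tap moving-average coefficients are illustrative, not a filter from the thesis.

```python
import numpy as np

def fir_filter(h, x):
    """Direct-form FIR filtering: y[n] = sum_{k=0..P} h[k] * x[n-k]
    (Equation 7.1). Samples before n = 0 are taken as zero."""
    P = len(h) - 1
    y = np.zeros(len(x))
    for n in range(len(x)):
        for k in range(P + 1):
            if n - k >= 0:
                y[n] += h[k] * x[n - k]
    return y

h = np.ones(5) / 5.0              # illustrative 5-tap moving average
x = np.arange(10, dtype=float)
y = fir_filter(h, x)
```

The double loop makes the convolution sum explicit; the result matches the first len(x) samples of the full linear convolution of h and x.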

The classical FIR filtering is a time-invariant operation [134]. In this case, the input signal is sampled at a fixed frequency regardless of its local variations. A unique fixed order filter is employed in order to filter the sampled data. As both the sampling frequency and the filter remain unique, they have to be chosen for the worst case. This time-invariant nature of the classical filtering causes a useless increase of the computational load. This drawback of classical filtering can be resolved up to a certain extent by employing the multirate filtering approaches [128-132]. The principle of the multirate filtering technique is described in the following section.


7.2 The Multirate Filtering

The multirate filters change the input data rate at one or more intermediate points within the filter, while maintaining an output rate that is identical to that of the input data. These filters can achieve both reduced filter orders and reduced processing rates compared to the standard single rate filter designs. It results in an efficient and easy solution of an otherwise difficult problem [128, 131, 132].

It is known that for fixed design parameters (cut-off frequency, transition-band width, pass-band and stop-band ripples) the FIR filter order varies as a function of the operational sampling frequency: for a high sampling frequency the order is high and vice versa. It leads to the concept of changing the sampling rate downward (decimation), filtering the signal, and then changing the sampling rate upward (interpolation) to the original value. Decimation and interpolation are the fundamental operations of a multirate filtering approach. They make it possible to:

Reduce the sampling rate (fewer samples to process).

Use simple, low order filters (fewer operations per filtered output).

This results in a computational gain compared to the classical approach, which employs a unique high order filter operating at a high sampling rate [128-132]. The concepts of decimation and interpolation are described in the following subsections.

7.2.1 Decimation

Decimation is the process of down sampling a signal to decrease its effective sampling rate. A reduction of the sampling rate by a factor D is achieved by keeping every Dth sample from the input sequence xn. Discarding D-1 of every D input samples reduces the original sampling rate FS by a factor of D. It causes the input frequencies above one half of FS/D to be aliased into the in-band spectrum Bwin of the decimated signal xdn. Here, Bwin of xdn ranges between [0; FS/(2.D)]. To mitigate this aliasing effect, the input signal must be low-pass filtered prior to the down sampling process. In order to achieve an alias-free output, the low-pass filter cut-off frequency FC should be chosen less than or at most equal to FS/(2.D). The process of decimating xn in order to achieve xdn is formally expressed by Equation 7.3 and the decimator block diagram is shown in Figure 7.1. If N is the length of xn then the length of xdn will be N/D.

xdn = xn.D (7.3)

[Block diagram: xn → low-pass filter hdk → xfn → down sampler D → xdn]

Figure 7.1. The decimation process.


A benefit of the decimation process is that the low-pass filter may be designed to operate at the decimated sample rate rather than at the faster input data rate. It can be done by using a FIR filter structure and by noting that the outputs associated with the D-1 discarded samples need not be computed. The process is formally expressed as follows.

xdn = Σ_{k=0..PD} hdk . xn.D-k (7.4)

Here, PD is the order of the filter designed to operate at FS/D. Note that, for the same design specifications (cut-off frequency, transition-band width, pass-band and stop-band ripples), if P is the order of the filter designed to operate at FS, then the relation between PD and P is: PD = P/D.

By considering Equation 7.4, it is clear that the filter in fact uses the down sampled signal. Thus, the operations of down sampling and low-pass filtering have been embedded. It is done in such a way that both the number of samples and the computations to generate one filtered output are reduced by a factor D. It shows how the decimation can lead to a computationally efficient solution [131, 132].
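The embedded filter-and-downsample of Equation 7.4 can be sketched as follows; the 4-tap averaging filter and D = 3 are illustrative.

```python
import numpy as np

def decimate(h_d, x, D):
    """Embedded filtering and down sampling (Equation 7.4):
    xd[n] = sum_{k=0..PD} h_d[k] * x[n*D - k].
    Only the retained outputs are computed, so both the sample count
    and the per-output cost drop by the factor D."""
    n_out = len(x) // D
    xd = np.zeros(n_out)
    for n in range(n_out):
        for k in range(len(h_d)):
            if n * D - k >= 0:
                xd[n] += h_d[k] * x[n * D - k]
    return xd

h_d = np.ones(4) / 4.0              # illustrative low-pass taps
x = np.arange(12, dtype=float)
xd = decimate(h_d, x, D=3)
```

The result equals the full-rate filtered signal taken at every Dth sample, computed without ever producing the discarded outputs.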

7.2.2 Interpolation

The process of increasing the sampling rate is called interpolation or up sampling. An up sampling by a factor of U is achieved by inserting U-1 interpolated observations between each pair of input samples. In the simplest case, each interpolated observation is of zero value. Up sampling introduces images of the input xn spectrum into the interpolated output xun spectrum, between the original frequency FS and the higher interpolated frequency FuS. To overcome this effect, xun must be low-pass filtered to remove the image frequencies which would disturb the subsequent signal processing steps. The process of interpolating xn in order to achieve xun is formally expressed by Equation 7.5 and the interpolator block diagram is shown in Figure 7.2. If N is the length of xn then the length of xun will be N.U.

xun = xn/U (7.5)

[Block diagram: xn → up sampler U → xûn → low-pass filter huk → xun]

Figure 7.2. The interpolation process.

A benefit of the zero padded interpolation is that the low-pass filter may be designed to operate at the input sample rate, rather than at the faster output sample rate. It can be done by using a FIR filter structure and by noting that the inputs associated with the U-1 inserted samples have zero values. The process is formally expressed as follows.


xun = Σ_{k=0..PU} huk . xûn-k (7.6)

Here, PU is the order of the filter designed to operate at FuS/U. Note that, for the same design specifications (cut-off frequency, transition-band width, pass-band and stop-band ripples), if P is the order of the filter designed to operate at FuS, then the relation between PU and P is: PU = P/U.

By considering Equation 7.6, one can see that the filter in fact employs only the original data samples xn. Thus, the operations of up sampling and anti-image filtering have been embedded. It is done in such a way that both the number of samples and the computations to generate one filtered output are reduced by a factor U. It is achieved by taking advantage of the fact that U-1 out of U samples have zero values [131, 132].
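Zero-stuffing interpolation (Equations 7.5 and 7.6) can be sketched as follows; the 3-tap filter and U = 2 are illustrative. For clarity the sketch filters the zero-padded sequence explicitly, even though the zero inputs are what an optimized implementation would skip.

```python
import numpy as np

def interpolate(h_u, x, U):
    """Zero-stuffing interpolation: insert U-1 zeros between input
    samples (Equation 7.5), then anti-image filter (Equation 7.6).
    Since U-1 out of U inputs are zero, only original samples ever
    contribute, which is the source of the factor-U saving."""
    xu_hat = np.zeros(len(x) * U)
    xu_hat[::U] = x                  # the zero-padded sequence x^_n
    xu = np.zeros(len(xu_hat))
    for n in range(len(xu_hat)):
        for k in range(len(h_u)):
            if n - k >= 0:
                xu[n] += h_u[k] * xu_hat[n - k]
    return xu

h_u = np.ones(3) / 3.0              # illustrative anti-image taps
x = np.arange(6, dtype=float)
xu = interpolate(h_u, x, U=2)
```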

Figures 7.1 and 7.2 symbolise the decimation and the interpolation processes respectively. From these Figures, it appears that the decimation and the interpolation are carried out in a single step. However, in practice these operations are mostly carried out in multiple stages [128-130, 132].

In its simplest form, the multirate filtering approach replaces a time-invariant classical FIR filter with an anti-aliasing filter and down sampler, followed by an up sampler and anti-imaging filter. The process is shown in Figure 7.3.

[Block diagrams: top, the classical filter xn → hk → yn; bottom, the equivalent multirate chain xn → hdk → xfn → down sampler D → xdn → up sampler U → huk → yn]

Figure 7.3. The classical time-invariant FIR filter model (top) and its equivalent multirate FIR filter model (bottom).

Here, hdk and huk are specified by using the hk specifications. The employment of the reduced order hdk and huk, which operate at reduced sampling rates, is the key that ensures the computational efficiency of multirate filtering compared to the classical approach [131].
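The saving can be quantified with a back-of-the-envelope multiplication count. The sketch below assumes, as stated above, that the reduced filter orders scale as PD = PU = P/D (with U = D) and counts one multiplication per non-zero tap; it is a rough model, not the thesis's complexity derivation.

```python
def classical_ops(N, P):
    """Multiplications for a single order-P FIR filter run at the
    full rate over N samples."""
    return N * (P + 1)

def multirate_ops(N, P, D):
    """Rough multiplication count for the decimate/filter/interpolate
    chain of Figure 7.3, with U = D and orders PD = PU = P // D."""
    P_d = P // D
    dec = (N // D) * (P_d + 1)   # decimator: N/D outputs, P/D+1 taps each
    itp = N * (P_d + 1)          # interpolator: N outputs, P/D+1 non-zero taps
    return dec + itp
```

For example, with N = 10000 samples, P = 100 and D = 4, the chain needs roughly a third of the multiplications of the single full-rate filter.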

7.3 The Adaptive Rate Filtering

Following the idea of the multirate filtering approach, the adaptive rate filtering techniques are devised [45-47, 50, 51, 146]. The term adaptive rate points towards the proposed techniques' smart feature, which is to correlate the system parameters to the input signal local variations. This signal driven nature of the proposed techniques is achieved by smartly combining features of both the non-uniform and the uniform signal processing tools.

The LCADC, the ASA and the resampler form the basis of the proposed approach (cf. Figure 7.4). The input analog signal x(t) is acquired with the LCADC. The non-uniformly sampled signal obtained at the LCADC output can be used directly for digital filtering [13, 52]. However, in the studied case, the non-uniformity of the level crossing sampled signal is employed:

For selecting only the relevant parts of the non-uniformly sampled signal.

For extracting the input signal local features and adapting the system parameters accordingly.

The process of activity selection and local features extraction is named the ASA (cf. Chapter 5). Finally, the selected signal is resampled uniformly before proceeding towards a classical filtering operation. In combination, the LCADC, the ASA and the resampler make it possible to achieve the following.

Adaptive rate sampling (only a relevant number of samples to process).

Adaptive rate filtering (only a relevant number of operations per filtered output).

The achievement of the above defined goals assures a drastic computational gain of the proposed filtering techniques compared to the classical ones. The steps to realize it are detailed in the following subsections.

[Block diagram: analog signal y(t) → band-pass filter [fmin; fmax] → band-pass filtered analog signal x(t) → LCADC → non-uniformly sampled signal (xn, tn) → ASA → selected signal (xs, ts) plus local parameters for Wi → resampler → uniformly sampled signal (xrn, trn) → adapted FIR filter for Wi → filtered signal (yn); a parameters adaptor derives the adapted parameters for Wi from the reference parameters and the reference FIR filters bank h1k, h2k, ..., hQk.]

Figure 7.4. Block diagram of the proposed adaptive rate filtering techniques.

7.3.1 Adaptive Rate Sampling

The process of achieving the adaptive rate sampling is similar in all the proposed techniques. It is realized by employing the interesting features of the LCADC and the ASA [44-47, 50, 51, 146]. From Chapter 4, it is clear that the LCADC sampling frequency is correlated to the local variations of the input signal x(t). The higher the input signal frequency, the more quickly it varies and hence the more thresholds it crosses in a given time period. This is the reason why the high frequency x(t)


parts are acquired at higher rates and vice versa [45-47]. On the other hand, no sample is taken for the static signal parts. This approach is especially well suited for signals which remain static most of the time and vary sporadically during brief moments, like electrocardiograms, phonocardiograms, seismic signals, etc. For such signals, the average sampling frequency in the proposed case remains lower than the required sampling frequency in the classical case. This smart sampling reduces the system activity and at the same time improves the accuracy of the signal acquisition process [45, 46].

The non-uniformly sampled signal obtained at the LCADC output is selected and windowed by the ASA (cf. Chapter 5). Let Fsi represent the sampling frequency for the ith selected window Wi. Fsi can be specific, depending upon Li and the slope of the x(t) part laying within Wi [44]. According to Section 5.2, Fsi can be computed as follows.

Fsi = Ni / Li (7.7)

Here, Ni is the number of samples laying in Wi and Li is the length in seconds of Wi. The upper and the lower bounds on Fsi are posed by the system maximum and minimum sampling frequencies, Fsmax and Fsmin respectively (cf. Section 5.1). For the continuation of the chapter, Fsmax and Fsmin are redefined below by Equations 7.8 and 7.9 respectively.

Fsmax = 2.fmax.(2^M − 1) (7.8)

Fsmin = 2.fmin.(2^M − 1) (7.9)
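Equations 7.7 to 7.9 can be sketched as follows; the numerical values in the usage are illustrative.

```python
def local_sampling_freq(Ni, Li):
    """Fs_i = N_i / L_i (Equation 7.7): number of samples in the ith
    selected window divided by its length in seconds."""
    return Ni / Li

def lc_adc_bounds(fmin, fmax, M):
    """System sampling frequency bounds for an M-bit level-crossing ADC
    (Equations 7.8 and 7.9): a sinusoid at frequency f crosses at most
    2.(2^M - 1) thresholds per period."""
    Fs_min = 2 * fmin * (2 ** M - 1)
    Fs_max = 2 * fmax * (2 ** M - 1)
    return Fs_min, Fs_max
```

For instance, a window of 1500 samples over one second gives Fsi = 1500 Hz, and a 3-bit converter observing a [1; 10] Hz band is bounded by Fsmin = 14 Hz and Fsmax = 140 Hz.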

In order to perform a classical filtering algorithm, the selected signal laying in Wi is uniformly resampled before proceeding to the filtering stage (cf. Figure 7.4). The characteristics of the selected signal part laying in Wi are employed to choose its resampling frequency Frsi. Once the resampling is done, there are Nri samples in Wi. The choice of Frsi is critical and this procedure is detailed in the following.

The way of realizing the adaptive rate filtering is distinct in each proposed technique. The procedure for each technique is detailed in the following subsections.

7.3.2 The ARCD (Activity Reduction by Chosen Filter Decimation) Technique

The block diagram of the ARCD technique is shown in Figure 7.5. It is split into two filtering cases, which will be described in Sections 7.3.2.1 and 7.3.2.2 respectively. In this case, a reference filters bank is designed offline for a specific application, by exploiting the x(t) statistical characteristics. While designing the reference filters, the worst case is taken into account. Here, the worst case points towards the system maximum possible sampling frequency Fsmax (cf. Equation 7.8).

The bank of reference filters with appropriate specifications is designed for a set of reference
frequencies Fref. Fsmax defines the upper bound on Fref. The choice of the lower bound on Fref
depends upon the filter transition band and the effective value of Fsmin. Let [Fcmin; Fcmax] define
the filter transition band. Now, if the condition Fsmin ≥ 2.Fcmax holds, then Fsmin is chosen

Saeed Mian Qaisar Grenoble INP 98

Page 117: INSTITUT POLYTECHNIQUE DE GRENOBLEtima.univ-grenoble-alpes.fr/publications/files/th/2009/... · 2009-06-02 · INSTITUT POLYTECHNIQUE DE GRENOBLE N° attribué par la bibliothèque

Part-II Chapter 7 Signal Driven Adaptive Rate Filtering

as the lower bound on Fref. In the opposite case, an appropriate lower bound on Fref can be chosen,
depending upon the application requirements.

By supposing that Fsmin defines the lower bound on Fref, the range within which the different Fref
elements lie is [Fsmin ; Fsmax]. The elements of Fref can be distributed in various ways within this
range. In the studied case, they are uniformly spaced. If Q is the length of Fref, then
the complete set is calculated as shown in Equation 7.10.

Fref = {Fsmin, Fsmin + Δ, Fsmin + 2.Δ, ....., Fsmin + (Q - 1).Δ = Fsmax} (7.10)

In Equation 7.10, Δ is an offset and its value can be calculated by using Equation 7.11.

Δ = (Fsmax - Fsmin) / (Q - 1) (7.11)
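The construction of the reference frequency set in Equations 7.10 and 7.11 can be sketched as follows; `reference_frequencies` is an illustrative name, and the numerical values are those of the example in Section 7.4 (Fsmin = 210 Hz, Fsmax = 14000 Hz, Q = 11):

```python
def reference_frequencies(fs_min, fs_max, Q):
    """Equations 7.10 and 7.11: Q uniformly spaced reference frequencies
    in [Fsmin, Fsmax], separated by the offset delta = (Fsmax - Fsmin)/(Q - 1)."""
    delta = (fs_max - fs_min) / (Q - 1)
    return [fs_min + q * delta for q in range(Q)]

# Example of Section 7.4: offset delta = (14000 - 210) / 10 = 1379 Hz.
f_ref = reference_frequencies(210.0, 14000.0, 11)
```

The resulting set {210, 1589, 2968, ..., 14000} Hz matches the Frefc column of Table 7.2.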

[Figure 7.5 shows the ARCD signal chain: the band-pass filtered analog signal x(t) is digitized by the LCADC into a non-uniformly sampled signal (xn, tn), selected by the ASA into (xs, ts), uniformly resampled at Frsi into (xrn, trn), and filtered by the FIR filter for Wi to deliver (xfn). A reference FIR filters bank (h1k, h2k, ..., hQk) designed for Fref feeds a filter selector; the filtering case selector (FDi) drives a switch that either passes the chosen reference filter hck unaltered (state 0) or routes it through the decimator of hck coefficients and the scalar of hij (state 1) to produce the decimated and scaled filter hij.]

Figure 7.5. Block diagram of the ARCD technique. '___' represents the common blocks and the signal flow
used in both filtering cases, '…..' represents the signal flow used only in the 1st filtering case and '----'
represents blocks and the signal flow used only in the 2nd filtering case.

During the online processing, an appropriate reference filter is chosen for Wi from the pre-
computed filters bank. This choice is made on the basis of Fref and the effective value of Fsi. The
index notation c is introduced here in order to distinguish the chosen
reference filter from the complete set of reference filters. The reference filter whose corresponding
value of Frefc is closest to and greater than or equal to Fsi is chosen for Wi. This selects a
filter of relevant order for the resampled data lying in Wi.
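The online selection rule (smallest Frefc greater than or equal to Fsi) can be sketched as below; the function name and the linear scan are illustrative choices, not the thesis implementation:

```python
def choose_reference_filter(fs_i, f_ref):
    """Return the index c of the reference frequency closest to and >= Fsi.

    f_ref is assumed sorted in ascending order. Since Fsi <= Fsmax and
    Fsmax == f_ref[-1] by construction, a match always exists.
    """
    for c, f in enumerate(f_ref):
        if f >= fs_i:
            return c
    return len(f_ref) - 1

# Example bank of Section 7.4: {210, 1589, ..., 14000} Hz.
f_ref = [210.0 + q * 1379.0 for q in range(11)]
c = choose_reference_filter(6000.0, f_ref)  # Frefc = 7105 Hz, as in Table 7.5
```

For the third window of the example (Fsi = 464 Hz) the same rule yields Frefc = 1589 Hz, again matching Table 7.5.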

During online computation, Frefc and the local sampling frequency Fsi of Wi are used to define
the local resampling frequency Frsi and a decimation factor di. Frsi is employed to uniformly
resample the selected signal lying in Wi, whereas di is employed to decimate hck for filtering Wi.
Here, hck represents the chosen reference filter for Wi and k indexes the chosen reference filter
coefficients.

hck is sampled at Frefc during offline processing. Frefc and Frsi should match in order to perform
a proper filtering operation. Keeping Frefc and Frsi coherent leads to two
different filtering cases, which are explained in the following subsections.


7.3.2.1 1st Filtering Case

This case holds if Fsi is equal to Frefc. In this case, the filtering decision for the ith selected

window FDi is set to 0 and it drives the switch to state 0 (cf. Figure 7.5). hck remains unaltered

and Frsi is chosen equal to Frefc.

7.3.2.2 2nd Filtering Case

The 2nd Filtering case is valid if the condition given by Expression 7.12 is true.

Fsi < Frefc (7.12)

For this case, FDi is set to 1 and it drives the switch to state 1 (cf. Figure 7.5). In this case, Frsi is
chosen equal to Fsi and hck is decimated online in order to match Frefc to Frsi. This decimation of
hck results in a reduced number of operations to deliver a filtered sample [45-47]. Hence, it
improves the proposed technique's computational efficiency.

Let the x(t) Nyquist sampling frequency be FNyq = 2.fmax. Then, in the case when Frsi is less than FNyq,
it may appear that the data lying in Wi is resampled at a frequency that will result in an aliased
resampled signal. In the studied case, the x(t) amplitude dynamics Δx(t) are always adapted to match
the amplitude range of the LCADC, 2.Vmax. According to [45, 46], if the local signal amplitude is
of the order of the LCADC amplitude range 2.Vmax, then for a suitable (application dependent) choice of the LCADC
resolution M the signal crosses enough consecutive thresholds. Therefore,
it is locally oversampled in time with respect to its local bandwidth. Hence, there is no aliasing
problem. This statement is further clarified by the results summarized in Table 7.4.

In order to decimate hck, the decimation factor di for Wi is calculated online by employing
Equation 7.13.

di = Frefc / Frsi (7.13)

di can be specific to each selected window, depending upon Frefc and Frsi. The process of
decimating hck differs for an integral and a fractional value of di. Therefore, a test on di is made
by computing Di = floor(di) and verifying whether Di = di. Here, the floor operation delivers only the
integral part of di. If the answer is yes, then hck is decimated with Di, as expressed in Equation
7.14.

hji = hcDi.k (7.14)

Equation 7.14 shows that the decimated filter impulse response for the ith selected window, hji, is
obtained by picking every (Di)th coefficient from hck. Here, j indexes the decimated filter
coefficients. If the order of hck is Pc, then the order of hji is given as: Pi = Pc / Di.

In the case when di is fractional, the ARCD technique converts di into Di. Various methods can be
adopted to deal with the fractional value of di and to keep the sampling frequency of the
decimated filter coherent with the resampling frequency of Wi. The employed procedure is
depicted in Figure 7.6.


[Figure 7.6 flowchart: if the test Fsi < Frefc is false, then Frsi = Frefc and hji = hck. Otherwise, di = Frefc / Frsi and Di = floor(di) are computed; if Di = di then Frsi = Fsi, else Frsi = Frefc / Di; in both of these branches hji = Di.hcDi.k.]

Figure 7.6. Flowchart of the ARCD technique.

In Figure 7.6, the values of di and Frsi are correlated. First, di is calculated by using Frsi and then a
decision is made, on the basis of di, whether an adjustment of Frsi is required or not. If di is an integer
then Frsi is kept the same. Otherwise an increment in Frsi is made, depending on the fractional part
of di. The reason behind incrementing Frsi is that in this case Di < di, hence it makes Frsi > Fsi. This
fulfils both of the above stated goals: converting di into an integer and keeping the sampling frequency of the
decimated filter coherent with Frsi.

It is clear that for any value of di (integer or fractional), the decimation of hck is achieved with Di in the
ARCD technique. A simple decimation causes a reduction of the decimated filter energy
compared to the reference one, which would lead to an attenuated version of the filtered signal. Di is a
good estimate of the ratio between the energy of the chosen reference filter and that of the
decimated one. Thus, this effect of decimation is compensated by scaling hji with Di (cf. Equation
7.15).

hji = Di.hcDi.k (7.15)
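The ARCD per-window procedure of Figure 7.6 and Equations 7.13 to 7.15 can be summarized in a short Python sketch. The function name is illustrative and the filter is held as a plain list of coefficients; the fractional-di branch sets Frsi = Frefc / Di, consistent with the window-3 values of Table 7.5 (Frsi = 1589/3 ≈ 530 Hz):

```python
import math

def arcd_filter_for_window(fs_i, fref_c, hc):
    """Return (Frsi, decimated-and-scaled filter) for one selected window."""
    if fs_i >= fref_c:                 # 1st filtering case: Fsi equals Frefc
        return fref_c, list(hc)
    d_i = fref_c / fs_i                # Equation 7.13, with the initial Frsi = Fsi
    D_i = math.floor(d_i)
    # For a fractional d_i, increment Frsi to Frefc / Di so that the new
    # decimation factor Frefc / Frsi is exactly the integer Di.
    frs_i = fs_i if D_i == d_i else fref_c / D_i
    # Equations 7.14 and 7.15: keep every (Di)-th coefficient, scale by Di.
    h_j = [D_i * c for c in hc[::D_i]]
    return frs_i, h_j
```

For example, window 3 of Section 7.4 (Fsi = 464 Hz, Frefc = 1589 Hz) gives Di = 3, so the 82-coefficient reference filter is reduced to 28 scaled coefficients.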

7.3.3 The ARCR (Activity Reduction by Chosen Filter Resampling) Technique

The block diagram of the ARCR technique is shown in Figure 7.7. The steps for achieving the
adaptive rate filtering are common to both the ARCR and the ARCD techniques, except for the
fractional value of di. In the ARCD technique the fractional di is converted into an integral Di,
which is later employed for decimating hck. In contrast, in the ARCR technique the fractional di
is directly employed to decimate hck.


[Figure 7.7 shows the ARCR signal chain. It is identical to the ARCD chain of Figure 7.5, except that the decimator of hck coefficients is replaced by a decimator/resampler driven by the adapted Frefc, and the fractional di (instead of Di) controls the decimation and the scaling of hij.]

Figure 7.7. Block diagram of the ARCR technique. '___' represents the common blocks and the signal flow
used in both filtering cases, '…..' represents the signal flow used only in the 1st filtering case and '----'
represents blocks and the signal flow used only in the 2nd filtering case.

In the case of the ARCR technique no adjustment of the resampling frequency is required, so Frsi remains equal to Fsi. The
process of matching Frefc with Frsi requires a fractional decimation of hck, which is achieved by
resampling hck at Frsi. For the ARCR technique the hji scaling is performed with di. The complete
procedure of obtaining Frsi and hji for the ARCR is described in Figure 7.8.
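A sketch of the ARCR per-window procedure follows. The thesis only specifies a generic Resample(hck @ Frsi) operation; linear interpolation is used here purely as an illustrative stand-in, and the function name is an assumption:

```python
def arcr_filter_for_window(fs_i, fref_c, hc):
    """Return (Frsi, fractionally decimated and scaled filter) for one window."""
    if fs_i >= fref_c:                    # 1st filtering case
        return fref_c, list(hc)
    frs_i = fs_i                          # Frsi stays equal to Fsi
    d_i = fref_c / frs_i                  # fractional decimation factor
    h_j = []
    j = 0
    while j * d_i <= len(hc) - 1:
        t = j * d_i                       # position on the original Frefc grid
        k = int(t)
        frac = t - k
        if k + 1 < len(hc):               # linear interpolation between neighbours
            v = (1.0 - frac) * hc[k] + frac * hc[k + 1]
        else:
            v = hc[k]
        h_j.append(d_i * v)               # scale by di to preserve the filter energy
        j += 1
    return frs_i, h_j
```

When di happens to be an integer, this reduces to the plain decimate-and-scale of the ARCD case.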

[Figure 7.8 flowchart: if the test Fsi < Frefc is false, then Frsi = Frefc and hji = hck. Otherwise Frsi = Fsi and di = Frefc / Frsi, Di = floor(di) are computed; if Di = di then hji = di.hcdi.k, else hji = Resample(hck @ Frsi) followed by the scaling hji = di.hji.]

Figure 7.8. Flowchart of the ARCR technique.

7.3.4 The ARRD (Activity Reduction by Reference Filter Decimation) Technique

The ARRD technique is a modification of the ARCD technique. In this case, a single pre-designed
reference FIR filter is employed instead of the bank of reference FIR filters employed in
the ARCD case (cf. Figure 7.5). This results in two advantages compared to the ARCD technique.
Firstly, it reduces the system memory requirements and secondly, it avoids the online filter
selection process for Wi.


The block diagram of the ARRD technique is shown in Figure 7.9. The principle of the ARRD is to
design offline a reference FIR filter for a reference sampling frequency Fref, which satisfies the
Nyquist sampling criterion for x(t). The process is clear from Expression 7.16.

Fref ≥ FNyq = 2.fmax (7.16)

Here, Fref is a single reference frequency, whereas Fref in Equation 7.10 represents a set of
reference frequencies. In order to make them distinguishable, the term 'ref' is written as an index in
Expression 7.16.

[Figure 7.9 shows the ARRD signal chain. It is identical to the ARCD chain of Figure 7.5, except that the reference FIR filters bank and the filter selector are replaced by a single reference FIR filter hk, sampled at Fref; the filtering case selector (FDi) still switches between passing hk unaltered (state 0) and decimating and scaling it (state 1) to produce hij.]

Figure 7.9. Block diagram of the ARRD technique. '___' represents the common blocks and the signal flow
used in both filtering cases, '…..' represents the signal flow used only in the 1st filtering case and '----'
represents blocks and the signal flow used only in the 2nd filtering case.

In the ARRD technique, Frsi is chosen by employing the values of Fref and Fsi. Frsi can be specific
to Wi depending upon Fsi. The reference filter impulse response hk is sampled at Fref during
offline processing. Here, k indexes the reference filter coefficients. Fref and Frsi should match
in order to perform a proper online filtering. The approach of keeping them coherent is explained
below.

7.3.4.1 1st Filtering Case

This case holds if the following condition becomes true.

Fsi ≥ Fref (7.17)

For this case, FDi is set to 0 and it drives the switch to state 0 (cf. Figure 7.9). Frsi is chosen equal
to Fref and hk remains unchanged. This choice of Frsi resamples the selected data lying
in Wi close to the Nyquist rate. Hence, it improves the proposed technique's computational
efficiency in two ways: firstly, by avoiding unnecessary interpolations during the data
resampling process and secondly, by avoiding the processing of unnecessary samples during the
post filtering process.


7.3.4.2 2nd Filtering Case

In this case, the following condition becomes true.

Fsi < Fref (7.18)

For this case, FDi is set to 1 and it drives the switch to state 1 (cf. Figure 7.9). Here, Frsi is chosen
equal to Fsi and hk is decimated online in order to match Fref to Frsi. This online decimation
reduces the reference filter order for Wi, which reduces the number of operations to deliver a
filtered sample [45-47]. Hence, it improves the computational efficiency.

Similar to the ARCD technique, in the ARRD case a decimation factor di for Wi is calculated
online by employing the following equation.

di = Fref / Frsi (7.19)

Di = floor(di) is computed and it is then verified whether Di = di. If so, then
hk is decimated with Di; the process is clear from Equation 7.20. Here, hji is the decimated filter
impulse response for Wi.

hji = hDi.k (7.20)

For a fractional di, hk is again decimated by employing Di. This calls for an increment of Frsi in
order to keep Fref and Frsi coherent: Frsi is incremented as Frsi = Fref / Di, which makes the new
decimation factor exactly the integer Di. Similar to the ARCD
technique, in the ARRD case the online hk decimation effect is compensated by scaling with Di.
The process is clear from the following equation.

hji = Di.hDi.k (7.21)

The complete procedure of obtaining Frsi and hji for the ARRD is described in Figure 7.10.

[Figure 7.10 flowchart: if the test Fsi < Fref is false, then Frsi = Fref and hji = hk. Otherwise, di = Fref / Frsi and Di = floor(di) are computed; if Di = di then Frsi = Fsi, else Frsi = Fref / Di; in both of these branches hji = Di.hDi.k.]

Figure 7.10. Flowchart of the ARRD technique.


7.3.5 The ARRR (Activity Reduction by Reference Filter Resampling) Technique

Similar to the ARRD, the ARRR is a modification of the ARCR technique. Its block diagram is
shown in Figure 7.11. The process of achieving the adaptive rate filtering is common to both the
ARRR and the ARRD techniques, except for the fractional value of di. Contrary to the ARRD, the
ARRR decimates hk by employing the fractional di.

[Figure 7.11 shows the ARRR signal chain. It is identical to the ARRD chain of Figure 7.9, except that the decimator of hk coefficients is replaced by a decimator/resampler driven by the adjusted Fref, and the fractional di (instead of Di) controls the decimation and the scaling of hij.]

Figure 7.11. Block diagram of the ARRR technique. '___' represents the common blocks and the signal
flow used in both filtering cases, '…..' represents the signal flow used only in the 1st filtering case and '----'
represents blocks and the signal flow used only in the 2nd filtering case.

In the case of the ARRR technique no adjustment of the resampling frequency is required, so Frsi remains equal to Fsi. The
process of matching Fref with Frsi requires a fractional decimation of hk, which is achieved by
resampling hk at Frsi. In order to compensate the decimation effect, in the ARRR technique the hji
scaling is performed with di. The complete procedure of obtaining Frsi and hji for the ARRR
technique is described in Figure 7.12.

[Figure 7.12 flowchart: if the test Fsi < Fref is false, then Frsi = Fref and hji = hk. Otherwise Frsi = Fsi and di = Fref / Frsi, Di = floor(di) are computed; if Di = di then hji = di.hdi.k, else hji = Resample(hk @ Frsi) followed by the scaling hji = di.hji.]

Figure 7.12. Flowchart of the ARRR technique.


7.4 Illustrative Example

In order to illustrate the interesting features of the proposed filtering techniques, the input signal
x(t) shown on the left part of Figure 7.13 is employed. Its total duration is 20 seconds and it
consists of three active parts. A summary of the x(t) activities is given in Table 7.1.

Activity Signal Component Length (Sec)

1st 0.5.sin(2.pi.20.t) + 0.4.sin(2.pi.1000.t) 0.5

2nd 0.45.sin(2.pi.25.t) + 0.45.sin(2.pi.150.t) 1.0

3rd 0.6.sin(2.pi.15.t) + 0.3.sin(2.pi.100.t) 1.0

Table 7.1. Summary of the input signal active parts.

Figure 7.13. The input signal (left) and the selected signal obtained with the ASA (right).

In Figure 7.13, the x-axis represents the time in seconds and the y-axis represents the input
signal amplitude in volts. Table 7.1 shows that x(t) is band limited between fmin = 15 Hz and
fmax = 1000 Hz. In this case, x(t) is digitized by employing a 3-bit resolution AADC [43, 95].
Thus, for the given M the corresponding minimum and maximum sampling frequencies are
Fsmin = 210 Hz and Fsmax = 14000 Hz (cf. Equations 7.8 and 7.9). The AADC amplitude range
2Vmax = 1.8 V is chosen, which results in a quantum q = 0.2571 V (cf. Equation 4.24).
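These example figures can be reproduced with a few lines of arithmetic (the variable names are illustrative; the quantum formula q = 2Vmax / (2^M - 1) is inferred from the cited values):

```python
M = 3                               # AADC resolution in bits
v_range = 1.8                       # amplitude range 2*Vmax, in volts
q = v_range / (2 ** M - 1)          # quantum: 1.8 / 7 ≈ 0.2571 V (cf. Equation 4.24)
fs_min = 2 * 15 * (2 ** M - 1)      # Equation 7.9 with fmin = 15 Hz  -> 210 Hz
fs_max = 2 * 1000 * (2 ** M - 1)    # Equation 7.8 with fmax = 1000 Hz -> 14000 Hz
```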

From Table 7.1, it is clear that each x(t) active part has a low and a high frequency component.
Let us assume that for each activity the low frequency component is the signal of interest and the
high frequency component is unwanted noise. Therefore, in order to filter out the high
frequency component from each activity, the following procedures are adopted in the different
proposed filtering techniques.

In the case of the ARCD and the ARCR techniques, a bank of eleven reference low-pass filters is
implemented by employing the standard Parks-McClellan algorithm. The chosen Fcmax is equal to
80 Hz in this case, hence the condition Fsmin ≥ 2.Fcmax remains true for this example. Following
it, Fref can be calculated by employing Equation 7.10. The value of the offset Δ is calculated by
employing Equation 7.11 and is 1379 Hz for this example. Parameters of the reference filters
bank are summarized in Table 7.2, where Pc is the chosen reference filter order for Wi.


Cut-off Freq (Hz)   Transition Band (Hz)   Pass-Band Ripples (dB)   Stop-Band Ripples (dB)   Frefc (Hz)   Pc
30                  30~80                  -25                      -80                      210          8
30                  30~80                  -25                      -80                      1589         81
30                  30~80                  -25                      -80                      2968         150
30                  30~80                  -25                      -80                      4347         220
30                  30~80                  -25                      -80                      5726         290
30                  30~80                  -25                      -80                      7105         360
30                  30~80                  -25                      -80                      8484         430
30                  30~80                  -25                      -80                      9863         499
30                  30~80                  -25                      -80                      11242        569
30                  30~80                  -25                      -80                      12621        639
30                  30~80                  -25                      -80                      14000        709

Table 7.2. Summary of the reference filters bank parameters, implemented for the ARCD and the ARCR.

In the case of the ARRD and the ARRR techniques, a low-pass reference FIR filter is
implemented by employing the Parks-McClellan algorithm. As x(t) is band limited up to 1000 Hz,
by following the criterion given by Expression 7.16, Fref is chosen equal to 2500 Hz for this
example. The reference filter parameters are summarized in Table 7.3, where P is the reference
filter order.

Cut-off Freq (Hz)   Transition Band (Hz)   Pass-Band Ripples (dB)   Stop-Band Ripples (dB)   Fref (Hz)   P
30                  30~80                  -25                      -80                      2500        127

Table 7.3. Summary of the reference filter parameters, implemented for the ARRD and the ARRR.

The non-uniformly sampled signal obtained at the AADC output is selected and windowed by the
ASA. In order to apply the ASA, the reference window length Lref is chosen equal to 1 second; it
satisfies the boundary conditions for this example (cf. Section 5.1.1). The given
Lref delivers Nmax = 14000 samples (cf. Equation 5.7). The ASA delivers three selected
windows for the whole x(t) span of 20 seconds, which are shown on the right part of Figure 7.13.
The selected windows parameters are summarized in Table 7.4.

Wi    Li (Seconds)   Ni (Samples)   Fsi (Hz)
1st   0.49           3000           6000
2nd   0.99           1083           1083
3rd   0.99           464            464

Table 7.4. Summary of the selected windows parameters.

In the case of the ARCD and the ARCR techniques, a reference filter is chosen for each selected
window, depending upon the effective value of Fsi (cf. Section 7.3.2). The chosen values of Frefc
and the calculated values of Frsi, di, Nri and Pi for the ARCD and the ARCR techniques are


summarized in Tables 7.5 and 7.6 respectively. The procedure of calculating these values is clear

from Figures 7.6 and 7.8 respectively.

Wi    Fsi (Hz)   Frefc (Hz)   Frsi (Hz)   Nri (Samples)   Di   Pi
1st   6000       7105         7105        3552            1    360
2nd   1083       1589         1589        1589            1    81
3rd   464        1589         530         530             3    26

Table 7.5. Values of Frefc, Frsi, Nri, Di, and Pi for each selected window in the ARCD technique.

Wi    Fsi (Hz)   Frefc (Hz)   Frsi (Hz)   Nri (Samples)   di    Pi
1st   6000       7105         6000        3000            1.2   304
2nd   1083       1589         1083        1083            1.5   55
3rd   464        1589         464         464             3.4   23

Table 7.6. Values of Frefc, Frsi, Nri, di, and Pi for each selected window in the ARCR technique.

Tables 7.5 and 7.6 show that for the ARCD and the ARCR techniques, the 2nd filtering case
remains valid for all selected windows, with a fractional di. Hence, for the ARCD technique,
Frsi is increased for each selected window in order to achieve an integral decimation of hck.
In contrast, in the ARCR technique, Frsi remains equal to Fsi and hck is fractionally decimated for
Wi.

In the case of the ARRD and the ARRR techniques, the reference filter remains the same for each
selected window (cf. Section 7.3.4). The calculated values of Frsi, di, Nri and Pi for the ARRD
and the ARRR techniques are summarized in Tables 7.7 and 7.8 respectively. The procedures of
calculating these values are clear from Figures 7.10 and 7.12 respectively.

Wi    Fsi (Hz)   Fref (Hz)   Frsi (Hz)   Nri (Samples)   Di   Pi
1st   6000       2500        2500        1250            1    127
2nd   1083       2500        1250        1250            2    64
3rd   464        2500        500         500             5    26

Table 7.7. Values of Frsi, Nri, Di, and Pi for each selected window in the ARRD technique.

Wi    Fsi (Hz)   Fref (Hz)   Frsi (Hz)   Nri (Samples)   di    Pi
1st   6000       2500        2500        1250            1     127
2nd   1083       2500        1083        1083            2.3   54
3rd   464        2500        464         464             5.4   24

Table 7.8. Values of Frsi, Nri, di, and Pi for each selected window in the ARRR technique.


Tables 7.7 and 7.8 show that for the ARRD and the ARRR techniques, the first selected window
is an example of Fsi ≥ Fref, so it is tackled similarly by both techniques. In this case, Frs1 = Fref is
chosen and hk remains unaltered. For W2 and W3 the 2nd filtering case holds with a fractional di.
Hence, in the ARRD technique Frs2 and Frs3 are increased to achieve integral D2 and D3,
whereas in the ARRR technique Frs2 and Frs3 remain equal to Fs2 and Fs3 and hk is decimated
and scaled with the fractional d2 and d3.

Tables 7.4 to 7.8 jointly exhibit the interesting features of the proposed filtering techniques,
which are achieved by a smart combination of the non-uniform and the uniform signal processing
tools (cf. Figure 7.4). Fsi represents the sampling frequency adaptation following the local
variations of x(t). Ni shows that the relevant signal parts are locally over-sampled in time with
respect to their local bandwidths. Hence, there is no aliasing problem even if Fsi is less than the
overall x(t) Nyquist frequency [45-47]. Frsi shows the adaptation of the resampling frequency for
each selected window. This adaptation further adds to the computational gains by avoiding
unnecessary interpolations during the resampling process. Moreover, Nri shows how the
adjustment of Frsi avoids the processing of unnecessary samples during the post filtering process.
Pi represents the adaptation of the reference filter order for Wi. It enhances the proposed techniques'
computational gains by reducing the number of operations required to deliver a filtered signal. Li exhibits
the dynamic feature of the ASA, which is to correlate Lref with the signal activity lying in it [44].

Let us compare the above results with the corresponding classical case.
As x(t) is band limited up to 1000 Hz, choosing Fs = Fref = 2500 Hz in the classical case
satisfies the Nyquist sampling criterion. The filter remains time-invariant in this approach, so it
has to be designed for the worst case. For the chosen Fs and the same design parameters
summarized in Table 7.3, the Parks-McClellan design algorithm delivers a 127th order filter. Hence,
N = 20×2500 = 50000 samples have to be processed with the 127th order FIR filter. On the other
hand, in the proposed techniques the total number of resampled data points is much lower: 5671,
4547, 3000 and 2797 for the ARCD, the ARCR, the ARRD and the ARRR techniques
respectively. Moreover, the local filter orders in W2 and W3 are also lower than 127. This promises
a drastic computational efficiency of the proposed techniques compared to the classical one. A
detailed complexity comparison is made in the following section.

7.5 Computational Complexity

In this section the computational complexities of the proposed filtering techniques are deduced. A

complexity comparison of each proposed technique with the classical one is made. Moreover, a

comparison among different proposed techniques is also performed.

7.5.1 Complexity of the Classical FIR Filtering

In the classical case, Fs and P remain time-invariant regardless of the input signal local
variations. Therefore, they are chosen for the worst case. It is known that a P order FIR filter
computes P multiplications and P additions to deliver an output sample. If N is the number of samples,
then the total computational complexity C can be calculated by employing Equation 7.22.

then the total computational complexity C can be calculated by employing Equation 7.22.

tionsMultiplicaAdditions

NPNPC .. ! (7.22)
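Equation 7.22 and the classical-case numbers of Section 7.4 can be sketched as below (the function name is illustrative):

```python
def classical_fir_complexity(P, N):
    """Equation 7.22: a P-th order FIR filter costs P additions and
    P multiplications per output sample, over N samples in total."""
    additions = P * N
    multiplications = P * N
    return additions + multiplications

# Classical case of Section 7.4: Fs = 2500 Hz, 20 s of signal, 127th-order filter.
N = 20 * 2500                         # 50000 samples
c_classical = classical_fir_complexity(127, N)
```

This worst-case cost is the baseline against which the per-window adaptive costs of the proposed techniques are compared in the rest of the section.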


In the proposed filtering techniques, the sampling frequency and the filter order are not fixed;
both are adapted for each selected window, in accordance with the input signal local
variations. This adaptation process locally requires some extra operations for each selected
window. The computational complexities of the proposed techniques are calculated as follows.

7.5.2 Complexity of the ARCD Technique

In the ARCD technique, a reference filter hck is chosen for Wi. This requires Q
comparisons in the worst case, where Q is the length of the reference frequencies set Fref. The
filtering case selection requires one comparison between Frefc and Frsi. The data resampling
operation is also required before the filtering stage. In the 1st filtering case, Fsi = Frefc holds, hence
the resampling is performed at the original Fsi. In the 2nd filtering case, depending upon
the value of di, the resampling is performed at the same or an increased value of Fsi (cf. Figure
7.6).

In the studied case, the resampling is performed by employing the NNRI (Nearest Neighbour
Resampling Interpolation). The NNRI is chosen because of its simplicity, as it employs only one
non-uniform observation for each resampled one. Thus, it is efficient in terms of
computational complexity. Moreover, it provides an unbiased estimate of the original signal
variance. For this reason, it is also known as a robust interpolation method [121, 122]. The
detailed reasons for the inclination towards the NNRI are discussed in Section 6.2.1.2. The NNRI is
performed as follows.

For each interpolation instant trn, the interval of non-uniform samples [tn, tn+1] within which trn

lies is determined. Then the distance of trn to each of tn and tn+1 is computed, and the two

distances are compared to determine the nearer sample. For Wi, the complexity of the first step

is Ni+Nri comparisons and the complexity of the second step is 2Nri additions and Nri

comparisons. Hence, the total NNRI complexity for Wi becomes Ni+2Nri comparisons and 2Nri

additions.
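The two NNRI steps described above can be sketched as follows (an illustrative implementation; the names are ours, and a binary search locates the interval, whereas the Ni+Nri comparison count above corresponds to a sequential scan):

```python
import bisect

def nnri(t, x, tr):
    """Nearest Neighbour Resampling Interpolation (sketch).
    t  : sorted non-uniform sampling instants
    x  : corresponding amplitudes
    tr : uniform resampling instants
    Each resampled value copies the nearest non-uniform observation."""
    xr = []
    for trn in tr:
        j = bisect.bisect_right(t, trn)   # step 1: locate [t[j-1], t[j]]
        j = min(max(j, 1), len(t) - 1)
        # step 2: compare the two distances and keep the nearer sample
        xr.append(x[j - 1] if trn - t[j - 1] <= t[j] - trn else x[j])
    return xr
```

For instance, nnri([0.0, 0.3, 1.0], [1, 2, 3], [0.1, 0.9]) copies the first and the last non-uniform observations.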

In the case when Fsi < Frefc, the decimation of hck is required. In order to do so, di is computed

by performing a division between Frefc and Frsi. Di is calculated by employing a floor operation

on di. A comparison is made between Di and di. In the case when Di = di, the process of obtaining

hji is simple. It is achieved by picking every (Di)th coefficient from hck. This process is embedded

in the post-filtering operation (cf. Equation 7.4). This is the reason why its complexity is not taken

into account during the complexity evaluation process. In order to keep the energy of hji coherent

with hck, the hji coefficients are scaled with Di, which requires Pi multiplications.

In the case of a fractional di, the ARCD decimates hck by employing Di. Frsi is adjusted in order

to keep it coherent with Frefc, which requires one division (cf. Figure 7.6). Finally, a Pi-order

filter performs Pi·Nri multiplications and Pi·Nri additions for Wi. The combined computational

complexity of the ARCD technique, CARCD, is given by Equation 7.23.

CARCD = Σ(i=1..I) [ (2·α + β) divisions/floors + (Q + 1 + α + Ni + 2·Nri) comparisons + (2·Nri + Pi·Nri) additions + (α·Pi + Pi·Nri) multiplications ]    (7.23)


Here, i = 1, 2, 3, …, I represents the selected window index. α and β are multiplying factors: α

is 0 when Fsi = Frefc and 1 otherwise; β is 0 when di = Di and 1

otherwise.

7.5.3 Complexity of the ARCR Technique

The operation cost of the ARCD and the ARCR techniques is common, except for the case of a

fractional di (cf. Figures 7.6 and 7.8). In the ARCR technique, di is employed as the

decimation factor. The fractional decimation is achieved by resampling hck at Frsi. The

resampling is performed by employing the NNRI, which performs Pc + 2·Pi comparisons and 2·Pi

additions to deliver hji. The combined computational complexity of the ARCR technique, CARCR, is

given by Equation 7.24.

CARCR = Σ(i=1..I) [ 2·α divisions/floors + (Q + 1 + α + Ni + 2·Nri + β·(Pc + 2·Pi)) comparisons + (2·Nri + 2·β·Pi + Pi·Nri) additions + (α·Pi + Pi·Nri) multiplications ]    (7.24)
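The fractional decimation of hck can be sketched as follows (a simplified illustration: a nearest-coefficient pick stands in for the full NNRI on the impulse response, and the function name is an assumption):

```python
def resample_filter(hc, Fref_c, Frs_i):
    """Fractionally decimate a reference filter hc, designed for Fref_c,
    by resampling its impulse response at Frs_i, with di = Fref_c / Frs_i;
    the result is rescaled by di to keep the filter energy coherent."""
    d = Fref_c / Frs_i                     # fractional decimation factor di
    P_i = int(len(hc) / d)                 # decimated filter order Pi
    # nearest original coefficient for each new tap position
    hj = [hc[min(round(n * d), len(hc) - 1)] for n in range(P_i)]
    return [d * c for c in hj]
```

With an integral di, the pick degenerates to taking every (di)th coefficient, as in the ARCD.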

Here, the definitions of the parameters i, α and β are the same as in the case of the ARCD

technique (cf. Section 7.5.2).

7.5.4 Complexity of the ARRD Technique

In the ARRD technique, the choice of Frsi requires one comparison between Fref and Fsi. The data

resampling operation is performed with the NNRI. The NNRI performs Ni+2Nri comparisons and

2Nri additions for Wi (cf. Section 7.5.2).

In the case when Fsi < Fref, the decimation of hk is required. In order to do so, di is computed by

performing a division between Fref and Frsi. Di is calculated by employing a floor operation on di.

A comparison is made between Di and di.

In the case when Di=di, the process of obtaining hji is performed by simply picking every (Di)th

coefficient from hk. This operation is merged into the filtering operation (cf. Equation 7.4). This is

the reason why its complexity is not taken into account during the complexity evaluation process.

In the opposite case, when Di ≠ di, the fractional di must be converted into an integral one. The

complete procedure is depicted in Figure 7.10. The hk decimation is performed by employing Di. In

this case, Frsi is increased in order to keep it coherent with Fref, which requires one division.

Later on, hji is scaled with Di. The filter coefficient scaling performs Pi multiplications. Here, Pi

represents the decimated filter order. Finally, a Pi-order filter performs Pi·Nri multiplications and

Pi·Nri additions for Wi. The combined computational complexity of the ARRD technique, CARRD, is

given by Equation 7.25.

CARRD = Σ(i=1..I) [ (2·α + β) divisions/floors + (1 + α + Ni + 2·Nri) comparisons + (2·Nri + Pi·Nri) additions + (α·Pi + Pi·Nri) multiplications ]    (7.25)


Here, i = 1, 2, 3, …, I represents the selected window index. α and β are multiplying factors: α

is 0 when Fsi ≥ Fref and 1 otherwise; β is 0 when di = Di and 1

otherwise.

7.5.5 Complexity of the ARRR Technique

The computational complexity of the ARRR is similar to that of the ARRD technique, except for the

case when Di ≠ di. In this case, the ARRR technique employs the fractional di as the decimation

factor. The fractional decimation is achieved by resampling hk at Frsi. The resampling is

performed by employing the NNRI, which performs P + 2·Pi comparisons and 2·Pi additions to

deliver hji. The combined computational complexity of the ARRR technique, CARRR, is given by

Equation 7.26.

CARRR = Σ(i=1..I) [ 2·α divisions/floors + (1 + α + Ni + 2·Nri + β·(P + 2·Pi)) comparisons + (2·Nri + 2·β·Pi + Pi·Nri) additions + (α·Pi + Pi·Nri) multiplications ]    (7.26)

Here, the definitions of the parameters i, α and β are the same as in the case of the ARRD

technique (cf. Section 7.5.4).

7.5.6 Complexity Comparison of the Proposed Techniques with the Classical Approach

By comparing Equation 7.22 with Equations 7.23 to 7.26, it is clear that there are operations,

such as comparisons, divisions and floors, which the classical and the proposed techniques do

not share. In order to make them approximately comparable, it is assumed that a comparison has

the same processing cost as an addition, and that a division or a floor has the same processing

cost as a multiplication. Following these assumptions, comparisons are merged into the

additions count and divisions plus floors are merged into the multiplications count, during the

complexity evaluation process. Equations 7.23 to 7.26 can now be simplified as follows.

CARCD = Σ(i=1..I) [ (Ni + 4·Nri + Pi·Nri + Q + 1 + α) additions + (Pi·Nri + α·(Pi + 2) + β) multiplications ]    (7.27)

CARCR = Σ(i=1..I) [ (Ni + 4·Nri + Pi·Nri + Q + 1 + α + β·(Pc + 4·Pi)) additions + (Pi·Nri + α·(Pi + 2)) multiplications ]    (7.28)

CARRD = Σ(i=1..I) [ (Ni + 4·Nri + Pi·Nri + 1 + α) additions + (Pi·Nri + α·(Pi + 2) + β) multiplications ]    (7.29)

CARRR = Σ(i=1..I) [ (Ni + 4·Nri + Pi·Nri + 1 + α + β·(P + 4·Pi)) additions + (Pi·Nri + α·(Pi + 2)) multiplications ]    (7.30)

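Under these merging assumptions, the per-window counts of Equations 7.27 and 7.29 can be evaluated programmatically (a sketch derived from the operation counts of Sections 7.5.2 and 7.5.4; the function names are ours, and alpha, beta are the indicator factors defined with Equation 7.23):

```python
def arcd_cost(N, Nr, P, Q, alpha, beta):
    """Per-window ARCD cost (Equation 7.27): comparisons are counted
    as additions, divisions and floors as multiplications."""
    additions = N + 4 * Nr + P * Nr + Q + 1 + alpha
    multiplications = P * Nr + alpha * (P + 2) + beta
    return additions, multiplications

def arrd_cost(N, Nr, P, alpha, beta):
    """Per-window ARRD cost (Equation 7.29): identical, minus the Q
    comparisons of the reference filter bank search."""
    additions = N + 4 * Nr + P * Nr + 1 + alpha
    multiplications = P * Nr + alpha * (P + 2) + beta
    return additions, multiplications
```

The ARCR and ARRR counts differ only by the filter-resampling term in the additions and by the absence of β in the multiplications.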
By employing the results of the illustrative example, a computational comparison of the proposed

techniques with the classical one is made in terms of additions and multiplications counts. The

results are calculated for different x(t) time spans and are summarized in the following tables.


Time Span (Seconds)       L1     L2     L3     Total x(t) span
Gain in Additions         0.25   2.4    19.4   4.4
Gain in Multiplications   0.25   2.7    23.0   4.5

Table 7.9. Computational gain of the ARCD over the classical one for different time spans of x(t).

Time Span (Seconds)       L1     L2     L3     Total x(t) span
Gain in Additions         0.35   4.9    24.1   6.3
Gain in Multiplications   0.35   5.3    29.7   6.5

Table 7.10. Computational gain of the ARCR over the classical one for different time spans of x(t).

Time Span (Seconds)       L1     L2     L3     Total x(t) span
Gain in Additions         1.9    3.9    23.5   25.9
Gain in Multiplications   2.0    4.5    24.4   26.2

Table 7.11. Computational gain of the ARRD over the classical one for different time spans of x(t).

Time Span (Seconds)       L1     L2     L3     Total x(t) span
Gain in Additions         1.9    5.3    27.3   29.4
Gain in Multiplications   2.0    5.4    29.7   29.8

Table 7.12. Computational gain of the ARRR over the classical one for different time spans of x(t).

The above results show that for W1 the computational cost of the ARCD and the ARCR

techniques is higher than that of the classical approach. The reason for this higher computational

load is the higher values of Frs1 and P1 in the ARCD and the ARCR techniques compared to the

classical case (cf. Tables 7.5 and 7.6).

For the ARRD and the ARRR techniques, in the case of W1, the resampling frequency and the

filter order are the same as in the classical case (cf. Tables 7.7 and 7.8). Yet a gain is achieved by

the proposed techniques. It is due to the fact that the ASA correlates the window length to the

signal activity lying in it (0.5 second), whereas the classical case computes during the total

duration of Lref = 1 second.

For W2 and W3, the ARCD, the ARCR, the ARRD and the ARRR techniques remain efficient

compared to the classical approach. This is achieved by taking benefit of processing a smaller

number of samples along with lower filter orders (cf. Tables 7.5-7.8). While treating the whole

x(t) span of 20 seconds, the proposed techniques also take advantage of the idle x(t) parts, which

further improves their gains compared to the classical approach.


7.5.7 Complexity Comparison Among the Proposed Techniques

7.5.7.1 Comparison Between the ARCD and the ARCR Techniques

The difference between the ARCD and the ARCR techniques occurs in the case when

Fsi < Frefc and di is fractional (cf. Figures 7.6 and 7.8). In this case, the ARCD increments

Frsi in order to keep it coherent with Frefc. The increase in Frsi induces an increase in

Nri and Pi. Thus, in comparison to the ARCR, this technique increases the computational load of

the post-filtering operation, while keeping the decimation process of hck simpler.

The ARCR performs the hck resampling at Frsi. Thus, in comparison to the ARCD, this technique

increases the complexity of the hck decimation process, while keeping the computational load

of the post-filtering process lower than that of the ARCD.

A complexity comparison between the ARCD and the ARCR techniques is made in terms of

additions and multiplications by employing Equations 7.27 and 7.28 respectively. It shows

that the ARCR remains computationally efficient compared to the ARCD, in terms of additions

and multiplications, as long as the conditions given by expressions 7.31 and 7.32 remain true. Here, Nri

and Pi can be different for the ARCD and the ARCR techniques (cf. Tables 7.5 and 7.6).

[Nri·(Pi + 4)]^ARCD ≥ [Nri·(Pi + 4)]^ARCR + 4·Pi^ARCR + Pc    (7.31)

[Pi·(Nri + 1)]^ARCD + 1 ≥ [Pi·(Nri + 1)]^ARCR    (7.32)

For the studied example, d1, d2 and d3 are fractional, thus the ARCD and the ARCR proceed

differently. Conditions 7.31 and 7.32 remain true for W1, W2 and W3. Hence, the gains in additions

and multiplications of the ARCR are higher than those of the ARCD for each selected window

(cf. Tables 7.9 and 7.10).

7.5.7.2 Comparison Between the ARRD and the ARRR Techniques

The main difference between the ARRD and the ARRR techniques occurs in the case when

Fsi < Fref and di is fractional (cf. Figures 7.10 and 7.12). Similar to the ARCD approach, the

ARRD technique increments Frsi in order to keep it coherent with Fref. On the other

hand, analogous to the ARCR, hk is resampled at Frsi in the ARRR technique.

In continuation of the previous subsection, a complexity comparison between the ARRD and the

ARRR techniques is also made in terms of additions and multiplications by employing Equations

7.29 and 7.30 respectively. The ARRR remains computationally efficient compared to the ARRD,

in terms of additions and multiplications, as long as the conditions given by expressions 7.33 and

7.34 remain true.

[Nri·(Pi + 4)]^ARRD ≥ [Nri·(Pi + 4)]^ARRR + 4·Pi^ARRR + P    (7.33)

[Pi·(Nri + 1)]^ARRD + 1 ≥ [Pi·(Nri + 1)]^ARRR    (7.34)


In expressions 7.33 and 7.34, Nri and Pi can be different for the ARRD and the ARRR techniques

(cf. Tables 7.7 and 7.8). For the studied example, d2 and d3 are fractional, so the ARRD and

the ARRR process W2 and W3 differently. Conditions 7.33 and 7.34 remain true for both selected

windows. Hence, the gains in additions and multiplications of the ARRR are higher than those of

the ARRD for W2 and W3 (cf. Tables 7.11 and 7.12).

From the above discussion it is clear that, except for very specific situations, the ARCR and the ARRR

techniques will always remain less expensive than the ARCD and the ARRD techniques

respectively. They achieve this computational performance by employing the fractional

decimation of hck and hk (cf. Figures 7.8 and 7.12). The fractional decimation may lead to a

quality compromise of the ARCR and the ARRR techniques compared to the ARCD

and the ARRD ones respectively. This issue is addressed in Section 7.6.

The ARRD and the ARRR achieve a superior computational gain compared to the ARCD and the

ARCR. It is mainly because of the feature of resampling Wi at a reduced Fsi whenever

Fsi > Fref. On the contrary, in the ARCD and the ARCR the feature of Fsi reduction is not available: Wi

is always resampled at the same or an increased value of Fsi, which results in processing an

increased number of samples with a higher order filter compared to the ARRD and the ARRR

techniques.

7.6 Processing Error

This section deals with quantifying the approximation and the filtering errors of the proposed

filtering techniques. The proposed methods of computing these errors are detailed in the

following subsections.

7.6.1 Approximation Error

In the proposed techniques, the resampling is performed, which changes properties of the

resampled signal with respect to the original one [121, 122]. The resampled values are

approximated by employing the NNRI algorithm. The approximated values are erroneous due to

two effects: the time-amplitude pairs uncertainties occur due to the AADC finite timer and

threshold levels precisions, plus the interpolation error which occurs in the course of the uniform

resampling process [37, 38, 43, 46]. After these two operations, the Mean Approximation Error

for the ith selected window MAEi can be computed by employing the following Equation.

MAEi = (1/Nri) · Σ(n=1..Nri) |xon − xrn|    (7.35)
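Equation 7.35 is a mean absolute deviation and can be sketched directly (the names are ours; Table 7.13 reports the result in dB):

```python
def mean_approximation_error(xo, xr):
    """Mean Approximation Error MAEi (Equation 7.35): the average
    absolute deviation between the original samples xo_n, taken at the
    resampling instants tr_n, and the interpolated values xr_n."""
    Nr = len(xr)
    return sum(abs(o - r) for o, r in zip(xo, xr)) / Nr
```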

Here, xrn is the nth resampled observation, interpolated with respect to the time instant trn. xon is

the original sample value which would be obtained by sampling x(t) at trn. In the studied

example, discussed in Section 7.4, x(t) is analytically known. Thus it is possible to compute its

original sample values at any time instant, which allows us to compute the approximation error by

employing Equation 7.35. The results obtained for each selected window for the ARCD, the

ARCR, the ARRD and the ARRR techniques are summarized in Table 7.13.


Selected Window            W1      W2      W3
MAEi for the ARCD (dB)     -17.5   -19.1   -20.4
MAEi for the ARCR (dB)     -18.9   -19.6   -20.9
MAEi for the ARRD (dB)     -18.5   -19.3   -20.8
MAEi for the ARRR (dB)     -18.5   -19.6   -20.9

Table 7.13. Mean approximation error for each selected window for the ARCD, the ARCR, the ARRD and the ARRR techniques.

Table 7.13 shows the approximation error introduced by the proposed techniques. The results show

that this process is accurate enough for a 3-bit AADC. For higher precision applications, the

approximation accuracy can be improved by increasing the AADC time-amplitude resolution

along with the interpolation order [38, 46, 127, 146].

An increased AADC resolution M will result in an increased number of samples to be

processed and hence causes an increased load. Moreover, a higher order interpolator requires

more operations per approximated observation and so causes an increased complexity. Thus, an

improved accuracy can be achieved at the cost of an increased processing activity. Therefore, by

making a suitable compromise between the accuracy level and the computational load, an

appropriate solution can be devised for a targeted application.

7.6.2 Filtering Error

In the proposed filtering techniques, the online decimation of hck or hk is performed (cf. Figures

7.6, 7.8, 7.10 and 7.12). It can cause a degradation of the filtering precision. In order to evaluate this

phenomenon on our test signal, the following procedure is adopted.

A reference filtered signal is generated. In this case, instead of decimating hck or hk to obtain hji, a

specific filter him is directly designed for Wi. It is designed for Frsi by employing the same design

parameters for which the reference filters bank or the reference filter was designed, for the

ARCD and the ARCR or the ARRD and the ARRR techniques respectively (cf. Tables 7.2 and

7.3). The signal activity corresponding to Wi is sampled at Frsi with a high precision classical

ADC. This sampled signal is filtered by employing him. The filtered signal obtained in this way is

used as a reference for Wi and is compared with the results obtained by the proposed

techniques.

Let yn be the nth reference filtered sample and ŷn the nth filtered sample obtained by one of the

proposed filtering techniques. Then, the mean filtering error for the ith selected window, MFEi, can

be calculated by employing Equation 7.36.

MFEi = (1/Nri) · Σ(n=1..Nri) |yn − ŷn|    (7.36)

The mean filtering error of all the proposed techniques is calculated for each x(t) activity by

employing Equation 7.36. The obtained results are summarized in Table 7.14.


Selected Window            W1      W2      W3
MFEi for the ARCD (dB)     -35.1   -34.4   -27.7
MFEi for the ARCR (dB)     -30.3   -26.7   -20.2
MFEi for the ARRD (dB)     -36.2   -30.5   -16.1
MFEi for the ARRR (dB)     -36.2   -23.5   -10.6

Table 7.14. Mean filtering error for each selected window for the ARCD, the ARCR, the ARRD and the ARRR techniques.

Table 7.14 shows that the online decimation of hck or hk in the proposed techniques causes a loss

of the desired filtering quality. Indeed, the filtering error increases with the increase in di. The

measure of this error can be used to decide an upper bound on di (by performing an offline

calculation), for which the decimated and scaled filters provide results with an acceptable

level of accuracy. The level of accuracy is application dependent. Moreover, for high precision

applications, an appropriate filter can be calculated online for each selected window at the cost of

an increased computational load. The process follows that of generating the reference filtered signal

yn, discussed above.

It is clear that the MFEi for the ARCR and the ARRR techniques is higher than that of the ARCD and

the ARRD techniques respectively (cf. Table 7.14). It is due to the fractional decimation of hck or

hk in the ARCR or the ARRR. It forces the use of the approximated coefficients of hck or hk for

filtering the resampled data lying in Wi, which translates into an increased filtering error for the

ARCR and the ARRR techniques compared to the ARCD and the ARRD ones respectively.

Similar to Section 7.6.1, this filter resampling error in the ARCR and the ARRR techniques can

be reduced to a certain extent by employing a higher order interpolator [127, 133]. It follows

that a certain increase in accuracy can be achieved at a certain loss of processing

efficiency.

The above results show that a higher di results in an increased MFEi. It is because the deviation

between the response of the reference filter and that of the decimated filter for Wi is

directly proportional to di. This statement is obvious from the MFE3 values of the proposed

techniques. The d3 employed in the ARCD and the ARCR is lower than the one employed in the

ARRD and the ARRR. Hence, the MFE3 of the ARCD and the ARCR is lower than that of the ARRD

and the ARRR. It is because of the available range of reference filters employed in the ARCD

and the ARCR, in contrast to the single reference filter employed in the ARRD and the ARRR.

7.7 Enhanced Adaptive Rate Filtering Techniques

The above discussion shows the pros and cons of the different proposed filtering techniques. It is

clear that the ARRD and the ARRR remain computationally efficient compared to the ARCD and

the ARCR (cf. Section 7.5.7.2). A shortcoming of the ARRD and the ARRR is that, for a

wideband signal, they can produce a higher filtering error than the ARCD and the

ARCR (cf. Section 7.6.2). In fact, for a wideband input, Fsmin and Fsmax are far apart, and in the

case of the ARRD and the ARRR a single reference filter is employed (cf. Sections 7.3.4 and

7.3.5). Hence, when Fsi approaches Fsmin, it results in a higher di and therefore an

increased MFEi. This error can be reduced by employing an appropriate bank of reference filters


like the one employed in the ARCD and the ARCR techniques. It hints that, by collectively

employing the strengths of the different proposed approaches, improved adaptive rate filtering solutions can

be achieved. The procedure for realizing this idea is detailed as follows.

7.7.1 EARD (Enhanced Activity Reduction by Filter Decimation) Technique

The EARD is a combination of the ARCD and the ARRD techniques. The idea behind it is to

achieve a solution which remains computationally efficient like the ARRD while providing better

quality results like the ARCD.

In the EARD, a bank of reference FIR filters is designed for a set of reference sampling

frequencies Fref. Fref and Fsmin define the upper and the lower bounds of this set respectively. Here,

Fsmin and Fref follow from Equation 7.9 and expression 7.16 respectively. The idea is quite

similar to the ARCD, but a difference exists in the choice of the upper bound of the set. In the ARCD,

Fsmax defines the upper bound instead of Fref (cf. Section 7.3.2). Depending on the targeted

application, an appropriate rule can be devised for distributing the different elements of the set within the

range [Fsmin; Fref]. In the proposed case, they are placed uniformly.

If Q is the length of Fref, then the process of computing the complete set is clear from the

following Equation.

Fref = {Fsmin, Fsmin + Δ, Fsmin + 2·Δ, …, Fsmin + (Q − 1)·Δ}, with Fsmin + (Q − 1)·Δ = Fref    (7.37)

Here, Δ is an offset and its value can be calculated by using Equation 7.38.

Δ = (Fref − Fsmin) / (Q − 1)    (7.38)
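Equations 7.37 and 7.38 can be checked directly; with the parameters of the illustrative example of Section 7.4 (Fsmin = 210 Hz, Fref = 2500 Hz, Q = 11), the sketch below reproduces the Frefc column of Table 7.15 (the function name is ours):

```python
def reference_frequencies(Fs_min, Fref, Q):
    """Uniformly spaced reference frequency set of Equation 7.37, with
    offset delta = (Fref - Fs_min) / (Q - 1) from Equation 7.38."""
    delta = (Fref - Fs_min) / (Q - 1)
    return [round(Fs_min + q * delta) for q in range(Q)]

# reference_frequencies(210, 2500, 11)
# -> [210, 439, 668, 897, 1126, 1355, 1584, 1813, 2042, 2271, 2500]
```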

During online processing, an appropriate reference filter is chosen for Wi. The choice is made on

the basis of Fref and the effective value of Fsi. If Fsi ≥ Fref, then the reference filter which is

designed for Fref is used for Wi. Otherwise, if Fsi < Fref, then the reference filter whose

corresponding value of Frefc is closest to and greater than or equal to Fsi is chosen for Wi.

The choice of Frsi is a function of Frefc and Fsi. The process of deciding Frsi and keeping it

coherent with Frefc is detailed as follows.

7.7.1.1 1st Filtering Case

This case holds if Fsi ≥ Fref. Here, Frsi is chosen equal to Fref and hck remains unchanged. This

choice of Frsi resamples Wi closer to the Nyquist rate, avoiding unnecessary

interpolations during the data resampling process. Hence, it makes the EARD computationally

efficient compared to the ARCD, which always resamples Wi at the same or an increased value of

Fsi (cf. Figure 7.6).


7.7.1.2 2nd Filtering Case

In this case, the condition Fref > Frefc = Fsi holds. Here, Frsi is chosen equal to Frefc and hck

remains unchanged.

7.7.1.3 3rd Filtering Case

In this case, the condition Fref > Frefc > Fsi holds. Here, Frsi is chosen equal to Fsi and hck is

decimated online in order to match Frefc to Frsi.

Similar to the ARCD, in the EARD a decimation factor di for Wi is calculated online by

employing Equation 7.13, which is later employed for decimating hck. di can be specific for

each selected window, depending upon Frsi and Frefc. Di = floor(di) is computed in order to

determine whether di is integral or fractional. If Di = di holds, then hck is decimated with Di to deliver

hji; the process is clear from Equation 7.14. If Di ≠ di, then hck is again decimated by employing

Di. It is achieved by adjusting Frsi as Frsi = Frefc / Di, which keeps Frsi coherent with Frefc

(cf. the Frsi values of Table 7.16). Similar to

the ARCD and the ARRD, in the EARD the effect of the online hck decimation is compensated by

weighting the decimated filter coefficients with Di. The process is clear from Equations 7.15 and

7.21. The complete procedure of obtaining Frsi and hji for the EARD technique is described in

Figure 7.14.

If Fsi ≥ Fref:         Frsi = Fref;   hji = hck
Else, if Fsi = Frefc:  Frsi = Frefc;  hji = hck
Else:                  Frsi = Fsi;  di = Frefc / Frsi;  Di = floor(di)
    If Di = di:        hji = Di · hc(Di·k)
    Else:              Frsi = Frefc / Di;  hji = Di · hc(Di·k)

Figure 7.14. Flowchart of the EARD technique.
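The selection part of Figure 7.14 can be sketched as follows (an illustration; the names are ours, and the fractional-case adjustment Frsi = Frefc/Di is the one that reproduces the Frsi column of Table 7.16):

```python
import math

def eard_select(Fs_i, Fref_set):
    """Frefc and Frsi choice of the EARD technique (Figure 7.14).
    Fref_set is the sorted reference frequency set of Equation 7.37;
    its last element is the upper bound Fref."""
    Fref = Fref_set[-1]
    if Fs_i >= Fref:                          # 1st filtering case
        return Fref, Fref                     # (Frefc, Frsi)
    Frefc = min(f for f in Fref_set if f >= Fs_i)
    if Fs_i == Frefc:                         # 2nd filtering case
        return Frefc, Frefc
    d = Frefc / Fs_i                          # 3rd case: hck is decimated
    D = math.floor(d)
    Frs_i = Fs_i if D == d else Frefc / D     # keep Frefc / Frsi integral
    return Frefc, Frs_i
```

For the three selected windows of the example (Fsi = 6000, 1083 and 464 Hz), this returns the (Frefc, Frsi) pairs (2500, 2500), (1126, 1126) and (668, 668) of Table 7.16.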

7.7.2 EARR (Enhanced Activity Reduction by Filter Resampling) Technique

Similar to the EARD, the EARR technique is a combination of the ARCR and the ARRR

techniques. The filtering procedure of the EARR is similar to that of the EARD, except for a

fractional di. In this case, hck is decimated with the fractional di. The fractional decimation is

achieved by resampling hck at Frsi. The reference filter decimation effect is compensated by

scaling the decimated filter impulse response with di. The complete procedure of obtaining Frsi

and hji for the EARR technique is illustrated in Figure 7.15.


If Fsi ≥ Fref:         Frsi = Fref;   hji = hck
Else, if Fsi = Frefc:  Frsi = Frefc;  hji = hck
Else:                  Frsi = Fsi;  di = Frefc / Frsi;  Di = floor(di)
    If Di = di:        hji = di · hc(di·k)
    Else:              hji = Resample(hck @ Frsi);  hji = di · hji

Figure 7.15. Flowchart of the EARR technique.

7.7.3 ARDI (Activity Reduction by Filter Decimation/Interpolation) Technique

The ARDI is a modification of the EARR technique. It further relaxes the choice of Frefc for Wi.

The ARDI differs from the EARR in two aspects: firstly, in the choice of Frefc when Fsi < Fref,

and secondly, in the filtering case when Fref > Frefc ≠ Fsi.

In the ARDI, if Fsi < Fref, then the reference filter whose corresponding value of Frefc is closest to

Fsi is chosen for Wi. For the filtering case when Fref > Frefc ≠ Fsi, Frsi is chosen equal to Fsi and

hck is adjusted online in order to match Frefc to Frsi. If Fsi < Frefc, then hck is decimated online for

Wi, by employing the decimation factor di. The process of hck decimation is similar to the one

employed in the EARR technique. On the contrary, if Fsi > Frefc, then hck is upsampled online for Wi. It

is achieved by resampling hck at Frsi. The upsampling factor for Wi can be computed by

employing the following Equation.

ui = Frsi / Frefc    (7.39)

The upsampling varies the energy of the upsampled filter hji compared to hck. This effect is

compensated by scaling the coefficients of the upsampled filter with 1/ui. The process is clear from

Equation 7.40.

hji = (1/ui) · hc(ui·k)    (7.40)
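The relaxed reference-filter choice of the ARDI can be sketched as follows (an illustration; the function name is ours):

```python
def ardi_reference(Fs_i, Fref_set):
    """ARDI reference-filter choice (Section 7.7.3): pick the Frefc
    closest to Fs_i, instead of the closest value above Fs_i used by
    the EARD and the EARR."""
    Fref = Fref_set[-1]
    if Fs_i >= Fref:
        return Fref
    return min(Fref_set, key=lambda f: abs(f - Fs_i))
```

With the bank of Table 7.15, Fs3 = 464 Hz now maps to Frefc = 439 Hz, where the EARD and the EARR pick 668 Hz; hck is then upsampled with u3 = Frs3/Frefc = 464/439 following Equation 7.39 (cf. Table 7.18).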

The complete procedure of obtaining Frsi and hji for the ARDI technique is described in Figure

7.16.


If Fsi ≥ Fref:         Frsi = Fref;   hji = hck
Else, if Fsi = Frefc:  Frsi = Frefc;  hji = hck
Else:                  Frsi = Fsi
    If Fsi < Frefc:    di = Frefc / Frsi;  Di = floor(di)
        If Di = di:    hji = di · hc(di·k)
        Else:          hji = Resample(hck @ Frsi);  hji = di · hji
    Else:              ui = Frsi / Frefc;  hji = Resample(hck @ Frsi);  hji = (1/ui) · hji

Figure 7.16. Flowchart of the ARDI technique.

In order to illustrate the performances of the EARD, the EARR and the ARDI techniques, the same

example described in Section 7.4 is employed. Hence, Fsmin and Fref become 210 Hz and 2500 Hz

respectively. A bank of eleven low-pass filters is implemented; thus Δ = 229 Hz in this case (cf.

Equation 7.38). The parameters of the reference FIR filters are summarized in the following Table.

Cut-off Freq (Hz)   Transition Band (Hz)   Pass-Band Ripples (dB)   Stop-Band Ripples (dB)   Frefc (Hz)   Pc
30                  30~80                  -25                      -80                       210           8
30                  30~80                  -25                      -80                       439          21
30                  30~80                  -25                      -80                       668          33
30                  30~80                  -25                      -80                       897          45
30                  30~80                  -25                      -80                      1126          57
30                  30~80                  -25                      -80                      1355          69
30                  30~80                  -25                      -80                      1584          80
30                  30~80                  -25                      -80                      1813          92
30                  30~80                  -25                      -80                      2042         104
30                  30~80                  -25                      -80                      2271         115
30                  30~80                  -25                      -80                      2500         127

Table 7.15. Summary of the reference filters bank parameters, implemented for the EARD, the EARR and the ARDI techniques.

The values of Frefc, Frsi, di, Nri and Pi for the EARD, the EARR and the ARDI techniques are

summarized in Tables 7.16, 7.17 and 7.18 respectively.


Wi | Fsi (Hz) | Frefc (Hz) | Frsi (Hz) | Nri (Samples) | Di | Pi
1st | 6000 | 2500 | 2500 | 1250 | 1.0 | 127
2nd | 1083 | 1126 | 1126 | 1126 | 1.0 | 57
3rd | 464 | 668 | 668 | 668 | 1.0 | 33

Table 7.16. Values of Frefc, Frsi, Nri, Di and Pi for each selected window in the EARD technique.

Wi | Fsi (Hz) | Frefc (Hz) | Frsi (Hz) | Nri (Samples) | di | Pi
1st | 6000 | 2500 | 2500 | 1250 | 1.0 | 127
2nd | 1083 | 1126 | 1083 | 1083 | 1.1 | 55
3rd | 464 | 668 | 464 | 464 | 1.4 | 23

Table 7.17. Values of Frefc, Frsi, Nri, di and Pi for each selected window in the EARR technique.

Wi | Fsi (Hz) | Frefc (Hz) | Frsi (Hz) | Nri (Samples) | di / ui | Pi
1st | 6000 | 2500 | 2500 | 1250 | 1.0 | 127
2nd | 1083 | 1126 | 1083 | 1083 | 1.1 | 55
3rd | 464 | 439 | 464 | 464 | 1.1 | 23

Table 7.18. Values of Frefc, Frsi, Nri, di/ui and Pi for each selected window in the ARDI technique.

7.7.4 Complexity of the EARD, the EARR and the ARDI Techniques

In the EARD, the choice of a reference filter hck is made for Wi. In the worst case, it requires Q comparisons, where Q is the length of the reference frequencies set Fref. The filtering case selection requires two comparisons (cf. Figure 7.14). The data resampling operation is performed by employing the NNRI (Nearest Neighbour Resampling Interpolation). It requires Ni + 2Nri comparisons and 2Nri additions for Wi.
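The NNRI described above can be sketched as a two-pointer scan over the non-uniform samples; the function below is an illustrative sketch, not the thesis implementation:

```python
def nnri_resample(t, x, Frs, t0=None, N=None):
    """Nearest Neighbour Resampling Interpolation sketch: each uniform
    resampling instant takes the amplitude of the nearest non-uniform
    sample.  (t, x) are assumed sorted in time."""
    if t0 is None:
        t0 = t[0]
    if N is None:
        N = int((t[-1] - t0) * Frs) + 1   # uniform samples covering the window
    out, j = [], 0
    for n in range(N):
        tr = t0 + n / Frs                 # n-th uniform instant
        # advance while the next non-uniform sample is at least as close
        while j + 1 < len(t) and abs(t[j + 1] - tr) <= abs(t[j] - tr):
            j += 1
        out.append(x[j])
    return out
```

Each uniform instant costs a few comparisons and the index advances monotonically, which is consistent with the Ni + 2Nri comparisons and 2Nri additions counted above.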

In the case when Fsi < Frefc, the decimation of hck is required. This process is achieved in a similar way as in the ARCD and the ARRD techniques. Following it, the combined computational complexity of the EARD technique, CEARD, is given by Equation 7.41.

C_{EARD} = \sum_{i=1}^{I} \Big[ \underbrace{\alpha}_{\text{Floor}} + \underbrace{\alpha}_{\text{Divisions}} + \underbrace{Q + 2 + N_i + 2Nr_i}_{\text{Comparisons}} + \underbrace{2Nr_i + Nr_i \cdot P_i}_{\text{Additions}} + \underbrace{Nr_i \cdot (P_i + 1)}_{\text{Multiplications}} \Big] \quad (7.41)

The operation cost of the EARR is analogous to that of the EARD, except for the fractional di. In this case, the EARR achieves the fractional decimation of hck by employing the same procedure as adopted in the ARCR technique. Hence, the combined computational complexity of the EARR technique, CEARR, is given by Equation 7.42.

C_{EARR} = \sum_{i=1}^{I} \Big[ \underbrace{\alpha}_{\text{Floor}} + \underbrace{\alpha}_{\text{Divisions}} + \underbrace{Q + 2 + N_i + 2Nr_i + \beta \cdot (P_c + 2P_i)}_{\text{Comparisons}} + \underbrace{2Nr_i + 2\beta P_i + Nr_i \cdot P_i}_{\text{Additions}} + \underbrace{\beta P_i + Nr_i \cdot (P_i + 1)}_{\text{Multiplications}} \Big] \quad (7.42)


In Equations 7.41 and 7.42, i = 1, 2, 3, ..., I represents the selected window index. α and β are multiplying factors: α is 1 in the case when Fsi < Fref and Fsi < Frefc, and 0 otherwise; β is 0 in the case when di = Di, and 1 otherwise.

In the ARDI, the filtering case selection requires three comparisons (cf. Figure 7.16). In the case when Fsi < Fref and Fsi < Frefc, hck is decimated online for Wi. The employed decimation procedure is the one adopted in the EARR. Conversely, in the case when Fsi < Fref and Fsi > Frefc, hck is upsampled online for Wi. The NNRI is employed for the upsampling purpose, which performs Pc + 2Pi comparisons and 2Pi additions to deliver hji. The remaining operations of the ARDI and the EARR techniques are similar. Hence, the combined computational complexity of the ARDI technique, CARDI, is given by Equation 7.43.

C_{ARDI} = \sum_{i=1}^{I} \Big[ \underbrace{\alpha}_{\text{Floor}} + \underbrace{\alpha}_{\text{Divisions}} + \underbrace{Q + 3 + N_i + 2Nr_i + \beta \cdot (P_c + 2P_i)}_{\text{Comparisons}} + \underbrace{2Nr_i + 2\beta P_i + Nr_i \cdot P_i}_{\text{Additions}} + \underbrace{\beta P_i + Nr_i \cdot (P_i + 1)}_{\text{Multiplications}} \Big] \quad (7.43)

In Equation 7.43, i = 1, 2, 3, ..., I represents the selected window index. α and β are multiplying factors: α is 1 in the case when Fsi < Fref and Fsi ≤ Frefc, and 0 otherwise; β is 0 in the case when di = Di, and 1 either when di ≠ Di or when Fsi > Frefc. Note that the cases di ≠ Di and Fsi > Frefc are mutually exclusive, and the validity of either one of them results in β = 1 (cf. Figure 7.16).

7.7.5 Complexity Comparison with the Classical Approach

By following the assumptions made in Section 7.5.6, the operations count of the EARD, the EARR and the ARDI are unified in terms of additions and multiplications. This results in the simplified complexities of these techniques, given by Equations 7.44, 7.45 and 7.46 respectively. It is done in order to make an approximate complexity comparison of the enhanced adaptive rate filtering techniques with the classical one.

C_{EARD} = \sum_{i=1}^{I} \Big[ \underbrace{Q + 2 + N_i + 4Nr_i + \alpha + Nr_i \cdot P_i}_{\text{Additions}} + \underbrace{\alpha + Nr_i \cdot (P_i + 1)}_{\text{Multiplications}} \Big] \quad (7.44)

C_{EARR} = \sum_{i=1}^{I} \Big[ \underbrace{Q + 2 + N_i + 4Nr_i + \alpha + \beta \cdot (P_c + 4P_i) + Nr_i \cdot P_i}_{\text{Additions}} + \underbrace{\alpha + \beta P_i + Nr_i \cdot (P_i + 1)}_{\text{Multiplications}} \Big] \quad (7.45)

C_{ARDI} = \sum_{i=1}^{I} \Big[ \underbrace{Q + 3 + N_i + 4Nr_i + \alpha + \beta \cdot (P_c + 4P_i) + Nr_i \cdot P_i}_{\text{Additions}} + \underbrace{\alpha + \beta P_i + Nr_i \cdot (P_i + 1)}_{\text{Multiplications}} \Big] \quad (7.46)

Computational gains of the EARD, the EARR and the ARDI over the classical one are computed

by employing results summarized in Tables 7.16, 7.17 and 7.18. The results are computed for

different x(t) time spans and are summarized in the following Tables.


Time Span (Seconds) | L1 | L2 | L3 | Total x(t) span
Gain in Additions | 1.9 | 4.5 | 15.4 | 24.8
Gain in Multiplications | 2.0 | 5.1 | 17.6 | 25.7

Table 7.19. Computational gain of the EARD over the classical one for different time spans of x(t).

Time Span (Seconds) | L1 | L2 | L3 | Total x(t) span
Gain in Additions | 1.9 | 5.4 | 27.5 | 29.7
Gain in Multiplications | 2.0 | 5.8 | 30.5 | 30.1

Table 7.20. Computational gain of the EARR over the classical one for different time spans of x(t).

Time Span (Seconds) | L1 | L2 | L3 | Total x(t) span
Gain in Additions | 1.9 | 5.4 | 27.8 | 29.8
Gain in Multiplications | 2.0 | 5.8 | 30.5 | 30.1

Table 7.21. Computational gain of the ARDI over the classical one for different time spans of x(t).

The above results confirm that the EARD, the EARR and the ARDI lead to a drastic reduction in the number of operations compared to the classical filtering approach. This reduction in operations is achieved by adapting the sampling frequency and the filter order according to the input signal local variations.

7.7.6 Complexity Comparison with the ARCD, the ARCR, the ARRD and the ARRR techniques

In order to determine the computational enhancement of the EARD, the EARR and the ARDI, their processing cost is compared with that of the ARCD, the ARCR, the ARRD and the ARRR. The EARD is a combination of the ARCD and the ARRD techniques. Hence, it is most relevant to make a complexity comparison between the EARD, the ARCD and the ARRD. For the studied example, the EARD gains computational efficiency compared to the ARCD (cf. Tables 7.9 and 7.19). This is achieved by resampling Wi at a reduced rate every time Fsi > Fref holds (cf. Figure 7.14). The EARD remains slightly less efficient than the ARRD, because of the higher Frs3 chosen in the EARD case compared to the ARRD one (cf. Tables 7.7 and 7.16).

Similarly, a comparison between the EARR, the ARCR and the ARRR is made. The EARR remains more efficient than the ARCR technique (cf. Tables 7.10 and 7.20). The reason is the reduced Frs1 employed in the EARR case compared to the ARCR one (cf. Tables 7.6 and 7.17). The EARR also achieves a processing gain over the ARRR (cf. Tables 7.12 and 7.20), because of employing lower order reference filters (h2k and h3k) in the EARR compared to the ARRR ones (cf. Tables 7.8 and 7.17).

The ARDI is quite similar to the EARR. For the studied example, it remains more efficient than the EARR in terms of additions count (cf. Tables 7.20 and 7.21). The reason is that the order of h3k employed in the ARDI is lower than that of the EARR one (cf. Tables 7.17 and 7.18).


7.7.7 Complexity Comparison Among the EARD, the EARR and the ARDI techniques

In continuation of Sections 7.5.7.1 and 7.5.7.2, a complexity comparison among the EARD, the EARR and the ARDI techniques is also made, by employing Equations 7.44 to 7.46. The EARR remains computationally efficient compared to the EARD, in terms of additions and multiplications, as long as the conditions given by Expressions 7.47 and 7.48 remain true. Note that Nri and Pi can be different for the EARD and the EARR techniques (cf. Tables 7.16 and 7.17).

\big[ Nr_i \cdot (4 + P_i) \big]_{EARD} > \big[ Nr_i \cdot (4 + P_i) + P_c + 4P_i \big]_{EARR} \quad (7.47)

\big[ Nr_i \cdot (1 + P_i) \big]_{EARD} > \big[ Nr_i \cdot (1 + P_i) \big]_{EARR} \quad (7.48)

The ARDI is a slight modification of the EARR. From Equations 7.45 and 7.46, it is clear that the multiplications count remains the same for both. However, the additions count can be different, because of the case when Fsi < Fref and Fsi ≥ Frefc (cf. Figures 7.15 and 7.16). The ARDI requires fewer additions than the EARR as long as Condition 7.49 remains true.

(P_c)_{EARR} > (P_c)_{ARDI} \quad (7.49)

Here, Pc is the order of the chosen reference filter for Wi. For the studied signal, Conditions 7.47 and 7.48 remain true for W2 and W3. This results in the higher gains in additions and multiplications of the EARR compared to the EARD (cf. Tables 7.19 and 7.20). Similarly, Condition 7.49 remains true for W3, which leads to a higher gain in additions of the ARDI compared to the EARR (cf. Tables 7.20 and 7.21).
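Condition 7.49 can be checked directly against the reported tables. The sketch below is a plain lookup, with the Pc values taken from Table 7.15 and the per-window Frefc values from Tables 7.17 and 7.18:

```python
# Order Pc of each reference filter, keyed by its Frefc (Table 7.15)
PC = {210: 8, 439: 21, 668: 33, 897: 45, 1126: 57, 1355: 69,
      1584: 80, 1813: 92, 2042: 104, 2271: 115, 2500: 127}

# Frefc chosen per selected window (Tables 7.17 and 7.18)
frefc_earr = {"W1": 2500, "W2": 1126, "W3": 668}
frefc_ardi = {"W1": 2500, "W2": 1126, "W3": 439}

# Condition 7.49: the ARDI needs fewer additions for a window whenever
# the EARR's chosen reference filter order exceeds the ARDI's.
wins = {w: PC[frefc_earr[w]] > PC[frefc_ardi[w]] for w in frefc_earr}
```

Only W3 satisfies the condition (33 > 21), which matches the higher additions gain of the ARDI reported above.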

From the above discussion, it is clear that the EARR and the ARDI perform a reduced number of operations compared to the EARD. This is achieved with the fractional adjustment of hck, which can result in a quality compromise of the EARR and the ARDI compared to the EARD. This issue is addressed in the following.

7.7.8 Processing Error of the EARD, the EARR and the ARDI techniques

7.7.8.1 Approximation Error

The mean approximation error MAEi of the EARD, the EARR and the ARDI techniques is

computed by employing Equation 7.35. The results obtained for each selected window are

summarized in Table 7.22.

Selected Window | W1 | W2 | W3
MAEi for the EARD (dB) | -18.5 | -19.4 | -20.5
MAEi for the EARR (dB) | -18.5 | -19.6 | -20.9
MAEi for the ARDI (dB) | -18.5 | -19.6 | -20.9

Table 7.22. Mean approximation error of each selected window for the EARD, the EARR and the ARDI.


Values of Frsi for the EARD are close to those of the ARRD (cf. Tables 7.7 and 7.16); that is why their MAEi values are quite similar (cf. Tables 7.13 and 7.22). On the other hand, values of Frsi for the EARR and ARDI techniques are the same as those employed in the ARRR (cf. Tables 7.8, 7.17 and 7.18), which results in the same MAEi for each of them (cf. Tables 7.13 and 7.22).

7.7.8.2 Filtering Error

The mean filtering error for the EARD, the EARR and the ARDI is calculated for each x(t)

activity by employing Equation 7.36. The obtained results are summarized in Table 7.23.

Selected Window | W1 | W2 | W3
MFEi for the EARD (dB) | -36.2 | -36.3 | -35.7
MFEi for the EARR (dB) | -36.2 | -31.4 | -27.4
MFEi for the ARDI (dB) | -36.2 | -31.4 | -31.2

Table 7.23. Mean filtering error of each selected window for the EARD, the EARR and the ARDI.

MFEi for the EARD, the EARR and the ARDI is lower than for the ARRD and the ARRR (cf. Tables 7.14 and 7.23). This is achieved by employing an appropriate range of reference filters in place of the unique one employed in the ARRD and the ARRR (cf. Sections 7.3.4 and 7.7.1).

Although the same Q = 11 as in the ARCD and the ARCR is employed, a comparative MFEi reduction is achieved in the EARD, the EARR and the ARDI. It is because of choosing [Fsmin; Fref] as the width of Fref, instead of [Fsmin; Fsmax] as chosen in the ARCD and the ARCR. As Fref < Fsmax, for the same Q the reference frequency step remains lower in the EARD, the EARR and the ARDI compared to the ARCD and the ARCR one (cf. Equations 7.11 and 7.38). This increases the probability of choosing a Frefc closer to Fsi and hence of reducing di, which in consequence reduces MFEi, in the case of the EARD, the EARR and the ARDI.
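The effect of the narrower reference grid can be illustrated numerically. The sketch below assumes the selection rule "smallest Frefc ≥ Fsi" and Fsmax = 6000 Hz for the ARCD/ARCR-style bank; both assumptions are illustrative:

```python
def choose_frefc(Fsi, bank):
    """Reference frequency chosen for a window: the smallest Frefc in
    the bank that is >= Fsi (assumed selection rule)."""
    return min(f for f in bank if f >= Fsi)

# Two banks with the same Q = 11 but different widths:
# [Fsmin; Fref] (step 229 Hz) versus [Fsmin; Fsmax] (step (6000-210)/10 = 579 Hz)
fine   = [210 + k * 229.0 for k in range(11)]
coarse = [210 + k * 579.0 for k in range(11)]

Fsi = 464.0                                   # W3 of the studied example
di_fine   = choose_frefc(Fsi, fine) / Fsi     # 668/464, about 1.44
di_coarse = choose_frefc(Fsi, coarse) / Fsi   # 789/464, about 1.70
```

The finer grid places Frefc closer to Fsi, so di stays smaller, which is the mechanism behind the MFEi reduction noted above.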

7.8 Adaptive Rate Filter Architecture

A system level architecture common to the proposed adaptive rate filtering techniques is shown in Figure 7.17. The different variables in this Figure are defined as follows.

x(t) : Band-pass filtered input analog signal.
(tn, xn) : Time-amplitude pair of the nth non-uniform sample captured by the LCADC.
Frsi : Resampling frequency for the ith selected window.
xrn : nth resampled data point.
yn : nth filtered output.
SP : Control signal, shows that a new sample is captured by the LCADC.
Req1 : Control signal, triggers the data write process between the LCADC and Buffer1.
Req2 : Control signal, triggers the data read process by the ASA and parameters adaptor block from Buffer1.
SP1 : Control signal, shows that a sample is processed by the ASA and parameters adaptor block.
NW1 : Control signal, shows that a new window is selected by the ASA.


FCS : Control signal, shows which filtering case is selected for the ith selected window.
NW2 : Control signal, shows that the previous Frsi is properly transferred to the Resampler.
Req3 : Control signal, triggers the Frsi transfer process between the ASA, the Resampler and the Filters bank, depending upon the state of the FCS, the NW2 and the NW3 signals.
SP2 : Control signal, shows that the previous sample is processed by the Resampler.
Req4 : Control signal, triggers the data read process by the Resampler from the Buffer1.
Req5 : Control signal, triggers the data write process between the Resampler and the Buffer2.
SP3 : Control signal, shows that the previous sample is processed by the Filters bank.
Req6 : Control signal, triggers the data read process by the Filters bank from the Buffer2.
NW3 : Control signal, shows that the previous Frsi is properly transferred to the Filters bank.
P-w-1 : Data write pointer, manages writing the new data from the LCADC to the Buffer1.
P-r-1 : Data read pointer, manages reading the new data by the ASA from the Buffer1.
P-r-2 : Data read pointer, manages reading the new data by the Resampler from the Buffer1.
P-w-3 : Data write pointer, manages writing the new data from the Resampler to the Buffer2.
P-r-3 : Data read pointer, manages reading the new data by the Filters bank from the Buffer2.

[Figure content: LCADC -> Buffer1 -> ASA & Parameters Adaptor -> Resampler -> Buffer2 -> Reference Filters Bank, sequenced by an FSM through the control signals (SP, SP1-SP3, Req1-Req6, FCS, NW1-NW3) and the buffer pointers (P-w-1, P-r-1, P-r-2, P-w-3, P-r-3); ROMs hold the filter coefficients.]

Figure 7.17. System level architecture.

The proposed architecture can easily be optimized for any of the proposed filtering solutions. Its implementation and circuit-level performance evaluation is a future task.


7.9 Conclusion

Filtering is a basic operation, required in almost all signal processing systems. Basic concepts of the filtering operation have been reviewed. Classical filtering is a time-invariant process, which results in an increased processing load when treating time-varying signals. This drawback can be resolved, up to a certain extent, by employing multirate filtering techniques [128-132]. The multirate filtering principle has been described.

Following the idea of the multirate filtering approach, novel adaptive rate filtering techniques have been devised. These are especially well suited for low activity sporadic signals. For the proposed filtering techniques, a reference filters bank or a single reference filter is computed offline, by taking into account the input signal statistical characteristics and the application requirements. A complete procedure of obtaining the resampling frequency Frsi and the decimated filter coefficients hji for Wi is described for each proposed technique. The computational complexities of the proposed techniques are deduced and compared with the classical one. It is shown that the proposed techniques result in a gain of more than one order of magnitude in terms of additions and multiplications over the classical one. This is achieved due to the joint benefits of the LCADC, the ASA and the resampling, as they allow the online adaptation of parameters (Fsi, Frsi, Ni, Nri, di and Pi) by exploiting the input signal local variations. This drastically reduces the total number of operations and therefore the power consumption compared to the classical case. A complexity comparison among the proposed techniques is also made.

Methods to compute the approximation and the filtering errors of the proposed techniques have also been devised. It is shown that the errors introduced by the proposed techniques are minor ones in the studied case. Moreover, a higher precision can be achieved by increasing the AADC resolution and the interpolation order. Thus, a suitable solution can be proposed for a given application by making an appropriate trade-off between the accuracy level and the computational load.

Enhanced adaptive rate filtering techniques have been developed by smartly combining the interesting features of the different proposed techniques. It is shown that the enhanced versions outperform the previous ones in terms of computational efficiency and processing quality. The effectiveness of the proposed filtering techniques for real life signals will be described in Chapter 10. A detailed study of the proposed filtering techniques' computational complexities, taking into account the real processing cost at circuit level, is a future task.


Part-II Chapter 8 Signal Driven Adaptive Resolution Analysis

Chapter 8

SIGNAL DRIVEN ADAPTIVE RESOLUTION ANALYSIS

Almost all real life signals are non-stationary in nature: their frequency contents vary with time. For proper characterization of such signals, a time-frequency representation is required. Classically, the STFT (Short-Time Fourier Transform) is employed for this purpose [118]. The STFT limitation is its fixed time-frequency resolution. To overcome this drawback, an enhanced STFT version is devised [48, 49, 145]. It is based on the principles of activity selection and local features extraction. These allow it to adapt its sampling frequency and the window function length and shape by following the input signal local variations. This adaptation gives the proposed technique its appealing features, namely an adaptive time-frequency resolution and computational efficiency compared to the classical STFT.

The STFT principle is reviewed in Section 8.1. The proposed technique is realised by smartly combining the interesting features of the non-uniform and the uniform signal processing tools. A detailed description of the proposed approach is given in Section 8.2. Its appealing features are illustrated in Section 8.3. Section 8.4 deduces its computational complexity and compares it with the classical STFT. The proposed technique's processing error is discussed in Section 8.5. Section 8.6 finally concludes the chapter.

8.1 Time Frequency Analysis

The FT (Fourier Transform) provides the input signal frequency information: it tells how much of each frequency exists in the signal, but it does not inform about when in time these frequency contents appear. Hence, the FT is a good tool for analyzing stationary signals, whose frequency content does not vary with time; all the frequency components exist all the time.

The STFT (Short-Time Fourier Transform) is a classical tool used for the time-frequency characterization of non-stationary signals [118]. It is a slight modification of the FT, which allows inserting the time stamp into the Fourier transformed signal.

The main difference between the STFT and the FT is the windowing operation. The STFT divides the signal into segments small enough that, within each, the signal can be assumed to be stationary. For this purpose a windowing function w is employed. The width of this window must be equal to the input signal segment where this stationarity condition is valid [118]. The STFT can be expressed formally by Equation 8.1.


X(\tau, f) = \int_{\tau - L/2}^{\tau + L/2} x(t) \cdot w(t - \tau) \cdot e^{-j 2 \pi f t} \, dt \quad (8.1)

In Equation 8.1, w is the chosen window function, τ is the window shift time, and L is the window function length in seconds.

The discrete version of the STFT is of more interest, as it enables computing the STFT of a sampled signal. The STFT of a sampled signal xn is determined by computing the DFT (Discrete Fourier Transform) of an N samples segment centred on τ, which describes the spectral contents of xn around the instant τ. The parameter N is defined by Equation 8.2.

N = L \cdot F_S \quad (8.2)

In Equation 8.2, L is the effective length in seconds of the window function wn and FS is the

sampling frequency. The discrete STFT can be expressed formally by Equation 8.3.

X[\tau, f] = \sum_{n = \tau - N/2}^{\tau + N/2} x_n \cdot w_{n - \tau} \cdot e^{-j 2 \pi f n} \quad (8.3)

In Equation 8.3, f is the frequency index, which is normalised with respect to FS. The signal flow of the discrete STFT is shown in Figure 8.1.

[Figure content: band-limited analog signal x(t) -> ADC -> sampled signal xn -> windowing with wn -> windowed signal xwn -> DFT -> X[τ, f].]

Figure 8.1. Block Diagram of the STFT.
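A minimal sketch of the discrete STFT of Equation 8.3, assuming a rectangular window and non-overlapping blocks (block slicing and naming are illustrative):

```python
import cmath

def discrete_stft(x, N):
    """Rectangular-window STFT sketch: the DFT of each consecutive
    non-overlapping N-sample block of x (w_n = 1 inside the block)."""
    spectra = []
    for start in range(0, len(x) - N + 1, N):
        block = x[start:start + N]
        spectra.append([
            sum(block[n] * cmath.exp(-2j * cmath.pi * f * n / N)
                for n in range(N))           # DFT bin f of this block
            for f in range(N)
        ])
    return spectra                            # one spectrum per time position
```

Each inner list plays the role of X[τ, f] for one window position τ.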

Parameters L and FS control the STFT time and frequency resolution [118]. In the classical case, the input signal is sampled at a fixed FS, regardless of its local variations. Here, FS is chosen by taking into account the highest instantaneous frequency of the input signal. Hence, a fixed L results in a fixed N (cf. Equation 8.2). In the case when the spectrum of each windowed block is calculated with respect to τ and no overlapping is performed between consecutive blocks, the time resolution Δt and the frequency resolution Δf of the STFT can be defined by Equations 8.4 and 8.5 respectively.

\Delta t = L \quad (8.4)

\Delta f = \frac{F_S}{N} \quad (8.5)


Equation 8.5 shows that for a fixed FS, Δf can be improved by increasing N. But increasing N requires increasing L, which will reduce the STFT time resolution (cf. Equation 8.4). Thus, a larger L provides a better Δf but a poorer Δt, and vice versa. This conflict between Δf and Δt shows the STFT limitation, which is the reason for the creation of the MRA (multi-resolution analysis) techniques [135-137]. The MRA techniques provide a good frequency but a poor time resolution for low-frequency events, and a good time but a poor frequency resolution for high-frequency events. This is the type of analysis best suited for most real life signals [135].
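The trade-off can be made concrete: combining Equations 8.2, 8.4 and 8.5 gives Δt·Δf = L·(FS/N) = 1, so the two resolutions cannot be improved simultaneously. A small sketch:

```python
def stft_resolution(L, Fs):
    """Equations 8.2, 8.4 and 8.5: with N = L*Fs, the time resolution is
    dt = L and the frequency resolution is df = Fs/N = 1/L, so their
    product is fixed at 1 - the STFT resolution trade-off."""
    N = L * Fs
    return L, Fs / N          # (dt, df)

dt, df = stft_resolution(L=0.1, Fs=2500)   # dt = 0.1 s, df = 10 Hz
```

Halving L halves dt but doubles df, and vice versa, which is exactly the conflict described above.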

In the proposed approach, the fixed resolution dilemma is resolved up to a certain extent by revising the STFT [48, 49, 145]. The proposed STFT is a smart alternative to the MRA techniques. The details of this realisation are given in the following Section.

8.2 Proposed Adaptive Resolution Short-Time Fourier Transform

The motivation behind the proposed technique is to achieve a smart time-frequency representation of time-varying signals. The idea is to adapt the time-frequency resolution along with the computational load by following the input signal local characteristics. In order to realize this idea, a smart combination of the non-uniform and the uniform signal processing tools is employed. The proposed technique principle is depicted in Figure 8.2.

[Figure content: analog signal y(t) -> Band-Pass Filter [fmin; fmax] -> x(t) -> LCADC -> non-uniformly sampled signal (xn, tn) -> EASA -> selected signal (xs, ts) -> Resampler (at Frsi) -> uniformly sampled signal (xrn, trn) -> windowing (switched by the window decision Di between states 1 and 0) -> windowed signal (xwn) -> DFT -> X[τi, fi]; a parameters adaptor and window selector derives the local parameters of Wi from the EASA output and the reference parameters.]

Figure 8.2. Block diagram of the proposed STFT.

The band-limited analog signal is acquired by employing the LCADC, which provides a non-uniform time partitioned signal at its output. The LCADC output can be used directly for further non-uniform digital analysis [13, 40, 53]. However, in the studied case, the non-uniformity of the sampling process, which yields information on the signal local features, is employed to select only the relevant signal parts with the EASA (Enhanced Activity Selection Algorithm). Furthermore, characteristics of each selected part are analyzed and are employed later on to adapt


the proposed system parameters accordingly. The complete procedure of the activity selection and the local features extraction is detailed in Section 5.1.2. The details of realizing the adaptive time-frequency resolution analysis are given in the following.

8.2.1 Adaptive Rate Sampling

The non-uniformity in the LCADC output is a function of the input signal local spectral properties [26, 40]. Hence, the sampling rate of the LCADC is not unique and is piloted by the input signal variations [39]. Let Fsi represent the LCADC sampling frequency for the ith selected window Wi. Fsi can be specific for each selected window, depending upon the Wi length Li in seconds and the slope of the x(t) part lying within this window [44]. It can be calculated by using the following equations.

L_i = t_{max_i} - t_{min_i} \quad (8.6)

Fs_i = \frac{N_i}{L_i} \quad (8.7)

Here, tmaxi and tmini are the final and the initial times of Wi, and Ni is the number of non-uniform samples lying in Wi. Please note that the upper and lower bounds on Fsi are posed by Fsmax and Fsmin respectively (cf. Equations 5.4 and 5.5).
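Equations 8.6 and 8.7 amount to a two-line computation; the helper below is an illustrative sketch that takes the non-uniform sampling instants of one selected window:

```python
def local_sampling_frequency(t_window):
    """Equations 8.6 and 8.7: window length Li = tmax_i - tmin_i and the
    local LCADC sampling frequency Fsi = Ni / Li, where Ni is the number
    of non-uniform sampling instants t_window of the i-th window."""
    Li = t_window[-1] - t_window[0]
    return Li, len(t_window) / Li
```

For example, five samples spread over one second give Li = 1 s and Fsi = 5 Hz.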

The selected signal obtained with the EASA is resampled uniformly before proceeding towards further processing. The resampling frequency for the ith selected window, Frsi, is chosen depending upon the corresponding extracted features. Once the resampling is done, there are Nri uniform samples in Wi. The selection procedure of Frsi is detailed as follows.

In the proposed system, a reference sampling frequency Fref is chosen such that it remains greater than and closest to the Nyquist sampling frequency FNyq = 2.fmax. Here, fmax is the input signal x(t) bandwidth. An appropriate Frsi is chosen depending upon Fref and the effective value of Fsi.

For the case Fsi ≥ Fref, Frsi is chosen as Frsi = Fref. This is done in order to resample the selected data lying in Wi close to the Nyquist frequency. It avoids unnecessary interpolations during the data resampling process and so reduces the computational load of the proposed technique.

In the opposite case, when Fsi < Fref, Frsi is chosen as Frsi = Fsi. In this case, it appears that the data lying in Wi may be resampled at a frequency which is less than the Nyquist frequency of x(t), and so could cause aliasing. However, the sampling rate of the LCADC varies according to the slope of x(t) [43, 97, 98]: a high frequency signal part has a high slope and the LCADC samples it at a higher rate, and vice versa. Hence, a signal part with only low frequency components can be sampled by the LCADC at a sub-Nyquist frequency of x(t). But as long as the x(t) amplitude dynamic Δx(t) is adapted to the order of the maximal amplitude range of the LCADC, 2Vmax, it crosses enough consecutive thresholds for a suitable choice of M (application dependent). Therefore, it is locally oversampled in time with respect to its local bandwidth [37, 45, 46]. Hence, there is no aliasing problem when the low frequency relevant signal parts are locally over-sampled in time at overall sub-Nyquist frequencies.


The complete procedure of choosing Frsi and resampling the selected data lying in Wi is illustrated in Figure 8.3.

[Figure content: IF Fsi < Fref: Frsi = Fsi, ELSE: Frsi = Fref; then (xrn, trn) = Resample[(xs, ts) @ Frsi].]

Figure 8.3. Flowchart of deciding Frsi for Wi.
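The decision of Figure 8.3 reduces to a one-line rule; a sketch (function name illustrative):

```python
def choose_frs(Fsi, Fref):
    """Figure 8.3: a window acquired at or above the reference rate is
    resampled at Fref; otherwise it keeps its own (lower) local rate."""
    return Fref if Fsi >= Fref else Fsi
```

For the studied example, a fast window (Fsi = 6000 Hz) is resampled at Fref = 2500 Hz, while a slow one (Fsi = 464 Hz) keeps its own rate.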

8.2.2 Adaptive Shape Windowing

The extracted parameters of Wi are passed to the parameters adaptor and window selector block (cf. Figure 8.2). It employs the extracted parameters along with the reference ones to decide Frsi and the windowing function shape for Wi. The procedure of deciding Frsi is clear from Figure 8.3. The windowing shape decision for Wi is made on the basis of the following condition.

Figure 8.3. The windowing shape decision for Wi is made on the base of following condition.

! ./

012

3 4&'5 &

2

01

1

TttTdandNNif i

end

ii

ref

i (8.8)

Here, t1i represents the first sampling instant of Wi and tendi-1 represents the last sampling instant of Wi-1. Jointly, the EASA and the window selector provide an efficient spectral leakage reduction.

Indeed, spectral leakage occurs due to the signal truncation problem, which causes to process the

non integral number of cycles in the observation interval (cf. Section 6.1.1). Usually an

appropriate smoothening (cosine) window function is employed to reduce the signal truncation.

In the proposed case, as long as the condition 8.8 is true, the leakage problem is resolved by

avoiding the signal truncation [48, 49, 145]. As no signal truncation occurs so no cosine window

is required. In this case, the window decision for the ith selected window Di is set to 1, which

drives the switch to state 1 in Figure 8.2. Otherwise an appropriate cosine window function is

employed to reduce the signal truncation problem. In this case, Di is set to 0, which drives the

switch to state 0 in Figure 8.2.
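The window-selector logic above can be sketched as follows; the function and parameter names are illustrative only, and `gap_min` stands in for the threshold of condition 8.8:

```python
import numpy as np

def select_window(n_i, n_ref, t1_i, t_end_prev, gap_min, length):
    """Return (D_i, window coefficients) for the i-th selected window.

    D_i = 1: the activity fits entirely in the window, so no truncation
    occurs and a rectangular window (all ones) suffices.
    D_i = 0: the activity was truncated, so a cosine (Hanning) window is
    applied to reduce spectral leakage.
    """
    if n_i < n_ref and (t1_i - t_end_prev) > gap_min:
        return 1, np.ones(length)      # switch state 1 in Figure 8.2
    return 0, np.hanning(length)       # switch state 0 in Figure 8.2
```

A window that reached the Nref sample limit (Ni = Nref), as for the fourth activity of the illustrative example, takes the Hanning branch.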

8.2.3 Adaptive Resolution Analysis

The proposed technique performs adaptive time-frequency resolution analysis, which is not attainable with the classical STFT. It is achieved by adapting Frsi, Li and Nri according to the local variations of x(t). Thus, the time resolution Δti and the frequency resolution Δfi of the proposed STFT can be specific to Wi; they are defined by Equations 8.9 and 8.10 respectively.

Δti = Li    (8.9)


Δfi = Frsi / Nri    (8.10)

Because of this adaptive time-frequency resolution, the proposed STFT is named the ARSTFT (adaptive resolution STFT) throughout the remainder of this chapter. The adaptive nature of the ARSTFT also leads to a drastic computational gain compared to the classical one. It is achieved, firstly, by avoiding the processing of unnecessary samples and, secondly, by avoiding the use of a cosine window function as long as condition 8.8 is true. The ARSTFT is defined by Equation 8.11.

Xi(τi, fi) = Σ (n = −Nri/2 to Nri/2 − 1) Resample[(xS, tS)]n . wni . e^(−j.2.π.fi.n)    (8.11)

Here, τi and fi are the central time and the frequency index of Wi respectively. fi is normalised with respect to Frsi. n indexes the resampled data points lying in Wi. The notation wni indicates that the window function length Li and shape (rectangle or cosine) can be specific to Wi.

In the literature there also exist valuable efforts focused on overcoming the STFT time-frequency resolution dilemma [147-149]. Their principal approach is to tackle the situation by adapting the STFT window length. In [147], the adaptive CKD (cone kernel distribution) is employed to adaptively adjust the STFT window length. In [148], a set of windows is employed and a suitable window is chosen among the set by employing some adaptation criteria. In [149], an algorithm based on confidence interval insertion (a non-parametric algorithm) is used to determine the suitable window length. Note that the features of sampling frequency, window shape and τi adaptation are not available in the cited works [147-149], which differentiates the ARSTFT from them.

8.3 Illustrative Example

In order to illustrate the ARSTFT, an input signal x(t), shown on the left part of Figure 8.4, is employed. Its total duration is 30 seconds and it consists of four active parts. A summary of the x(t) activities is given in Table 8.1.

Activity | Signal Component | Length (Sec)
1st | 0.9.sin(2.pi.50.t) | 5
2nd | 0.9.sin(2.pi.50.t) | 0.4
3rd | 0.9.sin(2.pi.200.t) | 0.5
4th | 0.9.sin(2.pi.500.t) | 1.6

Table 8.1. Summary of the input signal active parts.


Figure 8.4. The input signal (left) and the selected signal (right).

In Figure 8.4, the x-axis represents the time in seconds and the y-axis represents the input signal amplitude in volts. Table 8.1 shows that x(t) is band limited between 50 and 500 Hz. For this example, x(t) is sampled by employing a 3-bit resolution AADC. Thus, Fsmax and Fsmin become 7000 Hz and 700 Hz respectively (cf. Equations 7.8 and 7.9). Fref = 1250 Hz is chosen, which satisfies the criterion given in Section 8.2.1. The AADC amplitude range 2Vmax is chosen equal to 1.8 V. Hence, the AADC quantum q becomes 0.2571 V (cf. Equation 4.24).

The non-uniformly sampled signal obtained at the AADC output is selected by the EASA. While applying the EASA, Nref = 4096 is chosen for this example. It satisfies the criterion given in Section 5.1.2. This choice of Nref leads to six selected windows. The selected signal obtained at the EASA output is shown on the right part of Figure 8.4. The first three selected windows correspond to the first three activities and the remaining three correspond to the fourth activity. The last three selected windows are not distinguishable in Figure 8.4, because they lie consecutively on the fourth activity. The parameters of each selected window are summarised in Table 8.2.

Wi | Li (Seconds) | Fsi (Hz) | Fref (Hz) | Frsi (Hz) | Ni (Samples) | Nri (Samples)
1st | 4.99 | 700 | 1250 | 700 | 3500 | 3500
2nd | 0.39 | 700 | 1250 | 700 | 280 | 280
3rd | 0.49 | 2800 | 1250 | 1250 | 1400 | 625
4th | 0.58 | 7000 | 1250 | 1250 | 4096 | 731
5th | 0.58 | 7000 | 1250 | 1250 | 4096 | 731
6th | 0.43 | 7000 | 1250 | 1250 | 3005 | 536

Table 8.2. Summary of the selected windows parameters.

Table 8.2 exhibits the interesting features of the ARSTFT, which are achieved thanks to a smart combination of the non-uniform and the uniform signal processing tools. Fsi represents the sampling frequency adaptation, following the local variations of x(t). Ni shows that the relevant signal parts are locally oversampled in time with respect to their local bandwidths. Frsi shows the adaptation of the resampling frequency for Wi. It further adds to the ARSTFT computational gain by avoiding unnecessary interpolations during the selected data resampling process. Nri shows how the adjustment of Frsi avoids the processing of unnecessary samples during the spectral computation (cf. Equation 8.11). Li exhibits the EASA dynamic feature, which correlates the window function length with the local variations of x(t). The adaptation of Li, Frsi and Nri leads to the


adaptive time-frequency resolution of the ARSTFT. The phenomenon is clear from the values of Δti and Δfi given in Table 8.3.

Selected Window | W1 | W2 | W3 | W4 | W5 | W6
Δti (Seconds) | 4.99 | 0.39 | 0.49 | 0.58 | 0.58 | 0.43
Δfi (Hertz) | 0.2 | 2.5 | 2.0 | 1.71 | 1.71 | 2.33

Table 8.3. The selected windows time and frequency resolution.
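Equations 8.9 and 8.10 can be checked directly against Tables 8.2 and 8.3: Δti is just Li, and Δfi follows from Frsi and Nri:

```python
# Frsi (Hz) and Nri (samples) for W1..W6, taken from Table 8.2.
frs = [700, 700, 1250, 1250, 1250, 1250]
nr = [3500, 280, 625, 731, 731, 536]

# Equation 8.10: frequency resolution of each selected window.
df = [round(f / n, 2) for f, n in zip(frs, nr)]
```

This reproduces the Δfi row of Table 8.3: [0.2, 2.5, 2.0, 1.71, 1.71, 2.33] Hz.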

Table 8.3 demonstrates that the ARSTFT adapts its time-frequency resolution by following the local frequency contents of x(t). It provides a good frequency but a poor time resolution for the low frequency parts of x(t), and vice versa, which is the type of analysis best suited to most real-life signals [135].

Moreover, the ARSTFT also adapts its resolution according to the actual length of an activity. This is achieved due to the dynamic feature of the EASA, which correlates the reference window length with the signal activity lying in it (cf. Section 5.1.2). This ARSTFT feature is clear from the Δti and Δfi values of the first two selected windows in Table 8.3. Note that the only difference between the 1st and the 2nd x(t) activities is their time duration (cf. Table 8.1). As Frs1 = Frs2 and L1 > L2, so Nr1 > Nr2 (cf. Table 8.2). Therefore, Δt1 > Δt2 and Δf1 < Δf2 (cf. Equations 8.9 and 8.10).

The spectrum of each selected window obtained with the ARSTFT is plotted with respect to τi in Figure 8.5. The fundamental and the periodic spectrum peaks of each selected window are clear in Figure 8.5. As Fs1 and Fs2 both remain less than Fref, Frs1 = Fs1 and Frs2 = Fs2 are chosen. On the contrary, Fs3 to Fs6 all become greater than Fref; thus Frs3 to Frs6 are all chosen equal to Fref (cf. Table 8.2). This adaptation of Frsi for Wi can be visualised in Figure 8.5. In the proposed case, for Wi the spectrum periodic frequency fpi is equal to Frsi.

The ARSTFT also adapts the window shape (rectangle or cosine) for Wi. Condition 8.8 remains true for the first three selected windows; thus Di is set to 1. As no signal truncation occurs, no cosine window is required in this case. On the other hand, the number of samples for the fourth activity is 11200. Therefore, Nref = 4096 leads to three selected windows over the fourth activity time span. Condition 8.8 becomes false; thus Di is set to 0. As signal truncation occurs, suitable length cosine (Hanning) windows are employed to reduce this effect.

In the classical case, FS = Fref is chosen in order to satisfy the Nyquist sampling criterion for the studied x(t). Then the whole signal is sampled at 1250 Hz, regardless of its local variations. Moreover, the windowing process is not able to select only the active parts of the sampled signal. In addition, L remains static and cannot adapt to the x(t) local variations. This static nature causes the classical system to process unnecessary samples and so leads to an increased processing activity compared to the proposed one. For this example, the fixed N = 4096 leads to nine fixed L = 3.3 second windows for the total x(t) time span of 30 seconds. It results in a fixed Δt = 3.3 seconds and Δf = 0.31 Hz for all nine windows (cf. Equations 8.4 and 8.5).


Figure 8.5. The ARSTFT of the selected windows.

8.4 Computational Complexity

This section compares the computational complexity of the ARSTFT with the classical STFT.

The complexity evaluation is made by considering the number of operations executed to perform

the algorithm.

In the classical case, the sampling frequency and the window function length plus shape remain time invariant. If N is the number of samples lying in the window, then the windowing operation performs N multiplications between wn and xn (cf. Equation 8.3). While reducing the truncation effect, the windowing alters the amplitude information of the windowed segment. This effect is normally compensated by appropriately weighting the windowed segment [115, 116, 123]. The weighting operation requires a further N multiplications [123].

The spectrum of the windowed data is obtained by computing its DFT. The DFT complexity is calculated by considering the complex term involved in the DFT computation. Since each complex multiplication requires two real multiplications, the DFT performs 2.N multiplications per output frequency. Adding the results together, and taking the real and imaginary parts separately, the DFT performs 2.(N−1) additions per output sample. For large N, 2.(N−1) ≈ 2.N. Thus, the DFT computational complexity for N output frequencies becomes 2.N² additions and 2.N² multiplications. The combined computational complexity CSTFT of the STFT is given by Equation 8.12.

CSTFT = A . [ (2.N² + 2.N) Multiplications + (2.N²) Additions ]    (8.12)

Here, A is the total number of windows occurring over the observation length of x(t).


For the proposed ARSTFT, Fsi, Frsi and wni are not fixed; they are adapted for Wi according to the local variations of x(t). Due to this adaptation process, the approach locally requires some extra operations for each selected window compared to the classical one. The EASA performs 2.Ni comparisons and Ni increments for Wi (cf. Section 5.1.2). The choice of Frsi and of the window shape requires three comparisons. The selected signal is resampled before computing its DFT. The NNRI is employed for the resampling purpose. The NNRI performs Ni + 2.Nri comparisons and 2.Nri additions for resampling Wi (cf. Section 7.5.2). If Di = 0, then a cosine window function is applied to the resampled data, which performs Nri multiplications (cf. Figure 8.2). The weighting of the windowed segment performs a further Nri multiplications. The DFT performs 2.Nri² additions and 2.Nri² multiplications for Wi, in order to compute its spectrum. The combined computational complexity CARSTFT of the ARSTFT is given by Equation 8.13.

CARSTFT = Σ (i = 1 to I) [ (Ni) Increments + (3.Ni + 2.Nri + 3) Comparisons + (2.Nri + 2.Nri²) Additions + (α.2.Nri + 2.Nri²) Multiplications ]    (8.13)

Here, i = 1, 2, .., I indexes the selected windows. α is a multiplying factor: its value is 1 if Di = 0 and 0 if Di = 1. From CSTFT and CARSTFT it is clear that there are operations, like comparisons and increments, which are not common between the classical and the proposed techniques. In order to make them approximately comparable, it is assumed that a comparison or an increment has the same processing cost as an addition. Following this assumption, comparisons and increments are merged into the additions count during the complexity evaluation process. Equation 8.13 can then be simplified as follows.

CARSTFT = Σ (i = 1 to I) [ (4.Ni + 4.Nri + 3 + 2.Nri²) Additions + (α.2.Nri + 2.Nri²) Multiplications ]    (8.14)
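Equations 8.12 and 8.14 translate directly into operation counters. A sketch (the tuple layout is an illustrative choice, not the thesis notation):

```python
def c_stft(a, n):
    # Equation 8.12: classical STFT cost for A windows of N samples each.
    multiplications = a * (2 * n**2 + 2 * n)
    additions = a * (2 * n**2)
    return additions, multiplications

def c_arstft(windows):
    # Equation 8.14: ARSTFT cost; windows is a list of (Ni, Nri, alpha_i)
    # tuples, with alpha_i = 1 when Di = 0 (cosine window used), else 0.
    additions = sum(4*ni + 4*nri + 3 + 2*nri**2 for ni, nri, _ in windows)
    multiplications = sum(a * 2*nri + 2*nri**2 for _, nri, a in windows)
    return additions, multiplications
```

Since both costs are dominated by the 2.N² DFT terms, reducing Nri below N (Table 8.2) is what drives the gains of Table 8.4.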

The computational comparison of the ARSTFT with the classical one is made using the results of the illustrative example. The gains are summarized in Table 8.4.

Time Span (Seconds) | Gain in Additions | Gain in Multiplications
1st activity | 2.7 | 2.7
2nd activity | 211.0 | 214.0
3rd activity | 42.5 | 43.0
4th activity | 12.1 | 12.4

Table 8.4. Summary of the computational gains.

Table 8.4 shows the gain in additions and multiplications of the ARSTFT over the classical STFT for each x(t) activity. It demonstrates that the ARSTFT leads to a significant reduction of the total number of operations compared to the classical STFT. This reduction in operations is achieved by adapting Fsi, Frsi and wni according to the local variations of x(t).


8.5 Resampling Error

In the proposed technique, the selected non-uniform data obtained at the EASA output is resampled uniformly. This is done in order to compute its spectrum with the classical DFT. The resampling changes the properties of the resampled signal with respect to the original one. Hence, this transformation introduces an additional error into the proposed system output. Nevertheless, prior to this transformation, one can take advantage of the inherent over-sampling of the relevant signal parts in the system [45, 46]. It adds to the accuracy of the post-resampling process [37]. The causes of this resampling error and the method of quantifying it are detailed in Section 7.6.1. The mean resampling error MREi for each selected window is calculated by employing Equation 7.35. The results are summarized in Table 8.5.

Selected Window | W1 | W2 | W3 | W4 | W5 | W6
MREi (dB) | -26.3 | -26.2 | -25.9 | -24.1 | -24.1 | -23.8

Table 8.5. Mean resampling error for each selected window.

The results show that this process is accurate enough for a 3-bit AADC. For higher precision applications, the approximation accuracy can be improved by increasing the AADC resolution M and the timer frequency Ftimer, along with the interpolation order [38, 46, 127].

An increased M results in an increased number of samples to be processed and hence causes an increased computational load. Moreover, a higher order interpolator requires more operations per approximated observation and so causes an increased computational complexity. Hence, an improved accuracy can be achieved at the cost of an increased processing activity. Therefore, by making a suitable compromise between the accuracy level and the processing load, an appropriate solution should be devised for a specific application.

8.6 Conclusion

The STFT is a basic tool employed for the time-frequency characterization of non-stationary signals. The basic concept of the STFT has been reviewed. The limitation of the STFT is its fixed time-frequency resolution. To overcome this drawback an enhanced STFT has been devised.

The proposed ARSTFT is especially well suited for low activity sporadic signals. It is shown that the proposed technique adapts its sampling frequency and its window function length plus shape by following the input signal local variations. Criteria for choosing an appropriate Fref and Nref are described. A complete methodology for choosing Frsi and wni for the ith selected window has been demonstrated. It is shown that the ARSTFT adapts its time-frequency resolution by following the local variations of x(t).

The resampling error is calculated. It is shown that the achieved results are of appropriate quality

for the chosen AADC resolution. Moreover, a higher accuracy can be achieved by increasing the

AADC resolution and the interpolation order. Thus, an accuracy improvement can be achieved at

the cost of an increased computational load.


The ARSTFT outperforms the STFT. The first advantage of the ARSTFT over the STFT is its adaptive time-frequency resolution and the second is its computational gain. These smart features of the ARSTFT are achieved due to the joint benefits of the AADC, the EASA and the resampling, as they enable the adaptation of Fsi, Frsi, Ni, Nri and wni by exploiting the local variations of x(t).

The employment of fast algorithms for the spectrum computation, in place of the DFT, will further add to the computational gain of the ARSTFT. The performance comparison of the ARSTFT with the MRA techniques [135-137, 147-149], in terms of computational complexity and quality, is an area of future research.


Part-III Chapter 9 Effective Resolution of an Adaptive Rate Analog to Digital Converter

Chapter 9

EFFECTIVE RESOLUTION OF AN ADAPTIVE RATE

ANALOG TO DIGITAL CONVERTER

In the previous chapters, adaptive rate signal processing and analysis techniques have been devised. It is shown that the proposed techniques remain computationally efficient while providing comparable quality results with respect to the corresponding classical approaches. The bases of the proposed solutions are the activity acquisition and selection along with the local features extraction. They are realized by employing a smart combination of the non-uniform and the uniform signal processing tools.

The basic components employed in each proposed solution are the LCADC (LCSS based ADC), the Activity Selection Algorithm and the resampler (cf. Chapters 6, 7 and 8). They jointly adapt the data acquisition rate and the system parameters by following the input signal local characteristics. In combination, these tools form a smart adaptive rate A/D conversion process, referred to as the ARADC (Adaptive Rate ADC) throughout the remainder of this thesis.

The ADC effective resolution is a common parameter to characterize its stated performance. The resolution can be determined both quasi-statically and dynamically. Quasi-static measures include the DNL (Differential Nonlinearity) and the INL (Integral Nonlinearity). Dynamic measures include the SNR (Signal to Noise Ratio), the SFDR (Spurious Free Dynamic Range) and the NPR (Noise Power Ratio) [61]. Among the different resolution measures, the SNR is the most frequently employed for ADC characterization [60, 61]. Therefore, in this chapter the ARADC SNR is measured and employed to calculate its effective resolution.

The classical ADC SNR is reviewed in Section 9.1. Section 9.2 devises a method to measure the ARADC SNR. Simulation results are presented in Section 9.3. A criterion for properly choosing the different system parameters in order to achieve the desired effective resolution is also described. Section 9.4 finally concludes the chapter.

9.1 The SNR (Signal to Noise Ratio)

The SNR compares the level of a desired signal to the level of noise. It is defined as the ratio of

RMS (Root Mean Square) value of signal amplitude to RMS value of noise amplitude, which is

usually measured in dB (decibels). Formally it can be expressed by Equation 9.1.

SNR(dB) = 20.log10( RMS(Signal) / RMS(Noise) )    (9.1)


9.1.1 Theoretical SNR of an ADC

During the A/D conversion process, the only imprecision caused by an ideal ADC is the quantization error Qe. This error arises because the analog input signal may take any value within the ADC amplitude range, while the output is a sequence of finite precision samples [60, 61].

A complete description of computing the RMS value of Qe for an ideal ADC is given in Section 3.1. It is shown that for an ideal ADC the RMS value of Qe can be written as follows.

RMS(Qe) = q / √12    (9.2)

Substituting this value of RMS(Qe) into Equation 9.1 and solving yields Equation 9.3.

SNR(dB) = 6.02.M + 4.77 + 20.log10( RMS(Signal) / Vmax )    (9.3)

Here, M represents the ADC resolution in bits and Vmax is one half of the ADC amplitude range. Normally, the SNR is calculated by employing a monotone FS (full scale) sinusoid at the ADC input and by using a sequence of samples obtained at the converter output. In this case, Equation 9.3 can be further simplified into Equation 9.4. Here, FS indicates that the amplitude dynamic of the input sinusoid x(t) is of the order of the ADC amplitude range 2Vmax.

SNR(dB) = 6.02.M + 1.76    (9.4)

Equation 9.4 represents the theoretical SNR of an ideal M-bit ADC in the case of a FS monotone sinusoid as input. It is important to recall that RMS(Qe) is calculated over the full BWin [62, 64]. Thus, Equation 9.4 gives the ADC theoretical SNR over BWin. Here, BWin ranges between [0; Fs/2], where Fs is the sampling frequency chosen for the system.
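Equation 9.4 gives a one-line check of the numbers used later in this chapter:

```python
def snr_ideal_db(m_bits):
    # Equation 9.4: theoretical SNR of an ideal M-bit ADC for a
    # full-scale monotone sinusoid input.
    return 6.02 * m_bits + 1.76
```

For the 12-bit ideal ADC of Figure 9.1 this gives 74.0 dB, and each extra bit adds about 6.02 dB.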

9.1.2 Practical SNR of an ADC

In addition to the quantization error, a practical ADC also introduces other errors during the A/D conversion process, such as time jitter, comparator ambiguity, etc. [60].

In practice the SNR is measured from the spectrum of a windowed sequence of the ADC output samples. It is the ratio of the RMS signal amplitude to the square root of the integral of the noise power spectrum over the frequency band of interest. In this case, the noise spectrum contains contributions from all error mechanisms present during the conversion process [60, 61]. A detailed description of calculating the practical ADC SNR is given in [138].

While measuring the converter SNR by spectral analysis, the FFT noise floor should be taken into account. In fact, the spectral output of the FFT is a series of N points in the frequency domain. Here, N is the number of samples employed to compute the FFT. As the total covered frequency range is [0; Fs], the FFT resolution is Fs/N [41, 115, 116]. It shows that the FFT acts like a spectrum analyzer with a bandwidth of Fs/N, which sweeps over the spectrum. It has the


effect of pushing the noise down by an amount equal to the PGFFT (FFT process gain), which is

given by Equation 9.5 [60].

PGFFT = 10.log10( N / 2 )    (9.5)

The theoretical FFT noise floor is therefore 10.log (N/2) dB below the quantization noise floor,

because of the PGFFT. The process is clear from Figure 9.1 [60]. The FFT noise floor can be

further reduced by increasing N. Moreover, when testing ADCs using the FFT, it is important to

ensure that N is large enough so that the distortion products can be easily distinguished from the

FFT noise floor itself.

Figure 9.1. FFT output of an ideal 12-bit ADC.

Figure 9.1 illustrates the difference between the FFT and the quantization noise floor. It shows the spectrum of a 12-bit ideal ADC. The theoretical SNR of an ideal ADC can be calculated by employing Equation 9.4, and it is 74 dB. In the case under consideration, N = 4096 is employed for the FFT calculation, which results in PGFFT = 33 dB (cf. Equation 9.5). Thus, the theoretical FFT noise floor is 74 + 33 = 107 dB in this case. It demonstrates that the PGFFT should be subtracted from the measured noise floor in order to properly calculate the ADC practical signal to noise ratio SNRreal (cf. Figure 9.1).
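The noise-floor arithmetic of Figure 9.1 can be reproduced from Equations 9.4 and 9.5:

```python
import math

def fft_process_gain_db(n_fft):
    # Equation 9.5: the FFT spreads the noise over N/2 bins,
    # pushing the per-bin noise floor down by 10.log10(N/2) dB.
    return 10 * math.log10(n_fft / 2)

# Ideal 12-bit ADC analysed with a 4096-point FFT (the Figure 9.1 setting):
snr_theoretical = 6.02 * 12 + 1.76                       # 74 dB (Eq. 9.4)
fft_floor = snr_theoretical + fft_process_gain_db(4096)  # about 107 dB
```

Doubling N lowers the FFT noise floor by a further 3 dB, which is why a large N helps separate distortion products from the floor.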

9.1.3 The ADC Effective Resolution

The practically achievable ADC resolution is known as its effective resolution. It is measured in bits and is known as the ENOB (Effective Number of Bits). Knowing the SNRreal of an ADC, its ENOB can be calculated by employing the following equation [60, 61].


ENOB = ( SNRreal(dB) − 1.76 ) / 6.02    (9.6)
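Equation 9.6 is the inverse of Equation 9.4, mapping a measured SNRreal back to bits:

```python
def enob(snr_real_db):
    # Equation 9.6: effective number of bits from the measured SNR.
    return (snr_real_db - 1.76) / 6.02
```

A measured SNRreal of 74.0 dB corresponds to 12.0 effective bits; every 6.02 dB lost to conversion errors costs roughly one bit.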

9.2 The ARADC SNR

The block diagram of the ARADC is shown in Figure 9.2.

[Block diagram: Band Limited Analog Signal x(t) → LCADC → Non-Uniformly Sampled Signal (xn, tn) → ASA or EASA → Selected Signal (xs, ts) → Resampler, driven by the Resampling Frequency (Frsi) → Uniformly Sampled Signal (xrn, trn).]

Figure 9.2. Block Diagram of the ARADC. ‘___’ represents the signal flow and ‘---’ represents the

parameter flow.

The signal acquisition process dictates the performance of the complete signal processing chain. A smarter signal acquisition results in a more efficient system, and vice versa [8]. It is known that most real signals, like speech, seismic, biological, communication, Doppler, etc., are of a non-stationary nature. The classical ADCs are time-invariant, hence they are parameterized by taking into account the worst possible case for the considered application. They cannot sense the input signal local variations and capture the signal at a constant rate. This causes an increased number of samples to be processed, especially in the case of low activity sporadic signals [37-40, 43, 44-53, 145, 146]. This shortcoming is resolved, up to a certain limit, by employing the ARADC in the proposed solutions. It employs a LCADC for the signal digitization (cf. Figure 9.2). The interesting features of the LCADC are described in Chapter 4. It is shown that its signal acquisition activity is piloted by the input signal itself. Hence, it can acquire only the relevant signal parts, at adaptive rates [37-39, 43, 97, 98]. One drawback of LCADCs is that the relevant signal parts can be locally sampled at higher rates compared to the classical case [13, 37, 52]. This drawback is overcome in the proposed approach by employing the activity selection and the local features extraction process. It enables resampling the selected data at the same or lower rates compared to the classical approach (cf. Chapters 6, 7 and 8). Jointly, the LCADC, the Activity Selection and the resampler smartly adapt the sampling rate and the system parameters according to the input signal variations. This is the key to achieving the computational efficiency of the proposed techniques compared to their classical counterparts.

Figure 9.2 shows the different ARADC stages. Each stage has its impact on the overall ARADC ENOB. In order to quantify the impact of each stage, the error sources of each stage are discussed and a method to compute the SNR at each step is devised in the following subsections.


9.2.1 The LCADC SNR

9.2.1.1 Theoretical SNR of a LCADC

In the case of an ideal LCADC the sample amplitudes are known exactly, whereas their corresponding time stamps are quantized [37, 38, 43, 97, 98]. Hence, the only conversion error which occurs for an ideal LCADC is the time quantization. It shows that the A/D conversion process occurring in the LCADC is dual in nature with respect to the classical one.

A complete procedure for computing the LCADC theoretical SNR is detailed in Section 4.2; it can be calculated by employing the following equation.

SNR(dB) = 10.log10( 3.Px / Px' ) − 20.log10( Ttimer )    (9.7)

Here, Px and Px' are the powers of the input signal x(t) and of its derivative respectively. Ttimer is the LCADC timer resolution/period in seconds. It shows that in this case, the SNR does not depend on the number of quantization levels but on the x(t) characteristics and on Ttimer.

In the case of a pure sinusoid, Equation 9.7 can be simplified as follows.

SNR(dB) = −11.19 − 20.log10( fsig . Ttimer )    (9.8)

Equation 9.8 shows that for a monotone sinusoid of given amplitude, the SNR is related to the ratio fsig / Ftimer. Here, Ftimer = 1 / Ttimer is the timer frequency and fsig is the input sinusoid frequency. Hence, for a fixed fsig the SNR of an ideal LCADC depends only upon Ttimer. Doubling Ftimer halves the quantization noise on a per sample basis, which is equivalent to an increase in resolution of one bit [37, 95]. Note that an increase in Ftimer is not an increase in the LCADC sampling frequency, but only an increase in its timer resolution. Hence, with the increase in Ftimer the number of acquired samples remains the same, but their acquisition accuracy is improved. On the contrary, in the classical ADCs, obtaining an extra bit of resolution requires quadrupling the sampling rate and hence a fourfold increase in the number of acquired samples [60].
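The one-bit-per-doubling behaviour follows directly from Equation 9.8; a small numerical check (the 500 Hz tone and 1 µs timer period are arbitrary example values, not from the thesis):

```python
import math

def lcadc_snr_db(f_sig, t_timer):
    # Equation 9.8: theoretical LCADC SNR for a monotone sinusoid.
    return -11.19 - 20 * math.log10(f_sig * t_timer)

# Halving Ttimer (doubling Ftimer) improves the SNR by 20.log10(2) dB,
# i.e. about 6.02 dB, the equivalent of one extra bit of resolution.
gain = lcadc_snr_db(500, 0.5e-6) - lcadc_snr_db(500, 1e-6)
```

The gain is independent of fsig: it depends only on the ratio between the two timer periods.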

9.2.1.2 Practical SNR of a LCADC

Theoretically, the LCADC SNR can be improved as far as required by reducing Ttimer. In practice, however, there is a limit imposed by the accuracy of the analog blocks [37, 95]. In fact, the analog blocks determine the precision of the threshold levels. If these levels are known with an uncertainty a, then this error must be added to the quantization noise in Equations 9.7 and 9.8, which results in an SNR degradation [37, 95].

Usually, the ADC practical SNR is calculated by employing spectral analysis [138]. The sampled data obtained at the LCADC output are non-uniformly distributed in time. Hence, their spectrum cannot be properly computed by employing the classical tools. In the literature, several methods have been developed for the spectral analysis of non-uniformly sampled data, such as the GDFT (General Discrete Fourier Transform) and Lomb's algorithm. In Chapter 6, the performances of the GDFT and of Lomb's algorithm are studied for the case of a level crossing

Saeed Mian Qaisar Grenoble INP 145


sampled signal. It is shown that these methods are erroneous because of the presence of wideband spectral noise. Hence, they cannot provide a proper value of the LCADC practical SNR.

The spectrum analysis method proposed in Chapter 6 outperforms the GDFT and Lomb's algorithm in terms of spectral quality and computational complexity. But it requires a uniform resampling of the non-uniform selected data, obtained with the activity selection algorithm [44]. The resampling process changes the properties of the resampled data compared to the original ones [121, 122]. Thus, the spectrum obtained with this approach does not provide an accurate measure of the LCADC practical SNR either, because the resampling error is present in the obtained spectrum along with the LCADC conversion error.

In the context of the above discussion, a novel approach is proposed for the LCADC practical SNR measurement. It does not require a frequency domain transformation and calculates the SNR directly in the time domain. A detailed description of the proposed approach is given as follows.

A practical ADC is characterized by employing a monotone sinusoid [60, 61]. Hence, a similar signal, given by Equation 9.9, is employed in this case.

x(t) = A·sin(2·π·fsig·t + φ) (9.9)

Here, A is the amplitude, fsig is the frequency and φ is the initial phase. For ease of understanding, φ = 0 is considered. According to Chapter 2, the sampling instants of a level crossing sampled signal are defined as follows.

tn = tn-1 + dtn (9.10)

dtn = tn - tn-1 (9.11)

Here, tn is the current sampling instant, tn-1 is the previous one and dtn is the time delay between

the current and the previous sampling instants.

In the case of a mono harmonic signal it is possible to analytically calculate the level crossing

instants [40]. Thus, in this case tn and dtn can be calculated by employing Equations 9.12 and 9.13

respectively.

tn = (1 / (2·π·fsig)) · arcsin(levelm / A) (9.12)

dtn = (1 / (2·π·fsig)) · [arcsin(levelm / A) - arcsin(levelm-1 / A)] (9.13)

Here, levelm and levelm-1 are the mth and the (m-1)th level crossing thresholds. The amplitude of the nth level crossing sample xn can be calculated by employing Equation 9.14.

xn = levelm (9.14)
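On the first rising quarter-cycle of the sinusoid, Equations 9.12 and 9.13 can be evaluated directly; a minimal sketch, assuming uniformly placed thresholds spaced by the quantum of Equation 9.31:

```python
import math

# Analytic level-crossing instants of x(t) = A*sin(2*pi*fsig*t) on its first
# rising quarter-cycle, where arcsin directly gives the crossing phase.
A, fsig, M = 0.9, 2300.0, 3
q = 2 * A / (2**M - 1)                       # quantum (cf. Eq. 9.31)

levels = [m * q for m in range(4)]           # thresholds 0, q, 2q, 3q (all < A)
t = [math.asin(lv / A) / (2 * math.pi * fsig) for lv in levels]   # Eq. 9.12
dt = [t[m] - t[m - 1] for m in range(1, len(t))]                  # Eq. 9.13

# dt_n grows toward the peak, where the signal slope decreases.
print([round(d * 1e6, 2) for d in dt])       # inter-sample delays in microseconds
```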


Hence, by employing Equations 9.12 to 9.14, an ideal LCSS is first implemented for x(t). It provides the exact knowledge of both the time and amplitude values of the obtained samples.

The only error occurring in the ideal LCADC is the time quantization [37, 38, 43]. By assuming that the time error Δt is uncorrelated with the input signal, it is modeled as a white noise. If Δtn is the time quantization error occurring for tn, then it can randomly take a value between 0 and Ttimer (cf. Expression 4.1). Thus, tqn (the quantized version of tn) can be obtained by employing Equation 9.15.

tqn = tn + Δtn (9.15)

The time quantization also affects the amplitude value of the corresponding level crossing sample. In the studied case, the erroneous sample amplitude can be computed as follows.

xqn = A·sin(2·π·fsig·tqn) (9.16)

(tn, xn) represents the time-amplitude pair of the nth level crossing sample, obtained with the LCSS, whereas (tqn, xqn) represents the time-amplitude pair obtained with the ideal LCADC. The ideal LCADC conversion error per sample point Cqn is given by the absolute difference between xn and xqn. It can be expressed formally by Equation 9.17.

Cqn = |xn - xqn| (9.17)

The RMS (Cq) for N level crossing samples can be calculated by employing Equation 9.18.

RMS(Cq) = sqrt( (1/N) · Σn=1..N Cqn² ) (9.18)

Finally the SNR of an ideal LCADC can be calculated by employing Equation 9.19.

SNR(dB) = 20·log10( RMS(Signal) / RMS(Cq) ) (9.19)
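The measurement procedure of Equations 9.15 to 9.19 can be sketched numerically. The dense-grid crossing search below stands in for the analytic instants of Equation 9.12, and only interior thresholds are used (avoiding the tangency at ±A); both are simplifications of this sketch, so the measured SNR only approximates the value predicted by Equation 9.8 for Ttimer = 1 µs:

```python
import numpy as np

rng = np.random.default_rng(0)
A, fsig, M, Ttimer, K = 0.9, 2300.0, 3, 1e-6, 50

q = 2 * A / (2**M - 1)                       # quantum, Eq. 9.31
levels = -A + q * np.arange(1, 2**M - 1)     # interior thresholds only

t = np.arange(0.0, K / fsig, 1e-8)           # dense time grid over K cycles
x = A * np.sin(2 * np.pi * fsig * t)

tn, xn = [], []
for lv in levels:
    s = x - lv
    i = np.where(s[:-1] * s[1:] < 0)[0]      # sign change -> crossing in (t[i], t[i+1])
    tc = t[i] - s[i] * (t[i + 1] - t[i]) / (s[i + 1] - s[i])   # linear refinement
    tn.append(tc)
    xn.append(np.full_like(tc, lv))          # exact sample amplitude, Eq. 9.14
tn, xn = np.concatenate(tn), np.concatenate(xn)

dt = rng.uniform(0.0, Ttimer, tn.size)       # time quantization error in [0; Ttimer[
xq = A * np.sin(2 * np.pi * fsig * (tn + dt))                    # Eq. 9.16
Cq = np.abs(xn - xq)                                             # Eq. 9.17
snr = 20 * np.log10((A / np.sqrt(2)) / np.sqrt(np.mean(Cq**2)))  # Eqs. 9.18-9.19
print(round(snr, 1))
```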

In the case of a real LCADC, there also exists an error due to the threshold levels ambiguity [37, 43]. If an is the error introduced by the quantization levels ambiguity into xqn, then the nth erroneous level crossing sample amplitude xen contains the effect of both Δtn and an, and it can be calculated by employing Equation 9.20.

xen = xqn ± an (9.20)

The real LCADC conversion error per sample point Cen is given by the absolute difference

between xn and xen. It can be expressed formally by Equation 9.21.

Cen = |xn - xen| (9.21)

The RMS (Ce) for N level crossing samples can be calculated by employing Equation 9.22.


RMS(Ce) = sqrt( (1/N) · Σn=1..N Cen² ) (9.22)

Finally, the real LCADC SNR can be calculated as follows.

SNR(dB) = 20·log10( RMS(Signal) / RMS(Ce) ) (9.23)

9.2.2 The Activity Selection Algorithm SNR

In the case of the ARADC, the level crossing sampled signal obtained at the LCADC output is selected and windowed by the ASA/EASA [44-51, 145, 146]. The activity selection based windowing can adapt its length and shape according to the input signal properties (cf. Chapters 6 and 8). For a monotone sinusoid, the activity selection algorithm parameters can be easily adjusted to avoid the signal truncation problem. In this case, the windowing is performed with the adaptive length rectangular window function (cf. Chapters 6 and 8), which has no impact on the spectral peaks amplitude. Hence, for the case under consideration, the employed ASA/EASA block has no impact on the ARADC output resolution. It just selects the relevant parts of the LCADC output and passes them to the resampler block (cf. Figure 9.2).

9.2.3 The Resampler SNR

The resampling process requires interpolation, which changes the properties of the resampled signal compared to the original one. The change in properties depends upon the type of interpolation technique used for the resampling [121, 122].

For the practical LCADC, there exist uncertainties in the time-amplitude pairs of the level crossing samples (cf. Section 9.2.1). These uncertainties accumulate in the interpolation process and deliver the overall error at the ARADC output.

If (trn, xrn) represents the time-amplitude pair of the nth interpolated sample, then the nth reference sample amplitude xon, which should be obtained by sampling x(t) at trn, can be calculated by employing the following equation.

xon = A·sin(2·π·fsig·trn) (9.24)

The error per interpolated observation Ien is given by the absolute difference between xon and xrn. It can be expressed formally as follows.

Ien = |xon - xrn| (9.25)

The RMS (Ie) can be calculated by employing Equation 9.26.


RMS(Ie) = sqrt( (1/N) · Σn=1..N Ien² ) (9.26)

Finally, the SNR of the resampled signal can be calculated by employing Equation 9.27.

SNR(dB) = 20·log10( RMS(Signal) / RMS(Ie) ) (9.27)
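The resampler scoring of Equations 9.24 to 9.27 can be sketched with NumPy: non-uniform time-amplitude pairs are resampled on a uniform grid by nearest-neighbour and linear interpolation, and each result is scored. The randomly drawn instants below are a stand-in for real level-crossing instants, an assumption of this sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
A, fsig = 0.9, 2300.0
tn = np.sort(rng.uniform(0.0, 10 / fsig, 400))    # stand-in non-uniform instants
xn = A * np.sin(2 * np.pi * fsig * tn)            # exact sample amplitudes

tr = np.linspace(tn[0], tn[-1], 1000)             # uniform resampling grid
xo = A * np.sin(2 * np.pi * fsig * tr)            # reference samples, Eq. 9.24

xr_lin = np.interp(tr, tn, xn)                    # linear interpolation
nearest = np.abs(tr[:, None] - tn[None, :]).argmin(axis=1)
xr_nnr = xn[nearest]                              # nearest-neighbour resampling

def snr_db(xr):
    Ie = np.abs(xo - xr)                          # Eq. 9.25
    return 20 * np.log10((A / np.sqrt(2)) / np.sqrt(np.mean(Ie**2)))  # Eqs. 9.26-9.27

print(round(snr_db(xr_nnr), 1), round(snr_db(xr_lin), 1))
```

For the same input data, the linear interpolator scores several dB above the nearest-neighbour one, consistent with the behaviour reported in Section 9.3.3.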

9.3 The Simulation Results

In the previous chapters it is shown that the proposed system resolution varies for the different frequencies within a signal band: the resolution is higher for lower frequencies and vice versa. In order to determine the system resolution for a specific application, it is desirable to characterize the input signal with a single frequency. In Chapter 4, it is shown that for a base-band signal of known bandwidth and uniform spectral density, the system resolution is equivalent to that of a single tone whose frequency is 1/√3 of the bandwidth (cf. Section 4.2.1.2). This equivalence, although only mathematical, helps us determine the proposed technique performance for different classes of signals. Following this idea, a monotone sinusoid of 2300 Hz frequency is employed in the studied case. It can characterize the proposed system resolution for an input signal whose bandwidth lies between DC and 4000 Hz. Speech is an example of such a case.

The employed input is x(t) = Vmax·sin(2·π·2300·t), where Vmax = 0.9 V is chosen. According to [45, 46], the LCADC sampling frequency for a monotone sinusoid can be defined as follows.

FsLC = 2·fsig·(2^M - 1) (9.28)

Equation 9.28 shows that for a fixed fsig, FsLC is a function of M. Hence, for a given number K of x(t) cycles, the number of level crossing samples NLC increases with M. According to Section 9.1.2, the accuracy of the measured SNR also depends upon the number of converted samples involved in this process. Thus, a lower bound on the number of samples Nmin = 8192 is chosen and then, for a given M, an appropriate K is chosen which satisfies the condition NLC ≥ Nmin.
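The bookkeeping above can be sketched as follows, using NLC = K·FsLC/fsig for the number of crossings in K cycles (an assumption of this sketch; the thesis tabulates slightly larger NLC values):

```python
fsig, Nmin = 2300.0, 8192

def pick_K(M):
    """Smallest K (input cycles) such that NLC = K * FsLC / fsig >= Nmin."""
    FsLC = 2 * fsig * (2**M - 1)          # Eq. 9.28
    K = 1
    while K * FsLC / fsig < Nmin:
        K += 1
    return K, FsLC

K, FsLC = pick_K(3)
print(K, FsLC)    # -> 586 32200.0
```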

9.3.1 The SNR of an Ideal LCADC

The SNR of an ideal LCADC is measured for different values of M and Ttimer, by employing the method discussed in Section 9.2.1. In the first case, Ttimer is fixed to 1 µs and M is varied between [3; 12] bits. The obtained results are summarized in Table 9.1.

The results in Table 9.1 show that the LCADC SNR is independent of M, which is in coherence with the theoretical SNR formula given by Equation 9.8.


K      M    q (V)      NLC     FsLC (Hz)    SNR (dB)
1000   3    0.257      14000   32200        41.1
500    4    0.12       15000   69000        41.3
250    5    0.0581     15500   142600       41.3
125    6    0.0286     15750   289800       41.4
65     7    0.0142     16510   584200       41.4
40     8    0.0071     20400   1173000      41.4
20     9    0.0035     20440   2350600      41.4
10     10   0.0018     20460   4705800      41.4
10     11   8.8e-004   40940   9416200      41.4
10     12   4.4e-004   81900   18837000     41.4

Table 9.1. The ideal LCADC SNR for Ttimer = 1 µs and variable M.

As FsLC increases with the increase in M, the number of samples for a given K also increases with M (cf. Equation 9.28). A question arising from this statement is that, since the modeled Δtn is equally probable to occur within [0; Ttimer[, the error impact due to the time quantization should increase with the increase in M. But the SNR results summarized in Table 9.1 negate this concept. This conflict can be resolved by reconsidering the impact of Δtn on the corresponding sample amplitude, given by the following equation.

Δan = sn·Δtn (9.29)

Here, Δan is the translation of the Δtn impact onto the nth level crossing sample amplitude, and sn is the signal slope between the (n-1)th and the nth level crossing samples; it can be computed as follows.

sn = (xn - xn-1) / dtn (9.30)

In the studied case, the LCADC thresholds are uniformly placed. Therefore, its quantum q is

unique and can be defined by Equation 9.31 [37, 38, 43, 97, 98].

q = 2·Vmax / (2^M - 1) (9.31)

Hence, in this case Equation 9.30 can be simplified as follows.

sn = q / dtn (9.32)

Equations 9.31 and 9.32 answer the above question. They show that for a fixed 2Vmax, an increase in M increases NLC on the one hand, while on the other hand it decreases sn. Thus, for the same Δtn, Δan decreases by increasing M. Therefore, while increasing M, the time quantization error is distributed over more samples, which keeps its overall impact on the LCADC SNR the same.


For a fixed fsig, the LCADC theoretical SNR varies as a function of Ttimer (cf. Equation 9.8). In order to demonstrate this statement, simulations are performed for a fixed fsig = 2300 Hz and by varying Ttimer between [2^2; 2^-5] µs. Table 9.1 shows that the LCADC SNR is independent of M. Hence, the simulations are performed for fixed M = 3 and K = 1000. The parameters q = 0.257 V, NLC = 14000 and FsLC = 32200 Hz are also fixed in this case. The obtained results are summarized in Table 9.2.

Ttimer (µs)   SNRLCADC-Th (dB)   SNR (dB)
2^2           29.53              29.24
2^1           35.55              35.33
2^0           41.58              41.21
2^-1          47.60              47.44
2^-2          53.62              53.44
2^-3          59.64              59.26
2^-4          65.66              65.37
2^-5          71.66              71.35

Table 9.2. The ideal LCADC SNR for M = 3 and variable Ttimer.

In Table 9.2, SNRLCADC-Th represents the theoretical SNR, computed for the given parameters by employing Equation 9.8. These results show the accordance between the theoretical and the simulated values, which verifies the authenticity of the proposed SNR measurement approach.

The SNR values show that for a fixed fsig and in the absence of a, the LCADC SNR is incremented by 6 dB for each halving of Ttimer. The upper bound on the achievable SNR is posed by the simulating system accuracy.

Although the LCADC SNR is independent of M, an appropriate choice of M is still required. It should be taken large enough to ensure a proper reconstruction of the acquired signal [37, 38, 43, 97]. This phenomenon is already discussed in Chapter 4 (cf. Section 4.2.2).

9.3.2 The SNR of a Practical LCADC

In the case of a practical LCADC, the threshold levels ambiguity error a also occurs along with the time quantization error Δt. An appropriate method of modeling Δt is described in Section 9.3.1. The modeling of a is not straightforward; it depends upon the circuit architecture and the technology employed for its implementation.

A study of a for different LCADC implementations is out of the scope of this chapter. Here, the example of the AADC [43] is taken into account. In this case, the threshold levels are generated with a DAC (D/A Converter) (cf. Figure 4.13). Hence, for the study purpose, a 3-bit DAC is implemented in the Cadence circuit design tool, using the STMicroelectronics 0.13-µm CMOS technology. The DAC consists of a 3-to-8 decoder, a switch network and a resistor network, as shown in Figure 9.3.


[Figure: a 3-to-8 decoder driving a switch network and a resistor network; digital inputs S1, S2, S3; analog output vout.]

Figure 9.3. 3-bit DAC block diagram.

In Figure 9.3, S1, S2 and S3 are the digital inputs and vout is the analog output voltage, which is proportional to the digital signal combination applied at the input. The time delay of the proposed DAC is calculated for different input combinations and on average it is 144.25 ps. The DAC static power consumption Pstat depends on the input combination: it ranges between 65.3 nW and 68.03 nW for the combinations 000 to 111, respectively, with an average Pstat of 66.87 nW. In the studied case, the inputs are chosen in such a way that vout reaches all its states within 0 to 4 ns. Hence, the DAC dynamic energy consumption Edyn is measured for this time period and it is 3.32 pJ. The corresponding dynamic power Pdyn, calculated from Edyn, is 0.763 mW.

a mainly occurs because of the process and mismatch variations introduced during the circuit fabrication. Hence, the effect of the process and mismatch variations on vout is studied for different input combinations by employing Monte Carlo simulation. It is found that due to the process variations vout varies within ±0.21% of its nominal value, and due to the mismatch variations it varies within ±0.11%. Finally, the vout variation due to the combined effect of process and mismatch is calculated and it is ±0.23% of vout. Following this, a is chosen equal to ±0.23% of xn and xen is computed by employing Equation 9.20. Finally, the LCADC real SNR is computed by employing Equation 9.23.
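The Monte Carlo study above can be sketched in miniature. The resistor-string topology and the 0.1% unit-resistor sigma below are illustrative assumptions of this sketch; the ±0.21%/±0.11% figures quoted in the text come from the foundry process and mismatch models:

```python
import numpy as np

rng = np.random.default_rng(2)
Vref, sigma_rel, trials = 1.8, 0.001, 2000    # assumed 0.1 % unit-resistor spread

worst = 0.0
for code in range(1, 8):                      # 3-bit codes 001..111 (000 gives vout = 0)
    R = 1.0 + sigma_rel * rng.standard_normal((trials, 8))   # 8 unit resistors
    vout = Vref * R[:, :code].sum(axis=1) / R.sum(axis=1)    # resistor-string tap
    nominal = Vref * code / 8
    worst = max(worst, np.abs(vout - nominal).max() / nominal)

print(round(100 * worst, 3), "% worst-case vout deviation")
```

Under these assumptions the worst-case deviation stays well below 1% of vout, of the same order as the ±0.23% combined variation adopted for a.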

The simulations are performed for M = 3 and by varying Ttimer between [2^2; 2^-5] µs. The remaining parameters are kept the same as those employed in the case of the ideal LCADC. The obtained results are summarized in Table 9.3.

Ttimer (µs)   SNRLCADC-Th (dB)   SNRreal (dB)   ENOBLCADC
2^2           29.53              29.39          4.59
2^1           35.55              35.30          5.57
2^0           41.58              41.16          6.54
2^-1          47.60              47.05          7.52
2^-2          53.62              52.88          8.49
2^-3          59.64              56.71          9.13
2^-4          65.66              58.93          9.50
2^-5          71.66              59.45          9.58

Table 9.3. The LCADC real SNR for M = 3 and variable Ttimer.

In Table 9.3, ENOBLCADC is the LCADC effective number of bits, calculated by employing a relation similar to Equation 9.6. In a practical LCADC, the conversion error mainly consists of Δt and a. Table 9.3 demonstrates how a limits the ENOBLCADC. In the studied case, for the higher Ttimer values ([2^2; 2^-2] µs) the major error occurs because of Δt and the employed a has a minor impact on SNRreal. On the contrary, with a further reduction of Ttimer, the error occurring because of a becomes significant compared to the error introduced by Δt. Hence, for lower Ttimer, a is the main limiting factor on SNRreal. For the employed a, the limit on the


achievable SNRLCADC is around 59 dB, reached for Ttimer = 2^-5 µs. A further reduction of Ttimer will not introduce a noteworthy gain in the LCADC SNRreal, unless an appropriate reduction of a is also achieved.
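The ENOB column of Table 9.3 follows the standard SNR-to-ENOB relation ENOB = (SNR - 1.76) / 6.02 (cf. Equation 9.6); a quick check against three of the tabulated values:

```python
def enob(snr_db):
    # Standard relation (cf. Eq. 9.6): ENOB = (SNR - 1.76) / 6.02
    return (snr_db - 1.76) / 6.02

for snr in (29.39, 41.16, 59.45):
    print(round(enob(snr), 2))    # -> 4.59, 6.54, 9.58, matching Table 9.3
```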

9.3.3 The SNR of an ARADC

The non-uniformly sampled signal obtained at the LCADC output is selected and windowed by the EASA. Nref is chosen in such a way that it remains greater than NLC for all the considered values of M. Hence, the EASA behaves like an adaptive length rectangular windowing process and therefore has no impact on the ARADC SNR (cf. Section 9.2.2).

The selected signal obtained at the EASA output is resampled uniformly. The resampling is performed by employing interpolation. A large range of interpolation functions is available: nearest neighbor, sample & hold, linear, polynomial, spline, etc. As computationally efficient solutions are targeted, the NNR (Nearest Neighbor Resampling) and the linear interpolations are employed for the resampling purpose.

The SNR of the uniformly sampled data obtained in the cases of the NNR and the linear interpolations is calculated by employing the method discussed in Section 9.2.3. It is performed by varying M between [3; 8] and Ttimer between [2^2; 2^-5] µs. The obtained results are plotted in Figures 9.4 and 9.5, for the NNR and the linear interpolations respectively.

Figures 9.4 and 9.5 show that for a fixed Ttimer the resampled data SNR increases with the increase in M. The reason is that, for any kind of employed interpolation, the upper bound on Ien is posed by q [44]. Here, Ien is the interpolation error introduced per resampled observation (cf. Equation 9.25) and q is the LCADC quantum (cf. Equation 9.31). From Equation 9.31 it is clear that an increase in M causes a reduction of q, which consequently results in an increased SNR of the resampled output.

For a fixed M and Ttimer, the ARADC SNR depends upon the type of interpolation employed for the resampling purpose. Figures 9.4 and 9.5 show that for a given M, if Ttimer is reduced then the interpolation order must also be increased in order to take full advantage of the reduced Ttimer. Moreover, for a fixed M and Ttimer, incrementing the interpolation order beyond a certain level does not improve the ARADC SNR significantly.

The above discussion shows that the ARADC conversion error mainly consists of Δt, a and Ie. Once the threshold levels are established, a remains constant [37]. For a given a, once an appropriate Ttimer is decided, the next step is an appropriate choice of M and of the interpolation order. M should be chosen in such a way that it assures a proper signal reconstruction. Then, for the given a, Ttimer and M, a suitable interpolation order should be employed. Ideally, this choice should be made in such a way that the resampler does not pose any impact on its input SNR.

While employing the linear interpolation, M = 8 is sufficient in order to approach the upper SNRreal bound (cf. Figure 9.5). For M = 8, the limit imposed by the time-amplitude pair uncertainties is approximately reached; hence a further increase in M will not significantly improve the ARADC performance. On the other hand, in the NNR interpolation case, M = 10 would be required in order to achieve the upper SNRreal bound (cf. Figure 9.4). It follows that for certain Δt and a, the upper achievable SNRreal bound can be obtained for a lower M by increasing the interpolation order. As


an example, the same SNRreal bound can be achieved for M = 4 and Ttimer = 2^-5 µs, while resampling the data with a fourth order interpolator.

[Figure: SNR (dB) versus number of bits; one curve per Ttimer value: 0.5, 0.25, 0.125, 0.0625 and 0.0313 µs.]

Figure 9.4. The ARADC SNR curves obtained with the NNR interpolation, for Ttimer = {2^2, 2^1, ..., 2^-5} µs and by varying M between [3; 8] for each value of Ttimer.

[Figure: SNR (dB) versus number of bits; one curve per Ttimer value: 0.5, 0.25, 0.125, 0.0625 and 0.0313 µs.]

Figure 9.5. The ARADC SNR curves obtained with the linear interpolation, for Ttimer = {2^2, 2^1, ..., 2^-5} µs and by varying M between [3; 8] for each value of Ttimer.

In fact, the resampling error has two sources: the truncation error and the rounding error [37, 133]. The rounding error occurs because of the uncertainties of the level crossing samples time-amplitude pairs. The truncation error occurs because of the finite interpolator order employed for the resampling purpose. Hence, for a chosen Ttimer and a, the rounding error becomes constant, while the truncation error reduces with the increase in the interpolation order. This is the reason


that, for an appropriate Ttimer and a, a higher ARADC SNR can be achieved by employing a higher order interpolator.

While reducing the truncation error, an increase in the interpolation order also increases the system computational complexity and the memory requirements. Moreover, once the limit posed by the rounding errors is reached, a further reduction of the truncation error will not improve the ARADC performance [140]. Hence, for a targeted application, an appropriate M should be chosen which ensures the proper signal reconstruction [37, 38, 43]. Then, suitable Ttimer and a should be achieved, which assure the targeted SNRreal. Finally, an appropriate interpolation order should be employed, which keeps the system computationally efficient while not much affecting the upper SNRreal bound for the chosen parameters.

In order to compare the ARADC performance with that of the classical ADC, their SNR curves are plotted in Figure 9.6. The SNR values in the classical case are obtained by employing Equation 9.4.

[Figure: SNR (dB) versus number of bits; curves SNR-ADC, SNR-ARADC-NNR and SNR-ARADC-Linear.]

Figure 9.6. The SNR curves obtained in the cases of the classical ADC and the ARADC by varying M between [3; 8].

In Figure 9.6, SNRARADC-NNR and SNRARADC-Linear represent the SNRreal obtained in the cases of the NNR and the linear interpolations, respectively. The results show that in the studied case, for each value of M, SNRARADC-NNR and SNRARADC-Linear remain higher than the corresponding classical values. It demonstrates that for an appropriate choice of Ttimer, a and the interpolation order, a higher ENOB can be achieved for a given M in the case of the ARADC compared to the classical ADC.

The above results demonstrate the attractiveness of the ARADC compared to the classical ADC. The principal advantage of the ARADC is to minimize the analog circuitry that acquires the input signal, while providing a higher resolution output by employing some digital post-processing on the level crossing acquired data. Reducing the analog circuitry at the front end makes it possible to decrease the chip area and the power consumption [37, 38, 43, 97]. On the other hand, advances in integrated circuit technology, such as the reduced cell size, make the use of digital circuits more effective at a reduced cost. In this context, performing the majority of the processing functions in the digital domain becomes more attractive. This is the reason why, in the ARADC, the


computational power offered by the digital domain is employed to generate a higher resolution output from the time-amplitude level crossing sample pairs at its input.

9.4 Conclusion

The ARADC forms the base of the proposed signal processing and analysis solutions devised in the previous chapters. It is especially suitable for low activity sporadic signals like seismic, electrocardiogram, phonocardiogram, etc. For such signals, the ARADC leads the proposed solutions to achieve a drastic computational gain compared to their classical counterparts (cf. Chapters 6, 7 and 8).

The A/D conversion process is usually characterized in terms of its effective resolution, which is measured by employing the converted output SNR. In this context, a method to compute the ARADC SNR has been devised. It is shown that the results obtained with the proposed method are in coherence with the theoretical ones, which verifies the correctness of the proposed approach.

It has been demonstrated that for an appropriate choice of Ttimer, a and the interpolation order,

the ARADC achieves higher effective resolution compared to the classical ADC for each

employed value of M.

For a targeted application, an appropriate set of parameters (M, Ttimer, a and the interpolation order) should be found, which provides an attractive trade-off between the system computational complexity and the delivered output quality, while ensuring the proper signal reconstruction.

It is known that for a suitable choice of M, the LCADC locally oversamples the relevant signal parts [38, 43, 45, 46, 52]. The LCADC and the activity selection algorithm adapt the complete ARADC sampling rate. Hence, the ARADC behaves as an oversampling converter. It hints that by introducing a digital decimator at the resampler output, the ARADC effective resolution can be further improved (cf. Section 3.3.2). An adaptive rate digital decimator can be efficiently realized by employing an appropriate one among the adaptive rate filtering techniques described in Chapter 7. The implementation of this idea is a future task.


Part-III Chapter 10 Performance of the Proposed Techniques for Real Signals

Chapter 10

PERFORMANCE OF THE PROPOSED TECHNIQUES FOR

REAL SIGNALS

In the previous chapters, the smart features of the proposed techniques have been demonstrated with the help of examples. For the ease of understanding, simple time-varying signals were employed. Usually, real life signals are more complex. Hence, the effectiveness of the proposed techniques for real signals is evaluated in this chapter.

The performance of the proposed adaptive rate filtering techniques is studied for a speech application in Section 10.1. A comparison among the proposed filtering techniques and with the classical one is made in terms of computational complexity and filtering quality. In Section 10.2, the proposed adaptive resolution short-time Fourier transform is compared with the classical one for a chirp-like signal. It is done in terms of time-frequency resolution adaptation, computational complexity and output quality. The adaptive rate analog to digital converter performance is compared to that of the classical converter for a speech signal acquisition in Section 10.3. Section 10.4 finally concludes the chapter.

10.1 The Signal Driven Adaptive Rate Filtering

In order to evaluate the performances of the proposed adaptive rate filtering techniques described in Chapter 7, a speech signal x(t), shown in Figure 10.1-a, is employed. The reason for employing a speech signal is its interesting time-varying nature [112, 142]. According to [141], during a conversation the speech activity is 25% of the total dialog time. Therefore, it is an interesting signal to acquire and process with the proposed methods [44-51, 145, 146]. x(t) is a 1.6 seconds, [50; 4000] Hz band-limited signal, corresponding to a three word sentence.

The EARD, the EARR and the ARDI are the enhanced versions of the proposed filtering techniques (cf. Section 7.7). Hence, the evaluation is performed only for these techniques.

The goal is to determine the pitch (fundamental frequency) of x(t) in order to find the speaker’s gender. For a male speaker, the pitch lies within the frequency range [100; 150] Hz, whereas for a female speaker it lies within [200; 300] Hz [112]. The reference frequency is chosen as Fref = 16 kHz, a common sampling frequency for speech. Among the different LCADCs described in Chapter 4, the AADC is employed for acquiring the speech signal [43]. A 4-bit resolution AADC is used for digitizing x(t); therefore, Fsmin = 1.5 kHz and Fsmax = 120 kHz (cf. Equations 7.8 and 7.9). The AADC amplitude range is set to 2Vmax = 1.8 V, which leads to a quantum q = 0.12 V. The amplitude of x(t) is normalized to 0.9 V in order to avoid AADC saturation.
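These figures follow from the AADC relations used throughout this work. A minimal sketch, assuming q = 2Vmax/(2^M - 1) (Equation 4.24) and sampling-frequency bounds of the form 2·f·(2^M - 1) (Equations 7.8 and 7.9):

```python
def aadc_params(m_bits, vmax, fmin, fmax):
    """Quantum and sampling-frequency bounds of an M-bit AADC.

    q     : level-crossing quantum, 2Vmax / (2^M - 1)   (Equation 4.24)
    Fsmin : 2 * fmin * (2^M - 1)                        (Equation 7.8)
    Fsmax : 2 * fmax * (2^M - 1)                        (Equation 7.9)
    """
    levels = 2 ** m_bits - 1        # number of quanta spanning the range
    q = 2.0 * vmax / levels
    fs_min = 2.0 * fmin * levels
    fs_max = 2.0 * fmax * levels
    return q, fs_min, fs_max

# 4-bit AADC over the [50; 4000] Hz speech band, 2Vmax = 1.8 V
q, fs_min, fs_max = aadc_params(4, 0.9, 50.0, 4000.0)
# q = 0.12 V, Fsmin = 1.5 kHz, Fsmax = 120 kHz, as in Section 10.1
```

The same relations reproduce the 3-bit case of Section 10.2 (q = 0.257 V, Fsmin = 140 Hz, Fsmax = 7 kHz) and the 6-bit case of Section 10.3.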

Saeed Mian Qaisar Grenoble INP 157


Part-III Chapter 10 Performance of the Proposed Techniques for Real Signals

The studied signal is part of a conversation, and during a dialog the speech activity is 25% of the total dialog time [141]. A classical filtering system would remain active during the whole time span, whereas the proposed filtering systems remain active only during 25% of the dialog time, which reduces the system power consumption. A speech signal mainly consists of vowels and consonants. Consonants are of lower amplitude than vowels [112, 142]. In order to determine the speaker’s pitch, vowels are the relevant parts of x(t). For q = 0.12 V, consonants are ignored during the signal acquisition process and are treated as low-amplitude noise. In contrast, vowels are locally oversampled, like any harmonic signal [13, 45, 46]. This smart signal acquisition avoids the processing of useless samples within the 25% of x(t) activity, and so further improves the computational efficiency of the proposed techniques.

The non-uniformly sampled signal obtained at the AADC output is selected and windowed by the ASA (Activity Selection Algorithm) [44]. In order to apply the ASA, Lref = 0.5 seconds is chosen, which satisfies the criteria given in Section 5.1.1 for this case. Lref = 0.5 seconds results in Nmax = 60000 (cf. Equation 5.7). The ASA delivers three selected windows, shown in Figure 10.1-b. The parameters of each selected window are summarized in Table 10.1.

Selected Window   Li (Sec)   Ni (Samples)   Fsi (Hz)
W1                0.2074     2360           11379
W2                0.1136     347            3054
W3                0.1210     265            2190

Table 10.1. Summary of the selected windows’ parameters, obtained with the ASA.
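Each Fsi in Table 10.1 is simply the number of samples in the window divided by its length, and Nmax follows from Lref and Fsmax (Equation 5.7). A small sketch (the helper name is illustrative):

```python
def window_sampling_freq(n_samples, length_s):
    """Local sampling frequency of a selected window: Fsi = Ni / Li."""
    return n_samples / length_s

# Selected windows of Table 10.1: (Ni, Li)
windows = {"W1": (2360, 0.2074), "W2": (347, 0.1136), "W3": (265, 0.1210)}
fsi = {w: window_sampling_freq(n, l) for w, (n, l) in windows.items()}
# agrees with the Fsi column to within the table's rounding

# Upper bound on the samples per reference window (Equation 5.7)
n_max = 0.5 * 120e3      # Lref * Fsmax = 60000
```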

Although consonants are partially filtered out during the data acquisition process, proper pitch estimation still requires eliminating the remaining effect of the high frequencies present in x(t). To this aim, a bank of reference low-pass filters is designed with the standard Parks-McClellan algorithm. In this case, the reference filter bank length Q is chosen equal to 11. Hence, the frequency step between consecutive reference filters becomes 1450 Hz (cf. Equation 7.38). The parameters of the reference FIR filters are summarized in the following table.

Cut-off Freq   Transition Band   Pass-Band Ripples   Stop-Band Ripples   Frefc (Hz)   Pc
(Hz)           (Hz)              (dB)                (dB)
300            300~400           -25                 -80                 1500         38
300            300~400           -25                 -80                 2950         75
300            300~400           -25                 -80                 4400         112
300            300~400           -25                 -80                 5850         148
300            300~400           -25                 -80                 7300         185
300            300~400           -25                 -80                 8750         222
300            300~400           -25                 -80                 10200        258
300            300~400           -25                 -80                 11650        295
300            300~400           -25                 -80                 13100        332
300            300~400           -25                 -80                 14550        368
300            300~400           -25                 -80                 16000        405

Table 10.2. Summary of the reference filter bank parameters, implemented for the EARD, the EARR and the ARDI techniques.
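Equation 7.38 is not reproduced here, but the Frefc column of Table 10.2 is consistent with Q cut-off sampling frequencies spread uniformly between Fsmin = 1.5 kHz and Fref = 16 kHz. A sketch under that assumption:

```python
def reference_bank(fs_min, f_ref, q_filters):
    """Reference sampling frequencies Frefc of the filter bank, assumed
    uniformly spaced between Fsmin and Fref (cf. Equation 7.38)."""
    step = (f_ref - fs_min) / (q_filters - 1)    # 1450 Hz in this case
    return [fs_min + k * step for k in range(q_filters)]

bank = reference_bank(1500.0, 16000.0, 11)
# → [1500, 2950, 4400, ..., 16000], matching the Frefc column
```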


To find the pitch, we now focus on W2, which corresponds to the vowel ‘a’. A zoom on this signal part is plotted in Figure 10.1-c. The chosen values of Frefc and the calculated values of Frs2, d2/u2, Nr2 and P2 for the EARD, the EARR and the ARDI techniques are summarized in Table 10.3. The procedure for calculating these values follows from Section 7.7.

W2     Frefc   Frs2   Nr2   d2/u2   P2
EARD   4400    4400   500   1.0     112
EARR   4400    3054   347   1.44    77
ARDI   2950    3054   347   1.04    77

Table 10.3. Values of Frefc, Frs2, Nr2, d2/u2 and P2 for the EARD, the EARR and the ARDI techniques.
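The d2/u2 and Nr2 columns of Table 10.3 can be reproduced from the ratio between the chosen reference rate and the window resampling rate, and from Nr = Frs·L. A sketch under that reading of Section 7.7 (function names are illustrative):

```python
def resampling_ratio(f_refc, f_rs):
    """Fractional decimation/interpolation ratio between the chosen
    reference filter rate and the window resampling rate (>= 1)."""
    hi, lo = max(f_refc, f_rs), min(f_refc, f_rs)
    return hi / lo

def resampled_count(f_rs, length_s):
    """Nr = Frs * L: number of uniformly resampled points in the window."""
    return round(f_rs * length_s)

# W2 of Table 10.3, with L2 = 0.1136 s
assert round(resampling_ratio(4400.0, 4400.0), 2) == 1.0     # EARD
assert round(resampling_ratio(4400.0, 3054.0), 2) == 1.44    # EARR
assert round(resampling_ratio(2950.0, 3054.0), 2) == 1.04    # ARDI
assert resampled_count(4400.0, 0.1136) == 500                # EARD Nr2
```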

In order to compare the performance of the proposed techniques with the classical one, the sampling frequency and the window function length in the classical case are chosen equal to Fref and Lref respectively. The computational gains of the EARD, the EARR and the ARDI techniques over the classical one are computed by employing Equations 7.22, 7.44, 7.45 and 7.46. The results obtained for W2 are summarized in the following table.

W2     Gain in Additions   Gain in Multiplications
EARD   11.62               11.71
EARR   16.17               16.26
ARDI   16.21               16.29

Table 10.4. Computational gains of the EARD, the EARR and the ARDI techniques over the classical one, for W2.

Table 10.4 confirms the computational efficiency of the proposed techniques compared to the classical approach. It is gained firstly by achieving the smart signal acquisition, and secondly by adapting the sampling frequency and the filter order according to the local variations of x(t). When considering a complete dialog, the proposed techniques will also take advantage of the idle parts of x(t) (75%), which will induce additional gains compared to the classical approach.

From Tables 10.1 and 10.3 it is clear that the condition Fs2 < Fref holds and that d2/u2 is fractional. Thus, the filtering processes for the EARD, the EARR and the ARDI differ (cf. Figures 7.14, 7.15 and 7.16), which makes it possible to compare their performances.

For the studied application, conditions 7.47 and 7.48 remain true for W2, so the EARR and the ARDI techniques remain computationally more efficient than the EARD. Moreover, condition 7.49 remains true. Hence, the ARDI technique remains the most processing-efficient among the proposed ones.

In the studied case, for both the EARR and the ARDI, an online resampling of the chosen reference filter is required (cf. Table 10.3). As d2 for the EARR is higher than u2 employed in the ARDI, a higher risk of filtering quality loss occurs for the EARR (cf. Section 7.6.2). It is therefore most relevant to compare the EARR filtering quality with the reference one. Spectra of the filtered signal lying in W2, obtained with the reference filtering (cf. Section 7.6.2), the EARD and the EARR, are plotted respectively in Figures 10.1-d, 10.1-e and 10.1-f. Note that the spectra in the cases of the EARD and the EARR are obtained by employing the method devised in Chapter 6.


The spectra in Figure 10.1 show that the fundamental frequency is about 215 Hz. Thus, one can easily conclude that the analyzed sentence is pronounced by a female speaker. Although the fractional decimation of the reference filter is required for the EARR technique, the spectrum of the filtered signal obtained with the EARR is quite comparable to the spectra obtained with the EARD and the reference filtering. It shows that for the chosen system parameters, the results delivered by the proposed techniques are sufficiently accurate for the studied application.
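The pitch-based gender decision can be sketched in a few lines. The estimator below uses a plain autocorrelation peak search as a stand-in for the spectral method of this section (the function names and the crude estimator are illustrative, not the thesis implementation); the gender ranges are those quoted from [112].

```python
import math

def pitch_autocorr(x, fs, fmin=80.0, fmax=350.0):
    """Crude pitch estimate: lag of the autocorrelation maximum within
    the plausible pitch range (illustrative stand-in for the spectral
    pitch estimation used in Section 10.1)."""
    n = len(x)
    lo, hi = int(fs / fmax), int(fs / fmin)
    best_lag, best = lo, float("-inf")
    for lag in range(lo, min(hi, n - 1) + 1):
        c = sum(x[i] * x[i + lag] for i in range(n - lag))
        if c > best:
            best, best_lag = c, lag
    return fs / best_lag

def speaker_gender(pitch_hz):
    """Male pitch lies in [100; 150] Hz, female in [200; 300] Hz [112]."""
    if 100.0 <= pitch_hz <= 150.0:
        return "male"
    if 200.0 <= pitch_hz <= 300.0:
        return "female"
    return "undetermined"

# A synthetic 215 Hz tone sampled at Fref = 16 kHz
fs = 16000.0
x = [math.sin(2 * math.pi * 215.0 * k / fs) for k in range(1600)]
pitch = pitch_autocorr(x, fs)    # close to 215 Hz → "female"
```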


Figure 10.1. On the top, the input speech signal (10.1-a), the selected signal obtained with the ASA (10.1-b) and a zoom of the second window W2 (10.1-c). On the bottom, a spectrum zoom of the filtered signal lying in W2, obtained with the reference filtering (10.1-d), with the EARD (10.1-e) and with the EARR (10.1-f), respectively.

10.2 The Adaptive Resolution Short-Time Fourier Transform

The instantaneous frequency of a chirp signal varies with time. Chirp-like signals are common in real-life applications such as sonar, radar and spread-spectrum communications. Being a representative example of time-varying signals, a chirp is employed to study the performance of the proposed ARSTFT (Adaptive Resolution Short-Time Fourier Transform).

The input chirp signal x(t) is band-limited to [10; 500] Hz and its total duration is 8 seconds. In the first quarter of the time span, its frequency rises from 10 Hz to 500 Hz and in the second quarter it falls from 500 Hz back to 10 Hz. The same pattern is then repeated. The frequency pattern of x(t) can be visualized in Figure 10.2.

A 3-bit resolution AADC is used for digitizing x(t); therefore, Fsmin = 140 Hz and Fsmax = 7000 Hz (cf. Equations 7.8 and 7.9). Following the criterion given in Section 8.2.1, Fref = 1250 Hz is chosen in this case. The AADC amplitude range is again set to 2Vmax = 1.8 V, which leads to a quantum q = 0.257 V. The amplitude of x(t) is normalized to 0.9 V in order to avoid AADC saturation.


[Figure: frequency versus time; time axis 0 to 8 seconds, frequency axis 0 to 500 Hz]

Figure 10.2. The input signal frequency pattern.

With the given specifications, 10858 samples are obtained at the AADC output. In order to apply the EASA (Enhanced Activity Selection Algorithm), Nref = 1024 is chosen, which satisfies the criterion given in Section 5.1.2. For the chosen Nref, the EASA delivers 11 selected windows, whose parameters are summarized in Table 10.5.

Wi     Li (Seconds)   Fsi (Hz)   Fref (Hz)   Frsi (Hz)   Ni (Samples)   Nri (Samples)
1st    0.93           1099       1250        1099        1024           1024
2nd    0.32           3204       1250        1250        1024           400
3rd    0.24           4177       1250        1250        1024           300
4th    0.20           4945       1250        1250        1024           250
5th    0.18           5629       1250        1250        1024           225
6th    0.16           6205       1250        1250        1024           200
7th    0.15           6575       1250        1250        1024           187
8th    0.31           3294       1250        1250        1024           387
9th    0.38           2697       1250        1250        1024           475
10th   0.53           1910       1250        1250        1024           662
11th   0.77           797        1250        797         618            618

Table 10.5. Summary of the selected windows’ parameters.

The time-frequency resolution of the ARSTFT is calculated for each selected window by employing Equations 8.9 and 8.10. The results are summarized in Table 10.6.

Wi     Δti (Seconds)   Δfi (Hz)
1st    0.93            1.1
2nd    0.32            3.1
3rd    0.24            4.2
4th    0.20            5.0
5th    0.18            5.5
6th    0.16            6.3
7th    0.15            6.6
8th    0.31            3.3
9th    0.38            2.6
10th   0.53            1.8
11th   0.77            1.0

Table 10.6. The selected windows’ time-frequency resolution.
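For each selected window, Equations 8.9 and 8.10 reduce to Δti = Li and Δfi = 1/Li, while the classical STFT fixes Δt = Nref/Fref and Δf = Fref/Nref (Equations 8.4 and 8.5). A sketch, assuming those forms:

```python
def arstft_resolution(length_s):
    """ARSTFT resolution of one selected window (Equations 8.9 and 8.10):
    the time resolution is the window length, the frequency resolution
    its inverse."""
    return length_s, 1.0 / length_s

def stft_resolution(n_ref, f_ref):
    """Classical STFT: fixed resolution set by the window of Nref samples
    at Fref (Equations 8.4 and 8.5)."""
    return n_ref / f_ref, f_ref / n_ref

dt1, df1 = arstft_resolution(0.93)          # 1st window: coarse frequency? no, fine
dt7, df7 = arstft_resolution(0.15)          # 7th window: finest time, coarsest frequency
dt_c, df_c = stft_resolution(1024, 1250.0)  # classical: about 0.82 s and 1.2 Hz
```

The computed Δfi agree with Table 10.6 to within the rounding of the Li values.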


Tables 10.5 and 10.6 jointly demonstrate the signal driven nature of the ARSTFT. They show the adaptation of its sampling frequency and time-frequency resolution following the input signal local variations. It provides a good time but a poor frequency resolution for the high-frequency parts of x(t), and vice versa, which is the type of analysis best suited to most real-life signals [135].

In contrast, for the classical STFT, the sampling frequency and the window function remain time-invariant. If the sampling is performed at Fref = 1250 Hz, then for Nref = 1024, 10 windows will be obtained, each of 0.82 seconds length. This leads to fixed values Δt = 0.82 seconds and Δf = 1.2 Hz (cf. Equations 8.4 and 8.5).

In the ARSTFT case, a resampling of the selected signal is required. It is done by employing the NNRI (Nearest Neighbour Resampling Interpolation). The resampling error is calculated by employing the method described in Section 8.5. It is bounded by -23.8 dB for all selected windows, which shows the accuracy of the proposed system for the chosen parameters. For high-precision applications, further resampling accuracy can be achieved by increasing the AADC time-amplitude resolution and the interpolation order [48]. Hence, an improved accuracy can be achieved at the cost of an increased computational complexity (cf. Section 8.5).
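What the NNRI does can be sketched as a zero-order rule: each uniform resampling instant takes the value of the nearest non-uniform sample. The function below is an illustrative reading of the NNRI, not the thesis implementation (whose error analysis is in Section 8.5):

```python
import bisect

def nnri(times, values, f_rs):
    """Nearest Neighbour Resampling Interpolation: map each uniform
    instant k / f_rs onto the closest non-uniformly spaced sample."""
    n_out = int(round((times[-1] - times[0]) * f_rs)) + 1
    out = []
    for k in range(n_out):
        t = times[0] + k / f_rs
        i = bisect.bisect_left(times, t)
        if i == 0:
            j = 0
        elif i == len(times):
            j = len(times) - 1
        else:
            # pick whichever neighbour is closer to the instant t
            j = i if times[i] - t < t - times[i - 1] else i - 1
        out.append(values[j])
    return out

# Level-crossing style samples, resampled on a 10 Hz uniform grid
resampled = nnri([0.0, 0.12, 0.21, 0.40], [0, 1, 2, 3], 10.0)
# → [0, 1, 2, 2, 3]
```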

The computational gain of the ARSTFT over the classical STFT is also calculated, by employing Equations 8.12 and 8.14. It shows gains of 3.8 and 3.9 times in additions and multiplications respectively. This confirms the outperformance of the ARSTFT over the classical one, even in the case of a continuously varying chirp-like signal. It is achieved due to the joint benefits of the AADC, the EASA and the resampling, as they adapt the sampling frequency and the window function (length plus shape) according to the signal local characteristics [48, 49, 145].

10.3 The Adaptive Rate Analog to Digital Converter

The ARADC (Adaptive Rate ADC) forms the base of the proposed signal processing and analysis techniques (cf. Chapters 6, 7 and 8). A complete procedure for computing its effective resolution is devised in Chapter 9. In order to evaluate its performance for real-life signals, a speech signal x(t), shown in the top part of Figure 10.3, is employed. x(t) is band-limited to [50; 4000] Hz and its total duration is 12 seconds. The x(t) activity length is 6.2 seconds, i.e. 51.6% of its total span.

A good quality speech acquisition usually requires 13 ENOBs (Effective Number of Bits) when employing a uniform ADC [144]. Here, the term uniform indicates that the ADC quantization step q remains unique. A similar speech quality can be acquired with an 8-ENOB uniform converter when it is employed with a companding algorithm [143, 144]. The A-law and the mu-law are the most frequently employed speech companders. They amend the speech signal amplitude dynamic within the given range, following an auditory motivation, and therefore improve the effectiveness of the provided ADC resolution [144].
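For reference, the continuous mu-law characteristic (the ITU-T G.711 style law with mu = 255) can be sketched as follows; this is the textbook form, not code from the thesis:

```python
import math

def mu_law_compress(x, mu=255.0):
    """mu-law compander for x in [-1, 1]: boosts low amplitudes so a
    coarse uniform quantizer spends its levels where speech lives."""
    return math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)

def mu_law_expand(y, mu=255.0):
    """Inverse compander, restoring the original amplitude dynamic."""
    return math.copysign(math.expm1(abs(y) * math.log1p(mu)) / mu, y)

# A quiet 0.01 sample is mapped to about 0.23 before quantization,
# and the round trip is exact up to floating-point error.
```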

The frequency content of a speech signal varies continuously within the given bandwidth [112, 142]. In the previous chapters it has been shown that the proposed system resolution varies for different frequencies within a signal band: the resolution is higher for lower frequencies and vice versa. In order to determine the system resolution for a specific application, it is convenient to characterize the system with a single frequency [37, 38]. In [37], Sayiner has argued that for a speech signal the


LCADC effective resolution can be characterized by employing a monotone sinusoid of fsig=568

Hz frequency.

In Chapter 9, it has been shown that for the parameter set (a = ±0.23%, Ttimer = 2×10^-4 seconds, M = 6 and a 1st order interpolator) the ARADC achieves 8 ENOBs. The ARADC achieves this resolution while acquiring a monotone sinusoid of fsig = 2300 Hz (cf. Section 9.3). Therefore, a better or at least similar performance can be achieved when dealing with fsig = 568 Hz. The ARADC with this parameter set is therefore employed, along with the mu-law compander, for digitizing the considered speech signal x(t).

A reference frequency Fref = 16 kHz is chosen in this case, which satisfies the Nyquist sampling criterion for x(t). Depending upon Fsi and Fref, an appropriate resampling frequency Frsi is decided for Wi: when Fsi > Fref, Frsi = Fref is chosen; in the opposite case, Frsi = Fsi is chosen.

x(t) is band-limited to [50; 4000] Hz. As it is acquired by employing a 6-bit resolution AADC, Fsmax and Fsmin become 504 kHz and 6.3 kHz respectively (cf. Equations 7.8 and 7.9). The AADC amplitude range 2Vmax is chosen equal to 1.8 V. Hence, the AADC quantum q becomes 0.0286 V (cf. Equation 4.24).

The relevant parts of the non-uniformly sampled signal obtained with the AADC are selected with the ASA [44]. Following the criterion given in Section 5.1.1, Lref = 2 seconds is chosen for this case. It leads to six selected windows, shown in the middle part of Figure 10.3. The selected windows’ parameters are summarised in Table 10.7.

Wi     Li (Seconds)   Fsi (Hz)   Fref (Hz)   Frsi (Hz)   Ni (Samples)   Nri (Samples)
1st    1.29           13990      16000       13990       18047          18047
2nd    0.97           21068      16000       16000       20565          15619
3rd    0.55           14898      16000       14898       8173           8173
4th    1.35           12783      16000       12783       17257          17257
5th    0.62           18683      16000       16000       11579          9920
6th    1.42           10487      16000       10487       14881          14881

Table 10.7. Summary of the selected windows’ parameters.

Table 10.7 exhibits the interesting features of the ARADC, which are achieved thanks to a smart combination of the non-uniform and the uniform signal processing tools. Fsi represents the sampling frequency adaptation following the local variations of x(t). Here, the case Fsi > Fref holds for the 2nd and the 5th selected windows, while the opposite case holds for the remaining ones. The chosen Frsi shows the adaptation of the resampling frequency for Wi. It further adds to the proposed system computational efficiency by avoiding unnecessary interpolations during the data resampling process. Nri shows how the adjustment of Frsi avoids unnecessary samples at the ARADC output. Li exhibits the ASA dynamic feature, which is to correlate the window function length with the local variations of x(t).

If the sampling is performed at Fref in the classical case, then the whole signal is sampled at 16000 Hz, regardless of its local variations. Moreover, in the classical case, the windowing process is not able to select only the active parts of the sampled signal. In addition, L remains static and cannot adapt to the x(t) local variations. For the studied signal, L = 2 seconds will


lead to six windows of 2 seconds length for the total x(t) span of 12 seconds. The windowed data obtained in the classical case is shown in the bottom part of Figure 10.3.

This static nature of the classical approach results in an increased number of samples delivered at the classical ADC output, and hence an increased processing load and utilization of system resources compared to the ARADC.

[Figure: three stacked plots; time axis 0 to 12 seconds, amplitude axis -1 to 1]

Figure 10.3. The input speech signal (top), the selected signal obtained with the ASA (middle) and the

windowed signal obtained in the classical case (bottom).

In the classical case, an M = 8-bit converter along with the mu-law compander is employed for acquiring x(t). It delivers 192000 samples. On the other hand, for the above-discussed parameter set, the ARADC with the mu-law compander delivers 83897 samples for the total x(t) span. Thus, for the studied signal, the ARADC yields a 2.3 times compression gain compared to the classical approach. Around 84% of this compression gain is achieved thanks to the ASA smart feature of selecting only the relevant parts of the non-uniformly sampled signal. The remaining 16% is achieved by adapting the resampling frequency according to the input signal local variations. Note that the considered speech signal activity is 51.6%, which is more than twice the activity occurring during a conversational speech signal [141]. Therefore, when considering conversational speech, the ARADC compression gain over the classical approach will be greater than or at least equal to 4 times.
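The 2.3 times figure can be checked directly from the sample counts above and the Nri column of Table 10.7:

```python
# Classical acquisition: 12 s uniformly sampled at Fref = 16 kHz
classical_samples = 12 * 16000             # 192000

# ARADC output: Nri samples per selected window (Table 10.7)
nri = [18047, 15619, 8173, 17257, 9920, 14881]
aradc_samples = sum(nri)                   # 83897

compression_gain = classical_samples / aradc_samples
# about 2.29, i.e. the 2.3 times gain quoted above
```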

The above results confirm that the ARADC can achieve a drastic compression gain over the classical converter in the case of low-activity time-varying signals such as electrocardiogram, phonocardiogram, speech and seismic signals. This reduction in the number of samples can lead ARADC-based systems to a higher efficiency compared to solutions based on classical converters. Here, the term efficiency points towards lessening the employed system resources such as power, memory, transmission rate and channel bandwidth.

According to Allier, an AADC with M = 4 is sufficient for the proper acquisition of a speech signal [43]. Although he has not detailed the signal acquisition process, it is obvious that the


employment of the mu-law compander will enhance the speech acquisition quality. In Chapter 9, it is discussed that for a = ±0.23%, Ttimer = 2×10^-5 seconds, M = 4 and a fourth order interpolator, the ARADC can achieve more than 8 ENOBs. Following this, the studied speech signal can be acquired by employing an ARADC with M = 4 along with the mu-law compander. It will further improve the ARADC compression gain over the classical converter compared to the above-discussed case of M = 6. It shows that a reduction in M results in an improved ARADC compression ratio, while increasing the effort required to deliver each resampled data point. Therefore, an appropriate choice of M and interpolation order should be made to ensure proper signal reconstruction along with the maximum achievable compression gain and processing efficiency.

10.4 Conclusion

The performance of the proposed techniques for real signals has been studied. It is shown that these techniques are well suited for acquiring, processing and analysing low-activity time-varying signals. In such cases, they can achieve a drastic computational efficiency compared to their classical counterparts, while providing comparable quality results.

The performances of the EARD, the EARR and the ARDI filtering techniques have been demonstrated for a speech application. It is shown that the proposed techniques deliver a more than one order of magnitude gain in additions and multiplications over the classical approach. This is achieved due to the joint benefits of the AADC, the ASA and the resampling, as they allow the online adaptation of system parameters by exploiting the input signal local variations. It drastically reduces the total number of operations, and therefore the energy consumption, compared to the classical case. It is also shown that the results obtained with the proposed filtering techniques are of comparable quality to those obtained in the classical case.

The performance of the ARSTFT has been studied for a chirp-like signal. It is shown that the ARSTFT outperforms the classical STFT. The first advantage of the ARSTFT over the STFT is the adaptive time-frequency resolution, and the second one is the computational gain. These smart features of the ARSTFT are achieved due to the joint benefits of the non-uniform and the uniform signal processing tools, as they enable the adaptation of Fsi, Frsi, Ni, Nri and wni by exploiting the local variations of x(t). The processing quality is also measured. It is shown that the error made for the chosen system parameters is minor. Moreover, a higher accuracy can be achieved by increasing the AADC resolution and the interpolation order. Thus, an accuracy improvement can be achieved at the cost of an increased processing activity.

The ARADC performance in terms of signal acquisition has been demonstrated for a speech signal. It is shown that the ARADC results in a 2.3 times compression gain for the studied case. It is known that for a given input signal, the LCADC sampling frequency varies in proportion to M [37, 38, 43]. Hence, by reducing M, an enhancement in the ARADC compression gain can be achieved. While reducing M, two main issues should be considered. Firstly, the obtained level-crossing sampled signal should satisfy the reconstruction criterion [27, 28]. Secondly, for fixed ARADC parameters, reducing M calls for increasing the interpolation order. Thus, for a targeted application, the extent to which M can be reduced is defined by the reconstruction criterion. Reducing M increases the ARADC compression ratio but, on the other hand, it increases the processing load per resampled observation (cf. Chapter 9). Therefore, an appropriate choice of M should be made to ensure proper signal reconstruction along with the maximum achievable compression gain.


CONCLUSION AND PROSPECTS

Conclusion

The motivation of this PhD work has been to devise smart signal processing solutions for mobile systems that offer a favorable trade-off between the system cost, speed, area, output quality and especially power consumption. It can be done by smartly reorganizing the signal processing theory and architecture associated with mobile systems. In this context, event driven signal acquisition and processing, along with clock-less circuit design, is employed. It results in power efficiency by smartly adapting the system processing activity in accordance with the input signal local characteristics.

The activity acquisition and selection, along with the local features extraction, are the bases of the proposed approach. They are formed by employing a smart combination of the non-uniform and the uniform signal processing tools.

In the proposed solutions, the data acquisition is performed with LCADCs. LCADCs are based on the LCSS, which lets the input signal variations pilot their signal acquisition. LCADCs produce non-uniformly time-repartitioned samples, which allow an easy activity selection and local features extraction of the input signal.

In order to analyze the level-crossing sampled signals, a novel spectral analysis technique has been devised. The proposed technique is compared with the GDFT and the Lomb’s algorithm, and it is shown to outperform both in terms of spectral quality and processing cost. This approach is especially suitable for signals which remain constant most of the time and vary sporadically. It is demonstrated that for such signals, the proposed technique achieves a drastic computational gain over the classical one, while providing comparable quality results.

Based on the proposed approach, the signal driven adaptive rate filtering techniques have been devised. Their computational complexities are deduced and compared with those of the classical filtering techniques, showing a more than one order of magnitude gain of the proposed techniques over the classical one, while providing comparable quality results. It is achieved due to the proposed techniques’ time-varying nature, which adapts their sampling frequency and filter order according to the input signal local features.

For the proper characterization of non-stationary signals, a time-frequency representation is required. In this context, the ARSTFT has been devised. It is shown that the ARSTFT outperforms the STFT. The first advantage of the ARSTFT over the STFT is the adaptive time-frequency resolution, and the second one is the computational gain. These smart features of the ARSTFT are achieved by adapting its sampling frequency and the window function length plus shape in accordance with the input signal characteristics.


The level-crossing A/D conversion, the activity selection and the resampling are the fundamental tools of the proposed solutions. Their smart combination has been named the ARADC. The principal advantage of the ARADC over the classical converters is to minimize the analog circuitry required for the input signal acquisition, while providing a higher resolution output by employing some post digital processing on the level-crossing sampled data. A new method to compute the ARADC effective resolution has been devised. The correctness of the suggested method is demonstrated by the coherence of its results with the theoretical ones. The ARADC resolution is a function of a set of parameters (Ttimer, a, M and the interpolation order). It is shown that for a targeted application, an appropriate set of parameters should be found, which provides an attractive trade-off between the system computational complexity and the output quality.

The effectiveness of the proposed techniques for real signals has been evaluated. It is shown that these techniques are well suited for acquiring, processing and analysing low-activity time-varying signals. In such cases, they can achieve a drastic computational efficiency compared to their classical counterparts, while delivering comparable quality results.

Prospects

A system level architecture has been proposed for the devised adaptive rate filtering techniques. Its implementation and circuit level performance evaluation is a future task. The development and implementation of an appropriate architecture for the ARSTFT is also future work.

The sampling frequency of LCADCs is a function of M and the input signal variations. This suggests employing the non-uniformity of the level-crossing sampled signal for estimating the input signal instantaneous frequency. A possibility of realizing this idea is presented in [139]. This approach is very interesting and provides accurate results for simple inputs. The development of a generalized solution is a future prospect.

The quantum q between two consecutive thresholds of uniform LCADCs is unique. In this case, for a fixed M, the sampling frequency is piloted by the input signal slope. The non-uniformly sampled signal obtained with LCADCs is windowed by the activity selection algorithm. The sampling frequency Fsi for the ith selected window Wi can be specific, and is given by Equations 5.9 and 5.10.

For a monotone sinusoid, Fsi is a function of the relevant number of thresholds crossed by the sinusoid, and is defined as follows:

Fsi = 2 · fsig · NRTHi        (11.1)

Here, NRTHi is the relevant number of thresholds for Wi and it is a function of Ai, the sinusoidal amplitude for Wi. It is assumed that Ai remains unique for Wi.

If NRTHi is known, then Ai can be calculated as follows. The first step is to compute the number of thresholds Ri crossed by the sinusoid on either side (above or below) of the LCADC central threshold. The LCADC central threshold is the one with the same value as the input sinusoid DC level. Ri can be computed as follows:


Ri = (NRTHi - 1) / 2    (11.2)

Finally, Ai is given by the following relation.

Ri = floor(Ai / q)    (11.3)

Where the floor operator rounds the result towards minus infinity. The computed Ai is an estimate of the input sinusoid amplitude for Wi and its estimation error is proportional to q. In the case when the input sinusoid frequency varies and its amplitude is fixed to the LCADC amplitude range 2Vmax, Fsi can be defined by Equation 11.4.

Fsi = 2 · fi · (2^M - 1)    (11.4)

Where 2^M - 1 represents NRTH in this case and fi is the input sinusoid frequency for Wi. It is assumed that fi remains unique for Wi. Note that the above equations lead to dual cases. In Equations 11.1 and 11.4, Fsi can be calculated by employing Equations 5.9 and 5.10. fsig and M are constants, therefore NRTHi and fi can be found respectively. NRTHi provides implicit information about Ai, which can be extracted by employing Equation 11.3.
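The two dual cases can be sketched numerically. In the snippet below (a minimal illustration with hypothetical function names and test values, not code from the thesis), Equations 11.1-11.3 are inverted to estimate Ai when fsig is known, and Equation 11.4 is inverted to estimate fi when the amplitude spans the full range:

```python
def estimate_amplitude(fs_i, f_sig, q):
    """Dual case 1: sinusoid frequency f_sig is known. Invert Eq. 11.1
    for NRTHi, apply Eq. 11.2 for Ri, then invert Eq. 11.3 up to the
    quantum q: the true Ai lies in [Ri*q, (Ri+1)*q)."""
    nrth_i = fs_i / (2.0 * f_sig)   # Eq. 11.1: Fsi = 2 * fsig * NRTHi
    r_i = (nrth_i - 1.0) / 2.0      # Eq. 11.2: Ri = (NRTHi - 1) / 2
    return r_i * q                  # lower bound of the Ai interval

def estimate_frequency(fs_i, M):
    """Dual case 2: amplitude fixed to the full range 2Vmax, so
    NRTH = 2^M - 1 and Eq. 11.4 is inverted for fi."""
    return fs_i / (2.0 * (2 ** M - 1))
```

For instance, with q = 0.1 and fsig = 100 Hz, a window with Fsi = 1400 Hz gives NRTHi = 7, Ri = 3 and an amplitude estimate of 0.3, with an error bounded by q.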

It follows that level-crossing sampling and activity selection allow estimating fi or Ai of the input sinusoid. This hints at the possibility of developing frequency and amplitude demodulation techniques. Such an approach could render a simpler demodulator circuit by eliminating requirements such as a synchronizing mechanism at the receiver end. It opens a domain for future research.

Uniform resampling of the level-crossing sampled data is required in the proposed solutions. It is shown that for a given M, Ttimer and a, an interpolator of appropriate order should be employed to achieve the ARADC best effective resolution. A similar performance can be achieved with a lower-order interpolator by exploiting symmetry during the interpolation process, which results in a reduced resampling error [133, 140]. The pros and cons of this approach are under investigation and a description of it is given in [50, 51]. Further development of this approach is a future prospect.

In the proposed approach, LCADCs are employed to transform the incoming signal from the analog to the digital domain. A comparison of LCADCs with sigma-delta converters has been made. A virtue of sigma-delta converters over LCADCs is the noise-shaping phenomenon, which yields a drastic gain in the ENOB as the OSR (oversampling ratio) and the modulator order increase. Introducing this noise-shaping property in LCADCs could be really beneficial. A study of the pros and cons of this approach is an area of future research.

In conclusion, I may say that although there is plenty still to be done and much scope for further research, I hope that my small contribution may help those who follow.


PUBLICATIONS

CONFERENCE PUBLICATIONS

S.M. Qaisar, L. Fesquet and M. Renaudin, “Spectral Analysis of a Signal Driven Sampling Scheme”, EUSIPCO’06, September 2006.

S.M. Qaisar, L. Fesquet and M. Renaudin, “Adaptive Rate Filtering for a Signal Driven

Sampling Scheme”, ICASSP’07, pp. 1465-1468, April 2007.

S.M. Qaisar, L. Fesquet and M. Renaudin, “Computationally Efficient Adaptive Rate

Sampling and Filtering”, EUSIPCO’07, pp. 2139-2143, September 2007.

S.M. Qaisar, L. Fesquet and M. Renaudin, “Computationally Efficient Adaptive Rate

Sampling and Filtering for Low Power Embedded Systems”, SampTA’07, June 2007.

S.M. Qaisar, L. Fesquet and M. Renaudin, “Computationally Efficient Adaptive Rate

Sampling and Adaptive resolution Analysis”, Proc. WASET, vol. 31, pp. 85-90, 2008.

S.M. Qaisar, L. Fesquet and M. Renaudin, “An Improved Quality Adaptive Rate Filtering

Technique Based on the Level Crossing Sampling”, Proc. WASET, vol. 31, pp. 79-84, 2008.

S.M. Qaisar, L. Fesquet and M. Renaudin, “An Improved Quality Adaptive Rate Filtering

Technique For Time Varying Signals Based on the Level Crossing Sampling”, ICSES’08,

September 2008.

Saeed M. Qaisar, L. Fesquet and M. Renaudin, “Effective Resolution of an Adaptive Rate

ADC”, SampTA’09, May 2009.

JOURNAL PUBLICATIONS

S.M. Qaisar, L. Fesquet and M. Renaudin, “Computationally Efficient Adaptive Resolution

Short-Time Fourier Transform”, EURASIP, Research Letters in Signal Processing, 2008.

Saeed M. Qaisar, L. Fesquet and M. Renaudin, “A Signal Driven Adaptive Resolution Short-

Time Fourier Transform”, International Journal of Signal Processing, vol. 5, No. 3, pp. 180-

188, 2009.

Saeed M. Qaisar, L. Fesquet and M. Renaudin, “Signal Driven Sampling and Filtering A

Promising Approach For Time Varying Signals Processing”, International Journal of Signal

Processing, vol. 5, No. 3, pp. 189-197, 2009.


Saeed M. Qaisar, L. Fesquet and M. Renaudin, “Adaptive Rate Sampling and Filtering Based

on Level Crossing Sampling”, EURASIP Journal on Advances in Signal Processing, 2009.


ANNEX-I ASYNCHRONOUS CIRCUITS PRINCIPLE

Synchronous design is mainly based on the assumption that all circuit components share a common and discrete notion of time, defined by a global clock signal distributed throughout the circuit. The process is depicted in Figure I.1.


Figure I.1. Basic structure of a synchronous circuit.

Asynchronous circuits are fundamentally different. There is no global clock signal; instead, the circuits use local handshaking between their modules in order to ensure proper functioning [101, 110]. The communication protocols are implemented locally for each module. Such protocols require a bidirectional exchange of information between senders and receivers and are called handshake protocols. The process is shown in Figure I.2. The implementation of the communication protocol in each module is the price to pay to get rid of the global clock and to control the sequencing locally [102].


Figure I.2. Basic structure of an asynchronous circuit.

The elimination of the global clock gives asynchronous circuits several interesting properties [101, 102], such as low power consumption (achieved thanks to fine-grain clock gating and zero standby power consumption) [104, 105], high operating speed (achieved because the operating speed is determined by the actual local latencies rather than the global worst-case latency, the “critical path”) [106, 107], lower electromagnetic emission (achieved because the local clocks tend to tick randomly in time) [103, 104, 108], etc.

On the other hand, the asynchronous implementation also has some drawbacks. The asynchronous control logic that implements the handshaking normally causes an overhead in terms of silicon area, circuit speed and power consumption. It is therefore important to ask whether the investment pays off, i.e. whether the use of the asynchronous technique results in a substantial improvement over the synchronous one.


I.1 Handshake Protocols

The communication protocol is the basis of the sequencing rules of asynchronous circuits [102]. It ensures the following.

- A module starts its computation if and only if all the valid input data is available.

- As soon as the result can be stored, the module releases its input ports.

- A module outputs its result via an output port only if this port is available, i.e. released at the end of the previous communication.

The communication protocols are mainly split into two classes, namely two-phase and four-phase protocols, described as follows.

A two-phase protocol is depicted in Figure I.3. The sequencing between two asynchronous modules is done in the following manner [95].

1st phase (receptor): the incoming data is detected through a transition on the request signal and, on completion of processing, a transition on the acknowledgement signal is generated.

2nd phase (emitter): the transition on the acknowledgement signal is detected and, depending upon availability, the new data is transmitted.

In this case, the information on the request and acknowledge wires is encoded as signal transitions, and there is no difference between a 0-to-1 and a 1-to-0 transition: they both represent a signal event. For this reason, two-phase signaling is also known as event-based signaling.
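The event-based character of this protocol can be mimicked with a toy trace (an illustrative behavioural model, not circuit code from the thesis): every transfer toggles REQ, and the receptor answers by toggling ACK, so rising and falling edges are equivalent events and no wire ever returns to zero between transfers:

```python
def two_phase_transfers(items):
    """Toy event trace of a two-phase (event-based) handshake."""
    req = ack = 0
    log, received = [], []
    for item in items:
        req ^= 1                 # emitter: any REQ transition announces new data
        received.append(item)    # receptor consumes the data ...
        ack ^= 1                 # ... and answers with an ACK transition
        log.append((req, ack))
    return received, log
```

Three transfers leave the wire states (1, 1), (0, 0), (1, 1): two signal events per data item.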


Figure I.3. A two-phase protocol.

A four-phase protocol is shown in Figure I.4. The sequencing between two asynchronous modules is done in the following manner [95].

1st phase (emitter): the valid data is transmitted and the request signal is set high.

2nd phase (receptor): the incoming data is detected through the level of the request signal and, on completion of processing, the acknowledgement signal is set high.

3rd phase (emitter): the acknowledgement signal is detected and, in response, the request signal is reset low.

4th phase (receptor): the reset of the request signal is detected and, in response, the acknowledgement signal is reset low.
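The four phases can likewise be written as a toy trace (an illustrative behavioural model, not circuit code from the thesis); each data item now costs a full Req-up, Ack-up, Req-down, Ack-down cycle:

```python
def four_phase_transfers(items):
    """Toy event trace of a four-phase handshake, including the
    return-to-zero of both REQ and ACK after every transfer."""
    log, received = [], []
    for item in items:
        log.append(("REQ", 1))   # 1st phase: emitter presents data, sets REQ high
        received.append(item)    # 2nd phase: receptor latches the data ...
        log.append(("ACK", 1))   # ... and sets ACK high
        log.append(("REQ", 0))   # 3rd phase: emitter resets REQ
        log.append(("ACK", 0))   # 4th phase: receptor resets ACK
    return received, log
```

Four signal events per item, twice as many as a two-phase exchange needs: this is the return-to-zero overhead.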



Figure I.4. A four-phase protocol.

There are different ways of realizing the two-phase and four-phase protocols, which lead to the following solution space [101].

{Two-phase, Four-phase} x {Bundled-data, Dual-rail, …} x {Push, Pull}

The four-phase protocol requires superfluous return-to-zero transitions that cost unnecessary time and energy. The two-phase protocol avoids this overhead, so two-phase protocols appear to be faster and less power consuming than four-phase ones. However, the implementation of event-based logic is often more complex and costly than that of level-based logic. Moreover, very efficient optimizations can be made at the logic and architectural levels when using four-phase protocols [102]. Hence, there is no general answer as to which protocol is the best, and an appropriate choice should be made for the targeted application.

I.2 Muller C-Element

In a synchronous circuit, the role of the clock is to define the time instants when signals are stable and valid. In between these instants, signals may exhibit hazards and may make multiple transitions as the combinational circuits stabilize; this does not matter from a functional point of view [101].

In an asynchronous circuit, this task is achieved through the local synchronization between the different circuit modules (cf. Figure 4.6). The Muller gate is the fundamental component for realizing this synchronization in asynchronous circuits [109]. It is in fact a state-holding element and works on the following principle: if the inputs are equal, it passes their value to the output; otherwise, it memorizes the previous output value. The symbol and truth table of a two-input Muller gate are shown in Figure I.5.

x y | z
0 0 | 0
0 1 | z-1
1 0 | z-1
1 1 | 1

Here, z-1 denotes the previously memorized output value.

Figure I.5. Two input Muller gate symbol (left) and its truth table (right).

The synchronization property of the Muller gate is clear from Figure 4.9: it waits until all input transitions have completed before making a transition on the output.
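This hold behaviour can be captured with a few lines of behavioural modelling (a sketch, not a gate-level description from the thesis):

```python
class MullerC:
    """Behavioural model of a two-input Muller C-element: the output
    follows the inputs when they agree and holds its value otherwise."""
    def __init__(self, z=0):
        self.z = z               # memorized output value

    def update(self, x, y):
        if x == y:               # both inputs have completed their transition
            self.z = x
        return self.z            # otherwise the previous value is kept
```

Driving it with (1, 0), (1, 1), (0, 1), (0, 0) yields outputs 0, 1, 1, 0: the output only toggles once both inputs have toggled.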


I.3 Asynchronous Circuit Classes

The local handshaking in asynchronous circuits may result in an interesting property of delay insensitivity. Delay insensitivity means that the functional correctness of the circuit does not depend on the delays of its modules [102].

According to the temporal hypothesis, delays can be modeled as follows: the fixed delay, with a unique value; the bounded delay, with a value within a defined interval; and the unbounded delay, with a finite and positive but unknown value.

Depending upon the delay assumptions, asynchronous circuits are mainly split into delay insensitive (DI), quasi delay insensitive (QDI), speed independent (SI), micro-pipeline, etc. In [110], Myers has classified the different asynchronous circuit styles on the basis of their temporal hypothesis, complexity and robustness. The classification summary is shown in Figure I.6.

[Figure content: the circuit classes are ordered, from the strongest temporal hypothesis to the greatest robustness and complexity, as Huffman circuits, micro-pipeline circuits, speed independent circuits, quasi delay insensitive circuits and delay insensitive circuits.]

Figure I.6. Asynchronous Circuits Classification.

Figure I.6 shows that the more delay insensitive a circuit is, the more robust and complex it is, and vice versa. It is very difficult to conclude which asynchronous circuit style is the most performant. Moreover, it is possible to employ the different circuit classes as useful abstractions at different levels of circuit design. An example is the Amulet processor [111]. In this case, SI design is used for the local asynchronous controllers, bundled-data for local data processing, and DI for the high-level composition [101]. The tactful choice of handshake protocol and circuit implementation style is among the keys to optimizing an asynchronous digital system.


BIBLIOGRAPHY

[1] E.C. Ifeachor and B.W. Jervis, “Digital Signal Processing: A Practical Approach”, Prentice-Hall, 2001.

[2] A.V. Oppenheim and R.W. Schafer, “Digital Signal Processing”, Prentice-Hall, 1975.

[3] C.E. Shannon, “Communication in the Presence of Noise”, Proc. IRE, vol.37, pp.10-21, 1949.

[4] F.J. Beutler, “Alias-Free Randomly Timed Sampling of Stochastic Processes”, IEEE Trans. Info. Theory, vol. 16, pp. 147-152, 1970.

[5] A.J. Jerri, “The Shannon Sampling Theorem - Its Various Extensions and Applications: A Tutorial Review”, Proc. IEEE, vol. 65, pp. 1565-1596, 1977.

[6] F. Marvasti, “A Unified Approach to Zero-Crossing and Nonuniform Sampling”, Oak Park, Illinois: Nonuniform

Publications, 1987.

[7] I. Bilinskis and A. Mikelsons, “Randomized Signal Processing”, Cambridge, Prentice Hall, 1992.

[8] I. Bilinskis, “Digital alias free signal processing”, John Wiley and Sons, 2007.

[9] M. Unser, “Sampling - 50 Years After Shannon”, Proc. IEEE, vol. 88, No. 4, pp. 569-587, April 2000.

[10] R.J. Martin, “Irregularly Sampled Signals: Theories and Techniques for Analysis”, PhD dissertation, University College

London, 1998.

[11] L. Fontaine, “Traitement des Signaux à Echantillonnage Irrégulier. Application au Suivi Temporel de Paramètres Cardiaques”, PhD dissertation, Institut National Polytechnique de Lorraine, 1999.

[12] J.J. Wojtiuk, “Randomized Sampling for Radio Design”, PhD dissertation, University of South Australia, 2000.

[13] F. Aeschlimann, “Traitement du Signal Echantillonné Non Uniformément: Algorithme et Architecture”, PhD dissertation, Institut National Polytechnique de Grenoble, 2006.

[14] F.J. Beutler and O.A.Z. Leneman, “The Theory of Stationary-Point Process”, Acta Math, vol. 116, pp. 159-197, 1966.

[15] F.J. Beutler and O.A.Z. Leneman, “Random Sampling of Random Process: Stationary Point Process”, Information and

Control, vol. 9, pp. 325-344, 1966.

[16] F.J. Beutler and O.A.Z. Leneman, “The Spectral Analysis of Impulse Process”, Information and Control, vol. 12, pp.

236-258, 1968.

[17] O.A.Z. Leneman, “Random Sampling of Random Process: Impulse Process”, Information and Control, vol. 9, pp. 347-

363, 1966.

[18] A.V. Balakrishnan, “On the Problem of Time-Jitter in Sampling”, IRE Trans. Info. Theory, vol. 8, pp. 226-236, 1962.

[19] H.S. Shapiro and R.A. Silverman, “Alias Free Sampling of Random Noise”, SIAM, Journal of Appl. Math, vol. 16, pp.

225-236, 1960.

[20] P.C. Bagshaw and M. Sarhadi, “Analysis of Samples of Wideband Signals Taken at Irregular, Sub-Nyquist Intervals”, Electronics Letters, vol. 27, pp. 1228-1230, 1991.

[21] M.A. Nazario and C. Saloma, “Signal Recovery in Sinusoid-Crossing Sampling by use of the Minimum Negative

Constraint”, Applied Optics, vol. 37, 1998.

[22] M. Litong and C. Saloma, “Detection of Subthreshold Oscillations in a Sinusoid-Crossing Sampling”, Phys. Review,

vol. 57, pp. 3579-3588, 1998.


[23] G. Tapang and C. Saloma, “Dynamic Range Enhancement of Optimized 1-bit A/D Converter”, IEEE Trans. Circ. Sys.

II: Analog and Digital Signal Processing, vol. 49, pp. 42-47, 2002.

[24] F.E. Bond and C.R. Chan, “On Sampling the Zeros of Bandlimited Signals”, IRE Trans. Info. Theory, vol. 4, pp. 110-

113, 1958.

[25] P.H. Ellis, “Extension of Phase Plane Analysis to Quantized Systems”, IRE transactions on Automatic Control, vol.AC

(4), pp. 43-59, 1959.

[26] J.W. Mark and T.D. Todd, “A Nonuniform Sampling Approach to Data Compression”, IEEE Transactions on

Communications, vol. COM-29, pp. 24-32, January 1981.

[27] F.J. Beutler, “Error Free Recovery from Irregularly Spaced Samples”, SIAM Review, vol. 8, pp. 328-335, 1966.

[28] F. Marvasti, “Nonuniform Sampling Theory and Practice”, Kluwer academic/Plenum Publisher, New York, 2001.

[29] A. Zakhor and A.V. Oppenheim, “Reconstruction of Two-Dimensional Signals from Level Crossings”, Proc. of IEEE,

vol. 78, no. 1, pp. 31-55, January 1990.

[30] M. Lim and C. Saloma, “Direct Signal Recovery from Threshold Crossings”, Phys. Rev. E 58, pp. 6759-6765, 1998.

[31] M. Miskowicz, “Asymptotic Effectiveness of the Event-Based Sampling According to the Integral Criterion”, Sensors,

vol. 7, pp. 16-37, 2007.

[32] I. Bilinskis and A. Mikelsons, “Application of Randomized or Irregular Sampling as an Antialiasing Technique”,

Elsevier, Signal Processing Theories and Application, pp. 505-508, 1990.

[33] K.J. Astrom and B. Bernhardsson, “Comparison of Periodic and Event Based Sampling for First-Order Stochastic

Systems”, Proc. of IFAC World Congress-99, pp. 301-306, 1999.

[34] M. Miskowicz, “Send-on-Delta Concept: An Event-Based Data Reporting Strategy”, Sensors, vol. 6, pp. 49-63, 2006.

[35] P. Otanez, J. Moyne and D. Tilbury, “Using Deadbands to Reduce Communication in Networked Control Systems”,

Proc. of American control conference’02, pp. 3015-3020, 2002.

[36] S.C. Gupta, “Increasing the Sampling Efficiency for a Control System”, IEEE transactions on automatic and control, pp.

263-264, 1963.

[37] N. Sayiner, “A level-Crossing Sampling Scheme for A/D Conversion”, PhD dissertation, University of Pennsylvania,

1994.

[38] N. Sayiner, H.V. Sorensen and T.R. Viswanathan, “A Level-Crossing Sampling Scheme for A/D Conversion”, IEEE

Transactions on Circuits and Systems, vol. 43, pp. 335-339, April 1996.

[39] K. M. Guan and A.C. Singer. “Opportunistic Sampling by Level-Crossing”, ICASSP’07, pp. 1513-1516, April 2007.

[40] M. Greitans, “Time-Frequency Representation Based Chirp Like Signal Analysis Using Multiple Level Crossings”,

EUSIPCO’07, pp. 2154-2158, September 2007.

[41] R.N. Bracewell, “The Fourier Transform and its Applications”, Boston, McGraw-Hill, 2000.

[42] S.C. Sekhar and T.V. Sreenivas, “Auditory Motivated Level-Crossing Approach to Instantaneous Frequency

Estimation”, IEEE Trans. On Signal Processing, vol. 53, pp. 1450-1462, 2005.

[43] E. Allier, G. Sicard, L. Fesquet and M. Renaudin, “A New Class of Asynchronous A/D Converters Based on Time

Quantization”, ASYNC’03, pp. 197-205, May 2003.

[44] S.M. Qaisar, L. Fesquet and M. Renaudin, “Spectral Analysis of a Signal Driven Sampling Scheme”, EUSIPCO’06, September 2006.

[45] S.M. Qaisar, L. Fesquet and M. Renaudin, “Adaptive Rate Filtering for a Signal Driven Sampling Scheme”,

ICASSP’07, pp. 1465-1468, April 2007.

[46] S.M. Qaisar, L. Fesquet and M. Renaudin, “Computationally Efficient Adaptive Rate Sampling and Filtering”,

EUSIPCO’07, pp. 2139-2143, September 2007.


[47] S.M. Qaisar, L. Fesquet and M. Renaudin, “Computationally Efficient Adaptive Rate Sampling and Filtering for Low

Power Embedded Systems”, SampTA’07, June 2007.

[48] S.M. Qaisar, L. Fesquet and M. Renaudin, “Computationally Efficient Adaptive Resolution Short-Time Fourier

Transform”, EURASIP, Research Letters in Signal Processing, 2008.

[49] S.M. Qaisar, L. Fesquet and M. Renaudin, “Computationally Efficient Adaptive Rate Sampling and Adaptive resolution

Analysis”, Proc. WASET, vol. 31, pp. 85-90, 2008.

[50] S.M. Qaisar, L. Fesquet and M. Renaudin, “An Improved Quality Adaptive Rate Filtering Technique Based on the Level

Crossing Sampling”, Proc. WASET, vol. 31, pp. 79-84, 2008.

[51] S.M. Qaisar, L. Fesquet and M. Renaudin, “An Improved Quality Adaptive Rate Filtering Technique For Time Varying

Signals Based on the Level Crossing Sampling”, ICSES’08, September 2008.

[52] F. Aeschlimann, E. Allier, L. Fesquet and M. Renaudin, “Asynchronous FIR Filters, Towards a New Digital Processing

Chain”, ASYNC’04, pp. 198-206, April 2004.

[53] F. Aeschlimann, E. Allier, L. Fesquet and M. Renaudin, “Spectral Analysis of Level Crossing Sampling Scheme”,

SampTA’05, July 2005.

[54] R.E.A.C. Paley and N. Wiener, “Fourier Transforms in the Complex Domain”, Amer. Math. Soc. Coll. Publ., vol. 19, 1934.

[55] N. Levinson, “Gap and Density Theorems”, Amer. Math. Soc. Coll. Publ., vol. 26, New York, 1940.

[56] M. Ben-Romdhane, P. Desgeys et al., “Non-Uniform Sampling Schemes for IF Sampling Radio Receiver”, DTIS’06, pp.

15-20, September 2006.

[57] I.F Blake and W.C. Lindsey, “Level-Crossing Problems for Random Processes”, IEEE Transactions on Information

Theory, pp. 295-315, 1973.

[58] M. Miskowicz, “Efficiency of Level-Crossing Sampling for Bandlimited Gaussian Random Process”, Proc. of IEEE

International Workshop on Factory Communication Systems-06, pp. 137-142, June 2006.

[59] A.V. Oppenheim and R.W. Schafer, “Discrete-Time Signal Processing”, Prentice-Hall, Second Edition, New Jersey, ISBN 0-13-083443-2, 1999.

[60] W. Kester, “Data conversion handbook”, Elsevier/Newnes, ISBN 0-7506-7841-0, 2005.

[61] R.H. Walden, “Analog-to-Digital converter survey and analysis”, IEEE Journal on Selected Areas in Communications,

vol. 17, No. 4, pp. 539-550, April 1999.

[62] W.R. Bennett, “Spectra of Quantized Signals,” Bell System Technical Journal, vol. 27, p. 446-471, July 1948.

[63] B.M. Oliver, J.R. Pierce and C.E. Shannon, “The Philosophy of PCM”, Proceedings IRE, vol. 36, pp. 1324-1331,

November 1948.

[64] W.R. Bennett, “Noise in PCM Systems”, Bell Labs Record, vol. 26, p. 495-499, December 1948.

[65] H.S. Black and J.O. Edson, “Pulse Code Modulation”, AIEE Transactions, vol. 66, pp. 895-899, 1947.

[66] H.S. Black, “Pulse Code Modulation”, Bell Labs Record, vol. 25, pp. 265-269, July 1947.

[67] K.W. Cattermole, “Principles of Pulse Code Modulation”, Elsevier, ISBN 444-19747-8, New York, 1969.

[68] D.A. Johns and K. Martin, “Analog Integrated Circuit Design”, John Wiley & Sons, Canada, 1997.

[69] P.G.A. Jespers, “Integrated Converters, D to A and A to D Architectures, Analysis and Simulations”, Oxford University

Press, 2001.

[70] Y. Gendai et al. “An 8-b 500-MHz Flash ADC”, IEEE Int. Solid State Circuit Conf. pp. 172-173, San Francisco,

February 1991.

[71] C. Donovan and M.P. Flynn, “A Digital 6-bit ADC in 0.25-µm CMOS”, IEEE Journal of Solid State Circuits, vol. 37, pp. 432-437, March 2002.


[72] P. Scholtens and M. Vertregt, “A 6-bit 1.6 GSamples/s Flash ADC in 0.18-µm CMOS Using Averaging Termination”, IEEE Int. Solid State Circuit Conf., San Francisco, February 2002.

[73] B.P. Ginsburg and A.P. Chandrakasan, “Dual Scalable 500MS/s, 5b Time-Interleaved SAR ADCs for UWB Applications”, IEEE Custom Integrated Circuit Conference, pp. 403-406, September 2005.

[74] B.P. Ginsburg and A.P. Chandrakasan, “Dual Time-Interleaved Successive Approximation Register ADCs for an Ultra-Wideband Receiver”, IEEE Journal of Solid State Circuits, vol. 42, pp. 247-257, February 2007.

[75] P.H. Le, J. Singh et al. “Ultra-Low-Power Variable-Resolution Successive Approximation ADC for Biomedical

Application”, IEEE Electronics Letters, vol. 41, pp. 634- 635, May 2005.

[76] W.C. Goeke, “Continuously Integrating High-Resolution Analog-to-Digital Converter”, United States Patent # 5117227,

May 1992.

[77] T. Fusayasu, “A Fast Integrating ADC Using Precise Time-to-Digital conversion”, IEEE Nuclear Science Symposium,

pp. 302-304, Honolulu, Hawaii, USA, 2007.

[78] P.M. Figueiredo et al., “A 90nm CMOS 1.2v 6b 1GS/s Two-Step Subranging ADC”, IEEE Int. Solid State Circuit Conf.,

pp. 2320- 2329, 2006.

[79] D.J. Huber et al., “A 10b 160MS/s 84mW 1V Subranging ADC in 90nm CMOS”, IEEE Int. Solid State Circuit Conf.,

2007.

[80] L. Zhen et al., “Low-Power CMOS Folding and Interpolating ADC with a Fully-Folding Technique”, ASICON’07, pp.

265-268, 2007.

[81] X. Zhu et al., “An 8-b 1-GSamples/s CMOS Cascaded Folding and Interpolating ADC”, EDST’07, pp. 177-180, 2007.

[82] L. Jipeng et al., “A 0.9-V 12-mW 5-MSPS Algorithmic ADC with 77-dB SFDR”, IEEE Journal of Solid State Circuits,

vol. 40, pp. 960-969, 2005.

[83] B. Esperanca et al., “Power-and-Area Efficient 14-bit 1.5 MSample/s Two-Stage Algorithmic ADC Based on a

Mismatch-Insensitive MDAC”, ISCAS’08, pp. 220-223, 2008.

[84] B. Murmann et al., “A 12-bit 75-MS/s Pipelined ADC Using Open-Loop Residue Amplification”, IEEE Journal of Solid

State Circuits, vol. 38, pp. 2040-2050, 2005.

[85] K. Gulati et al., “A Highly-Integrated CMOS Analog Baseband Transceiver with 180MSPS 13b Pipelined CMOS ADC

and Dual 12b DACs”, Customs Integrated Circuits Conference, pp. 515-518, Acton, MA, USA, 2005.

[86] L. Williams, “Modeling and Design of High Resolution Sigma-Delta Modulators”, PhD dissertation, Stanford

University, 1993.

[87] D. Welland et al. “A Stereo 16-Bit Delta-Sigma A/D Converter for Digital Audio”, Journal of the Audio Engineering

Society, vol. 37, pp. 476-485, June 1989.

[88] P.M. Aziz et al. “An Overview of Sigma-Delta Converters: How a 1-bit ADC Achieves more than 16-bit Resolution”,

IEEE Signal Processing Magazine, vol. 13, pp. 61-84, 1996.

[89] J.C. Candy and G.C. Temes, “Oversampling methods for A/D and D/A conversion” in Oversampling Delta-Sigma Data

Converters, pp. 1-25, IEEE Press, 1992.

[90] B. Leung, “Theory of Sigma-Delta Analog to Digital Converter,” IEEE International Symposium on Circuits and

Systems, Tutorial, pp. 196-223, 1994.

[91] A. R. Feldman, “High-Speed, Low-Power Sigma-Delta Modulators for RF Baseband Channel Applications,” PhD

dissertation, University of California, Berkeley, 1998.

[92] J. Candy, “A Use of Double Integration in Sigma Delta Modulation,” IEEE Transactions on Communications, pp. 249-

258, March 1985.

[93] Y. Matsuya et al., “A 16-bit Oversampling A/D Conversion Technology Using Triple Integration Noise Shaping,” IEEE

Journal of Solid State Circuits, vol. 22, pp. 921-929, December, 1987.

[94] W. Chou and R.M. Gray, “Dithering and its Effects on Sigma-Delta and Multi Stage Sigma-Delta Modulations”

Proceeding of the International Symposium on Circuits and Systems, pp. 368-371, May, 1990.


[95] E. Allier, “Interface Analogique Numerique Asynchrone : Une Nouvelle Classe de Convertisseurs Bases Sur la Quantification du Temps”, PhD dissertation, Institut National Polytechnique de Grenoble, 2003.

[96] S.C. Sekhar and T.V. Sreenivas, “Adaptive Window Zero-Crossing Based Instantaneous Frequency Estimation”,

EURASIP Journal on Applied Signal Processing, pp.1791-1806, January 2004.

[97] F. Akopyan, R. Manohar and A.B. Aspel, “A Level-Crossing Flash Analog-to-Digital Converter”, ASYNC’06, pp.12-

22, March 2006.

[98] A. Baums, U. Grunde and M. Greitans, “Level-Crossing Sampling Using Microprocessor Based System”, ICSES’08, pp.

19-22, Krakow, Poland, September 2008.

[99] T. Nguyen, “Deterministic Analysis of Oversampled A/D Conversion and Sigma-Delta Modulation and Decoding

Improvements Using Consistent Estimates”, PhD dissertation, Columbia University, 1993.

[100] M. Charbit, “Eléments de Théorie du Signal : les Signaux Aléatoires”, Ellipses, 1990.

[101] J. Sparso and S. Furber, “Principles of Asynchronous Circuit Design: A Systems Perspective”, Springer, 2001.

[102] M. Renaudin, “Asynchronous Circuits and Systems: a Promising Design Alternative”, Journal of Microelectronic

Engineering, vol. 54, pp. 133-149, 2000.

[103] N.C. Paver et al. “Low-Power, Low-Noise Configurable Self-Timed DSP”, Proc. of International Symposium on

Advanced Research in Asynchronous Circuits and Systems, pp. 32–42, 1998.

[104] K.V. Berkel, R. Burgess, J. Kessels et al., “Asynchronous Circuits for Low Power: a DCC Error Corrector”, IEEE

Design and Test, pp. 22-32, 1994.

[105] K.V. Berkel, R. Burgess, J. Kessels et al., “A Single-Rail Re-Implementation of a DCC Error Detector Using a Generic

Standard-Cell Library”, 2nd Working Conference on Asynchronous Design Methodologies, pp. 72-79, May 30-31, London,

1995.

[106] T.E. Williams and M.A. Horowitz, “A Zero-Overhead Self-Timed 160 ns. 54 bit CMOS Divider”, IEEE Journal of

Solid State Circuits, vol. 26, pp. 1651-1661, 1991.

[107] T.E. Williams, N. Patkar and G. Shen, “SPARC64: A 64-b 64-Active instruction Out-of-Order-Execution MCM

processor”, IEEE Journal of Solid State Circuits, vol. 30, pp. 1215-1226, November 1995.

[108] D. Panyaska, “Réduction des Emissions Electromagnétiques des Circuits Intégrés : L’alternative Asynchrone”, PhD dissertation, Institut National Polytechnique de Grenoble, 2004.

[109] R.E. Miller, “Switching Theory: Sequential Circuits and Machines”, John Wiley & Sons, vol. 2, 1965.

[110] C.J. Myers, “Asynchronous Circuit Design”, John Wiley & Sons, 2001.

[111] J.D. Garside et al., “AMULET3i – an Asynchronous System-on-Chip”, Proc. of IEEE International Symposium on

Advanced Research in Asynchronous Circuits and Systems, pp. 162–175, April 2000.

[112] L.R. Rabiner and R.W. Schafer, “Digital Processing of Speech Signals”, Prentice Hall Inc., Englewood Cliffs, New

Jersey, 1978.

[113] B. Kedem, “Time Series Analysis by Higher Order Crossings”, IEEE Press, 1994.

[114] Yu Hen Hu, “Programmable Digital Signal Processors: Architecture, Programming and Application”, Marcel Dekker

Inc., USA, 2002.

[115] R.W. Robert, “The FFT Fundamentals and Concepts”, Prentice-Hall, New Jersey, 1998.

[116] L. Grafakos, “Classical and Modern Fourier Analysis”, Prentice-Hall, 2004.

[117] “Windowing: Optimizing FFTs Using Window Functions”, National Instruments, Tutorial, 2008.

[118] D. Gabor, “Theory of Communication”, Journal of IEE, vol. 93(3), pp. 429-457, 1946.

[119] N.R. Lomb, “Least-Squares Frequency Analysis of Unequally Spaced Data”, Astrophysics and Space Science, vol. 39,

pp. 447-462, 1976.

Saeed Mian Qaisar Grenoble INP 179


Bibliography

[120] W.H. Press et al., “Spectral Analysis of Unevenly Sampled Data”, Numerical Recipes in C++, 2nd Edition, Cambridge

University Press, 2002.

[121] S. de Waele and P.M.T. Broersen, “Time Domain Error Measures for Resampled Irregular Data”, IEEE Transactions on

Instrumentation and Measurement, pp. 751-756, Italy, May 1999.

[122] S. de Waele and P.M.T. Broersen, “Error Measures for Resampled Irregular Data”, IEEE Transactions on

Instrumentation and Measurement, vol. 49, No. 2, April 2000.

[123] S.W. Smith, “Scientists and Engineers Guide to Digital Signal Processing”, 2nd Edition, California Technical

Publishing, 1999.

[124] F.J.M. Barning, “The Numerical Analysis of the Light-Curve of 12 Lacertae”, Bulletin of the Astronomical Institutes of

the Netherlands, pp. 22-28, 1963.

[125] P. Vanicek, “Further Development and Properties of the Spectral Analysis by Least-Squares Fit”, Astrophysical and

Space Sciences, pp. 10-33, 1971.

[126] J.D. Scargle, “Statistical Aspects of Spectral Analysis of Unevenly Spaced Data”, Astrophysical Journal, vol. 263, pp.

835-853, 1982.

[127] F. Harris, “Multirate Signal Processing in Communication Systems”, EUSIPCO’07, September 2007.

[128] M. Vetterli, “A Theory of Multirate Filter Banks”, IEEE Transactions on Acoustics, Speech and Signal Processing, vol.

35, pp. 356-372, March 1987.

[129] S. Chu and C.S. Burrus, “Multirate Filter Designs Using Comb Filters”, IEEE Transactions on Circuits and Systems,

vol. 31, pp. 913-924, November 1984.

[130] Z. Duan, J. Zhang, C. Zhang and E. Mosca, “A Simple Design Method of Reduced-Order Filters and its Application to

Multirate Filter Bank Design”, Elsevier Journal of Signal Processing, vol. 86, pp. 1061-1075, 2006.

[131] J.E. Purcell, “Multirate Filter Design: An Introduction”, Multimedia System Design Magazine, pp. 32-40, 1998.

[132] P.P. Vaidyanathan, “Multirate Systems and Filter Banks”, Prentice-Hall, Englewood Cliffs, New Jersey, 1993.

[133] D.M. Klamer and E. Masry, “Polynomial Interpolation of Randomly Sampled Band-Limited Functions and Processes”,

SIAM Review, vol. 42, No. 5, pp. 1004-1019, October 1982.

[134] A.V. Oppenheim, A.S. Willsky and I.T. Young, “Signals and Systems”, Prentice-Hall, 1995.

[135] R. Polikar, “The Engineer’s Ultimate Guide to Wavelet Analysis”, Rowan University, College of Engineering,

retrieved in June 2006.

[136] M. Vetterli et al., “Wavelets and Filter Banks: Theory and Design”, IEEE Transactions on Signal Processing, vol. 40,

pp. 2207-2232, 1992.

[137] Ingrid Daubechies, “Ten Lectures on Wavelets”, Society for Industrial and Applied Mathematics, United States, June

1992.

[138] B. C. Baker, “What does the ADC SNR Mean”, Microchip Technology Inc., Technical Note, May 2004.

[139] R. Shavelis and M. Greitans, “Spline-Based Signal Reconstruction Algorithm from Multiple Level Crossing Samples”,

SampTA’07, June 2007.

[140] F. B. Hildebrand, “Introduction to Numerical Analysis”, McGraw-Hill, 1956.

[141] P.G. Fontolliet, “Systèmes de Télécommunications”, Dunod, 1983.

[142] T.F. Quatieri, “Discrete-Time Speech Signal Processing: Principles and Practice”, Prentice-Hall Signal Processing

Series, 2001.

[143] G. Porwal, H.A. Patil and T.K. Basu, “Effect of Speech Coding on Text-Independent Speaker Identification”,

ICISIP’05, pp. 415-420, January 2005.


[144] C.W. Brokish and M. Lewis, “A-Law and mu-Law Companding Implementations Using the TMS320C54X”, Texas

Instruments, Application Note: SPRA163A.

[145] Saeed M. Qaisar, L. Fesquet and M. Renaudin, “A Signal Driven Adaptive Resolution Short-Time Fourier Transform”,

International Journal of Signal Processing, vol. 5, No. 3, pp. 180-188, 2009.

[146] Saeed M. Qaisar, L. Fesquet and M. Renaudin, “Signal Driven Sampling and Filtering: A Promising Approach for

Time Varying Signals Processing”, International Journal of Signal Processing, vol. 5, No. 3, pp. 189-197, 2009.

[147] R. N. Czerwinski and D. L. Jones, “Adaptive Short-Time Fourier Analysis”, IEEE Signal Processing Letters, vol. 4,

No. 2, February 1997.

[148] H. K. Kwok and D. L. Jones, “Improved Instantaneous Frequency Estimation Using an Adaptive Short-Time Fourier

Transform”, IEEE Transactions on Signal Processing, vol. 48, No. 10, October 2000.

[149] I. Djurovic and L. Stankovic, “Adaptive Windowed Fourier Transform”, Elsevier Signal Processing, vol. 83,

pp. 91-100, 2003.


Echantillonnage et Traitement Conditionnes par le Signal : Une Approche Prometteuse Pour des Traitements Efficace à Pas Adaptatifs.

par

Saeed MIAN QAISAR


EXTENDED SUMMARY IN FRENCH

Context of the Study

The ongoing modernization of mobile systems is making them essential elements of our lives. The goal of providing better services to demanding users drives ever greater sophistication in the mobile domain, and realizing these goals requires ever more processing resources. Meeting growing requirements on size, cost, processing noise, electromagnetic emissions and, in particular, power consumption (mobile systems are most often battery powered) has become a difficult challenge for industry. Most efforts toward these goals focus on improving embedded-system design, process technology and battery technology; very few studies attempt to exploit the non-stationary nature of the input signal. The work proposed here is a contribution to the development of smart mobile systems. It aims at efficient systems through the intelligent adaptation of their parameters to the local characteristics of the input signal. This objective can be reached by an intelligent reorganization of signal processing theory and of the associated architectures of mobile systems: the idea is to combine input-signal-driven processing with clockless circuit design, in order to reduce the processing activity and the power consumption of the system. Almost all natural signals, such as speech, seismic and biomedical signals, are non-stationary. Man-made signals such as Doppler, ASK (Amplitude Shift Keying) and FSK (Frequency Shift Keying) signals fall into the same category. The spectral content of these signals varies with time, as a direct consequence of the signal generation process [80].
Classical systems are based on Nyquist architectures. They do not exploit the local variations of the signal: they acquire and process it at a fixed rate, regardless of the intrinsic nature of the input. Moreover, they are strongly constrained by Shannon's theory, especially for sporadic signals with low activity such as electrocardiograms, phonocardiograms or seismic signals. The energy efficiency of the system can be improved by intelligently adapting the processing load to the local variations of the signal. To this end, a signal-driven sampling scheme based on level crossing is employed. The LCSS (Level Crossing Sampling Scheme) [71] adapts the sampling rate to the local characteristics of the input signal [79, 80]. It therefore significantly reduces the activity of the post-processing chain, since it captures only the relevant information [58-67]. In this context, LCADCs (level-crossing analog-to-digital converters) have been developed [52-57]. In [52-97], the authors have shown the advantages of LCADCs over classical converters: reduced activity, energy savings, reduced electromagnetic emissions and reduced processing noise. Motivated by these attractive features, LCADCs are used for the analog-to-digital conversion of the input signal in the proposed approach.


The data obtained with the LCADC are non-uniformly spaced in time and therefore cannot be processed or analyzed with classical techniques [1, 2]. In recent years, several valuable studies have addressed the processing and analysis of the non-uniformly sampled signals obtained with LCADCs; examples are [13, 80, 83, 84]. They show that the LCADC output can be used directly for further non-uniform digital processing. In this doctoral thesis, however, the non-uniformities of the sampling process, which carry information on the local characteristics of the signal, are used to select only the active parts of the signal. The characteristics of each selected part are then analyzed and employed to adapt the system parameters. This process of selection and extraction of local characteristics is called the ASA (Activity Selection Algorithm) [58, 62]. The selected signal obtained with the ASA is uniformly resampled before any further processing or analysis. Resampling acts as a bridge between non-uniform and uniform signal processing tools, allowing the proposed solutions to draw on attractive features from both sides. The LCADC, the ASA and the resampler are the fundamental building blocks of the proposed solutions: together they form the basis of the proposed approach, which performs acquisition, activity selection and extraction of local parameters. On this basis, smart signal processing and analysis tools are developed [58-67]. The proposed solutions are non-stationary in nature: they adapt their processing load and system parameters, such as the effective resolution, the sampling frequency and the time-frequency resolution, to the characteristics of the input signal. This is achieved with an intelligent blend of non-uniform and uniform signal processing tools, which promises a drastic computational gain over classical approaches while delivering results of suitable quality.

Organization

This thesis is mainly divided into three parts. The first part, non-uniform signal acquisition, consists of Chapters 2 to 4. The second part, the proposed techniques, contains Chapters 5 to 8. The third part, design and performance evaluation, consists of Chapters 9 and 10. Finally, some concluding remarks and perspectives are presented. The details of each part are discussed in the following.


Part I: NON-UNIFORM SIGNAL ACQUISITION

Non-Uniform Sampling

Real-world signals are analog by nature. It is often necessary to process them in order to reach certain objectives. The processing can be performed directly with analog electronics, but most of the time these signals are digitized, enabling digital processing. This transformation has many advantages and is generally preferred [1, 2]. Digitization essentially consists of two elementary processes: sampling and quantization [1, 2]. Chapter 2 deals with the sampling procedure, which converts an analog signal into its discrete representation. In the time domain, it is obtained by multiplying the continuous-time signal x(t) with a sampling function sF(t). Following [5-8], the generalized model of sF(t) is given by Equation 2.1.

sF(t) = Σ_{n=−∞}^{+∞} δ(t − tn)    (2.1)

Here, δ(t − tn) is the Dirac delta function and {tn} is the sequence of sampling instants. A sampled signal xs(t) can thus be represented by Equation 2.2.

xs(t) = x(t) · Σ_{n=−∞}^{+∞} δ(t − tn) = Σ_{n=−∞}^{+∞} x(tn) δ(t − tn)    (2.2)

In the frequency domain, sampling is the convolution of the spectrum of the analog signal with that of the sampling function. If SF(f) is the Fourier transform of sF(t), it can be represented by Equation 2.3.

SF(f) = Σ_{n=−∞}^{+∞} e^{−j2πf·tn}    (2.3)

Finally, the Fourier transform of the sampled signal, Xs(f), can be represented by Equation 2.4.

Xs(f) = X(f) * SF(f)    (2.4)

Here, X(f) is the spectrum of the analog input signal. The sampling process is directly shaped by the characteristics of {tn}: depending on the distribution of {tn}, sampling falls into one of two categories, uniform or non-uniform. Sampling theory has a long history and a rich literature; examples are [3-13]. The purpose of this chapter is not to survey the whole field. Rather, the principle of sampling in general, and of non-uniform sampling in particular, is briefly presented. Some well-known sampling processes are briefly discussed, and the relationships between them are shown. The use of non-uniform sampling as a tool is also described: depending on the targeted application, an appropriate choice of sampling process can bring remarkable advantages. The objective of this thesis is to achieve smart mobile processing, resulting in efficient mobile systems. In this context, the LCSS is employed in the proposed approach [26, 29-31], and the reasons for this choice are explained. The sampling criterion that guarantees proper reconstruction of the LCSS-sampled signal is also discussed.
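Equations 2.2 to 2.4 can be checked numerically for a finite set of sampling instants: under the Fourier transform, the sampled signal becomes the finite sum Σn x(tn) e^{−j2πf·tn}. A minimal sketch (the tone frequency, the random instants and the frequency grid are illustrative values, not taken from the thesis):

```python
import numpy as np

# Analog test signal: a single 50 Hz tone (illustrative).
f0 = 50.0
x = lambda t: np.cos(2 * np.pi * f0 * t)

# Non-uniformly spaced sampling instants over 1 s.
rng = np.random.default_rng(0)
tn = np.sort(rng.uniform(0.0, 1.0, 400))

# Equation 2.2 under the Fourier transform: Xs(f) = sum_n x(tn) e^{-j 2 pi f tn}.
freqs = np.arange(0.0, 200.0, 1.0)
Xs = np.array([np.sum(x(tn) * np.exp(-2j * np.pi * f * tn)) for f in freqs])

# Despite the irregular instants, |Xs(f)| peaks at the tone frequency.
print(freqs[np.argmax(np.abs(Xs))])   # → 50.0
```

This illustrates why non-uniform samples still carry the spectral information of x(t), even though the classical FFT cannot be applied to them directly.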

Analog-to-Digital Conversion

DSP (Digital Signal Processing) has many advantages over analog processing [1, 2, 14]. Consequently, with recent advances in technology, most signal processing tasks have been transferred from the analog to the digital domain [1, 2]. The ADC (Analog to Digital Converter) is an essential component of a DSP system and has a major impact on the performance of the system as a whole: a smart ADC can lead to an efficient solution, and vice versa [8]. The growing sophistication of recent applications such as software radio, sensor networks, autonomous control and bioinformatics calls for smart solutions, and several advances have accordingly been made in the field of A/D (Analog to Digital) conversion. The error introduced by this transformation is discussed, several figures of merit for ADC performance are described, and the main characteristics of the different ADC architectures are presented. ADC performance trends over recent years are also discussed.

Level-Crossing Analog-to-Digital Conversion

A/D conversion driven by the input signal itself is well suited to acquiring non-stationary signals [8, 52, 53]. Signal-driven acquisition means that a sample is captured only when the measured signal fulfils predefined conditions; most often, these conditions are crossings of predetermined reference levels. Such a scheme is called level-crossing A/D conversion [51-57]. It adapts the acquisition rate to the local variations of the input signal and therefore reduces the amount of data to process compared with classical converters, making efficient use of system resources such as memory, energy and transmission bandwidth [58-67]. The LCSS concept is not new: it has been known at least since the 1950s [68], and it is also known as an event-based sampling scheme [69]. In the LCSS, a sample is captured only when the analog input signal x(t) crosses one of the predefined thresholds. The samples are not uniformly spaced in time, since they depend on the variations of x(t), as is clear from Figure 4.1. The set of levels is chosen so that it covers the amplitude range Δx(t) of the analog signal. Figure 4.1 shows one possibility, with equidistant thresholds separated by a quantum q. The thresholds may also be spaced logarithmically or with another distribution [42], and may even be placed in a non-stationary manner [8].
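The level-crossing principle can be sketched as follows. The floor-based crossing detector and the test signal are simplifying assumptions: a real LCADC detects crossings in continuous time, typically with hysteresis, whereas this sketch quantizes the crossing instants to a dense simulation grid.

```python
import numpy as np

def level_crossing_sample(t, x, q):
    """Signal-driven sampling sketch: keep a sample each time the signal
    enters a new cell of the equidistant level grid {k*q} (cf. Figure 4.1)."""
    cells = np.floor(x / q)                    # index of the level just below x
    idx = np.nonzero(np.diff(cells))[0] + 1    # grid points where a level was crossed
    return x[idx], t[idx]

# Dense "analog" signal: a 20 Hz burst followed by silence.
t = np.linspace(0.0, 1.0, 100_000)
x = np.where(t < 0.5, np.sin(2 * np.pi * 20.0 * t), 0.0)

q = 0.125                                      # quantum of a 4-bit-like converter
xn, tn = level_crossing_sample(t, x, q)

# Samples are produced only while the signal is active.
print(len(tn) > 0, tn.max() <= 0.501)          # → True True
```

The silent half of the signal produces no samples at all, which is exactly the data reduction the post-processing chain benefits from.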

[Figure 4.1: the signal x(t) crosses equidistant levels separated by a quantum q; each crossing yields a sample of amplitude xn at instant tn, and dtn is the delay since the previous sample (xn-1, tn-1).]

Figure 4.1. The level-crossing sampling scheme.

In the LCSS, each sample is an amplitude-time pair (xn, tn). The amplitude xn is exactly equal to one of the levels, and the instant tn is computed with Equation 4.1.

tn = tn-1 + dtn    (4.1)

In Equation 4.1, tn is the current sampling instant, tn-1 is the previous sampling instant, and dtn is the time elapsed between them. For initialization, t1 and dt1 are set to zero, and the subsequent sampling instants {tn} of the LCSS are computed with Equation 4.1. Mobile applications such as distributed sensor networks and portable electronics have limited resources and require smart systems [97]. In this context, several efficient LCSS-based solutions have been proposed [52, 54, 55, 58-67, 71-80, 82-84]. LCADCs are good candidates for mobile applications: compared with classical converters, they are efficient in terms of power consumption, electromagnetic emission, processing noise, circuit complexity and area [51-57]. The purpose of Chapter 4 is to briefly describe the main features of LCADCs. The theory associated with LCADCs is quite different from that of classical converters, so its main concepts are studied, and the SNR (Signal to Noise Ratio) expression for LCADCs is derived. Asynchronous design is a natural choice for implementing LCADCs [55-57]; in this context, the principle of asynchronous circuits is briefly described. Some interesting LCADC realizations are also reviewed, and a comparison of LCADCs with oversampled converters is made.
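Equation 4.1 also means that the sampling instants can be recovered from the timer delays alone, by accumulation. A one-line illustration (the delay values are arbitrary examples):

```python
import numpy as np

# Delays dt_n measured by the LCADC timer (arbitrary example values, dt_1 = 0).
dtn = np.array([0.0, 0.004, 0.002, 0.009, 0.003])

# Equation 4.1 unrolled: t_n = t_{n-1} + dt_n with t_1 = 0 is a cumulative sum.
tn = np.cumsum(dtn)
print([round(v, 3) for v in tn])   # → [0.0, 0.004, 0.006, 0.015, 0.018]
```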


Part II: THE PROPOSED TECHNIQUES

Activity Selection

The contribution of this thesis begins with Chapter 5. As stated earlier, the motivation of this work is to improve signal processing tools in order to achieve efficient mobile systems. The previous chapters argued that the LCSS is a good choice for mobile applications [52, 71-78]; in this context, LCADCs are employed for data acquisition in the proposed solutions. The signal obtained with an LCADC is non-uniformly spaced in time and therefore cannot be processed or analyzed with classical techniques [1, 2]. Windowing is a necessary operation during data acquisition, required to meet the practical implementation constraints of the system [1, 2, 85]. The windowing of LCSS-sampled signals is not mature in the existing literature. Chapter 5 presents two novel techniques for windowing the active parts of the non-uniformly sampled signal. Their sequence is shown in Figure 5.1.

[Figure 5.1: block diagram; the analog signal y(t) is band-pass filtered to [Fmin; Fmax], the filtered analog signal x(t) feeds the LCADC, and the non-uniformly sampled signal (xn, tn) goes through the activity selection (window Wi), which delivers the selected signal (xs, ts) together with its local parameters.]

Figure 5.1. Sequence of the activity-selection process.

The principle of the proposed solutions is to use the time intervals between consecutive sampling instants, defined as follows [71].

dtn = tn − tn-1    (5.1)

Here, tn is the current sampling instant, tn-1 is the previous sampling instant, and dtn is the delay between them. According to [58, 71], dtn is a function of the time variations of the input signal: for a signal part with a high slope, the dtn values are smaller, and vice versa. Using the dtn values, the following activity-selection algorithms are proposed. The ASA (Activity Selection Algorithm) selects the relevant parts of the non-uniformly sampled signal obtained with the LCADC. This selection process corresponds to rectangular windowing of adaptive length: it defines a series of selected windows over the whole signal duration. The activity-selection capability is extremely important for reducing the processing activity and hence the power consumption of the proposed system [58-67]. Indeed, in the proposed case, no processing is performed during the inactive parts of the signal, which is one of the reasons for the computational gain obtained over the classical case. The ASA is defined as follows.


While (dtn ≤ T0/2 and Li ≤ Lref)
    Li = Li + dtn;
    Ni = Ni + 1;
end

Here, dtn is given by Equation 5.1 and T0 = 1/fmin is the fundamental period of the signal x(t). Together, T0 and dtn detect the active parts of the non-uniformly sampled signal: if the measured delay dtn exceeds T0/2, x(t) is considered inactive. The condition dtn ≤ T0/2 is chosen to guarantee the Nyquist sampling criterion for fmin. Lref is the reference window length in seconds. Its choice depends on the characteristics of the input signal and on the system resources: the upper bound on Lref is set by the maximum number of samples the system can process at once, while the lower bound is set by the condition Lref ≥ T0, which must be respected in order to obtain a proper spectral representation [58, 86, 87]. Li is the duration in seconds of the ith selected window Wi, and Lref is its upper bound. Ni is the number of non-uniform samples carried by Wi, which covers the jth active part of the non-uniformly sampled signal. Here, i and j belong to the set of natural numbers N*.

The loop above restarts for every window appearing during the observation of x(t). Before each new loop starts, i is incremented and Ni and Li are reset to zero. The maximum number of samples Nmax that can occur within a chosen Lref is given by the following relation.

Nmax = Lref · Fsmax    (5.2)
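The ASA loop above can be sketched in software. The window bookkeeping below (index lists, closing a window on the first long delay) is one illustrative reading of the algorithm, not the thesis implementation:

```python
import numpy as np

def asa(xn, tn, f_min, L_ref):
    """Activity Selection Algorithm sketch.  A window W_i grows while the
    inter-sample delay dt_n <= T0/2 (signal active) and its duration
    L_i <= L_ref; a longer delay closes the window (inactive signal)."""
    T0 = 1.0 / f_min
    windows, current = [], [0]
    for n in range(1, len(tn)):
        dtn = tn[n] - tn[n - 1]              # Equation 5.1
        Li = tn[n] - tn[current[0]]          # duration if sample n is accepted
        if dtn <= T0 / 2 and Li <= L_ref:
            current.append(n)
        else:
            if len(current) > 1:
                windows.append(current)      # keep only windows with activity
            current = [n]
    if len(current) > 1:
        windows.append(current)
    return [(xn[w], tn[w]) for w in windows]

# Two activity bursts separated by a long silence give two selected windows.
xn = np.ones(7)
tn = np.array([0.00, 0.01, 0.02, 0.03, 1.00, 1.01, 1.02])
wins = asa(xn, tn, f_min=10.0, L_ref=0.5)
print(len(wins))   # → 2
```

With fmin = 10 Hz, T0/2 = 50 ms: the 0.97 s gap exceeds it and splits the samples into two adaptive-length rectangular windows, and the silent interval itself generates no processing at all.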

The EASA (Enhanced Activity Selection Algorithm) is a modified version of the ASA [62, 63]. The main difference between the ASA and the EASA is the choice of the upper bound on the selected window length: the ASA bounds the duration in seconds, whereas the EASA bounds the number of samples. The EASA is described as follows.

While (dtn ≤ T0/2 and Ni ≤ Nref)
    Ni = Ni + 1;
end

As with the ASA, the active parts of the non-uniformly sampled signal are detected using T0 and dtn. Ni is the number of non-uniform samples carried by Wi, and Nref is its upper bound. The choice of Nref depends on the characteristics of the input signal and on the system parameters [62, 63]. The loop above restarts for every window appearing during the observation of x(t); before each new loop starts, i is incremented and Ni is reset to zero. To obtain a proper spectral representation, the condition Li ≥ T0 must be respected [58, 86]. To satisfy this condition in the worst case, which occurs for Fsmax (cf. Equation 5.6), Nref is computed for a suitable window length LC satisfying LC ≥ T0. Nref is then given by Equation 5.3.

Nref = LC · Fsmax    (5.3)
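The EASA stopping rule can be sketched for a single window, growing it until either the delay test or the sample budget Nref fails. The helper name and the half-open window convention are illustrative choices:

```python
def easa_window(tn, start, f_min, N_ref):
    """EASA sketch: from index `start`, grow one window while the delay
    dt_n <= T0/2 (activity) and the sample count N_i <= N_ref.
    Returns the half-open index range [start, stop) of the window."""
    T0 = 1.0 / f_min
    n, Ni = start + 1, 1
    while n < len(tn) and (tn[n] - tn[n - 1]) <= T0 / 2 and Ni <= N_ref:
        Ni += 1
        n += 1
    return start, n

# With T0/2 = 0.05 s, the sample budget N_ref = 3 closes the window
# before the 0.46 s silent gap is even reached.
tn = [0.00, 0.01, 0.02, 0.03, 0.04, 0.50]
print(easa_window(tn, 0, f_min=10.0, N_ref=3))   # → (0, 4)
```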

Note that, like Lref in the ASA, the choice of LC is constrained by the fundamental period T0 of the input signal and by the system resources. The discussion above describes the principles of the ASA and the EASA. A major distinction between them is the way they react to instantaneous variations of the input signal frequency; this phenomenon is addressed in Chapter 5, along with other interesting features of the proposed techniques. One of these features is the extraction of the sampling frequency of each selected window [58, 62]. Let Fsi denote the average sampling frequency of Wi; it is computed with the following equations.

Li = tmaxi − tmini    (5.4)

Fsi = Ni / Li    (5.5)

Here, tmaxi and tmini are the last and first sampling instants of Wi. Fsi can thus be specific to each window, depending on Li and on the slope of x(t) within the span of that window [58, 62]. The upper and lower bounds on Fsi are set by Fsmax and Fsmin respectively, defined as follows.

Fsmax = 2 · fmax · (2^M − 1)    (5.6)

Fsmin = 2 · fmin · (2^M − 1)    (5.7)

Here, fmax and fmin are the maximum and minimum frequencies of the input signal, Fsmax and Fsmin are the maximum and minimum sampling frequencies of the LCADC, and M is the LCADC resolution in bits. Further features common to the ASA and the EASA are that they select only the active parts of the signal sampled by the LCADC, and that they correlate the length of each selected window with the length of the signal activity it carries [58-67].
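Equations 5.4 to 5.7 translate directly into code; the band limits and resolution below are arbitrary example values:

```python
def window_rate(tn_window):
    """Average sampling frequency of one selected window:
    Fs_i = N_i / L_i with L_i = t_max - t_min (Equations 5.4 and 5.5)."""
    Li = tn_window[-1] - tn_window[0]
    Ni = len(tn_window)
    return Ni / Li

def lcadc_rate_bounds(f_min, f_max, M):
    """Sampling-frequency bounds of an M-bit LCADC (Equations 5.6 and 5.7)."""
    Fs_min = 2.0 * f_min * (2 ** M - 1)
    Fs_max = 2.0 * f_max * (2 ** M - 1)
    return Fs_min, Fs_max

# A window of 4 samples spanning 1 s has an average rate of 4 Hz.
print(window_rate([0.0, 0.25, 0.5, 1.0]))      # → 4.0
# A 3-bit LCADC acquiring a [50 Hz; 500 Hz] band (illustrative values):
print(lcadc_rate_bounds(50.0, 500.0, 3))       # → (700.0, 7000.0)
```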


In the classical case, the windowing process cannot select only the active parts of the sampled signal. Moreover, the window length remains static and cannot adapt to the length of the signal activity it carries. Because of this time-invariant nature of classical sampling and windowing, the system has to process more data than the relevant information contained in x(t) warrants. The performance of the proposed techniques is also illustrated with a case study, which shows that an intelligent use of these techniques yields efficient solutions compared with the classical approach [58-67].

Spectral Analysis of the Non-Uniformly Sampled Signal

Analyzing the data obtained at the system output is key to characterizing its performance [16-18], and frequency transformation is often employed for this purpose [81]. In this context, the concept of spectral analysis is briefly described and the phenomenon of spectral leakage is discussed in Chapter 6. The trend toward non-uniform sampling keeps growing in many recent applications [8, 10-13, 51-67, 72-80, 82-84]. The analysis of non-uniformly sampled signals is a rapidly evolving field to which many important contributions have been made; examples are [87-90]. The GDFT (General Discrete Fourier Transform) and Lomb's algorithm are the tools most commonly employed for analyzing non-uniformly sampled signals [58, 87]; their main characteristics are briefly described in Chapter 6. The objective of this thesis is to achieve efficient mobile systems. Being a signal-driven process, the LCSS is well suited to mobile applications [51-53, 55-57, 58-67, 71-80, 82-84]; in the proposed solutions, signal acquisition is therefore performed with LCADCs [58-67]. To properly analyze the LCSS-sampled signal, an efficient solution is proposed, built on an intelligent combination of non-uniform and uniform signal processing tools. The proposed technique consists of three main steps. The first step selects the relevant parts of the non-uniformly sampled signal, using the ASA (Activity Selection Algorithm). The second step uniformly resamples the data carried by each selected window; this can be done with an interpolation method whose choice depends on the application. Finally, the third step computes the FFT (Fast Fourier Transform) of each resampled block to obtain its spectrum. The proposed system is illustrated in Figure 6.1: the non-uniformly sampled signal at the LCADC output is selected by the ASA, and the characteristics of each selected part are analyzed and then used to adapt the system parameters, such as the resampling frequency and the window shape.

Saeed Mian Qaisar Grenoble INP 191

Résumé étendu en français

Each selected window obtained with the ASA can have a specific sampling frequency Fsi (cf. Equation 5.5). In the proposed system, a reference sampling frequency Fref is chosen so that it remains greater than and as close as possible to the Nyquist sampling frequency FNyq. This condition is formally expressed as follows.

Fref ≥ FNyq = 2·fmax    (6.1)

Here, fmax is the maximum frequency of the input signal. The selected signal is uniformly resampled before further processing. The resampling frequency Frsi for Wi is chosen according to the values of Fsi and Fref. Once the resampling is done, there are Nri samples in Wi. The choice of Frsi is crucial, and the selection procedure is detailed in the following.

[Figure 6.1 block diagram: the band-limited analog signal x(t) is digitized by the LCADC into the non-uniformly sampled signal (xn, tn); the ASA selects the signal (xs, ts) and extracts the local parameters for Wi, including the sampling frequency (Fsi); comparators against the reference sampling frequency (Fref) and the reference parameters yield the resampling frequency (Frsi) and the window decision (Di), which drives the window selector (wn, 1/0); the resampler produces the uniformly sampled signal (xrn), which is passed to the FFT to give X(fi).]

Figure 6.1. Block diagram of the spectral analysis system.

For the case Fsi > Fref, Frsi is chosen as Frsi = Fref. This is done in order to resample the selected data lying in Wi close to the Nyquist frequency. It reduces the computational load of the proposed system in two ways: first, by avoiding useless data interpolations during the resampling procedure, and second, by avoiding the spectrum computation of useless samples [58]. For the case Fsi ≤ Fref, Frsi is chosen as Frsi = Fsi. In this case, it may appear that the data lying in Wi are resampled at a frequency lower than the Nyquist frequency of x(t). According to [52, 59, 60], if the amplitude of the signal x(t) spans the range 2Vmax, then for a suitable choice of the LCADC resolution M (application dependent), the signal crosses enough consecutive thresholds. Consequently, it is locally over-sampled in time with respect to its local bandwidth, so no aliasing problem occurs. The resampling is performed using an interpolation process. Interpolation changes the properties of the resampled signal with respect to the original one, and the interpolation error depends on the technique used for resampling [91, 92]. Being simple and robust, the NNRI (Nearest Neighbour Resampling Interpolation) method is employed for data resampling. In Figure 6.1, the window selector block implements the condition given by Expression 6.2. The output of this block is the decision Di, which controls the switch state for Wi.

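The NNRI resampling step described above can be sketched in a few lines; this is a minimal illustration with a list-based representation of the non-uniform samples (the function name and the example data are ours, not from the thesis).

```python
def nnri_resample(xs, ts, frs):
    """Nearest Neighbour Resampling Interpolation (NNRI): for each
    uniform instant t = t0 + n/frs, keep the value of the closest
    non-uniform sample. Simple and robust, at the price of a larger
    interpolation error than higher-order methods."""
    t0, t_end = ts[0], ts[-1]
    n_out = int((t_end - t0) * frs) + 1
    out = []
    j = 0
    for n in range(n_out):
        t = t0 + n / frs
        # advance j while the next non-uniform sample is closer to t
        while j + 1 < len(ts) and abs(ts[j + 1] - t) <= abs(ts[j] - t):
            j += 1
        out.append(xs[j])
    return out

# Non-uniform samples of a slowly varying signal (dense where it is active)
ts = [0.0, 0.05, 0.3, 0.35, 0.4, 0.9, 1.0]
xs = [0.0, 0.1, 0.5, 0.6, 0.7, 0.2, 0.0]
xr = nnri_resample(xs, ts, frs=10.0)  # uniform resampling at 10 Hz
```

The nearest-neighbour rule makes the interpolation a simple sample-and-hold around each uniform instant, which is why the thesis favours it for low-complexity mobile processing.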

Di = 1, if (Li ≤ Lref) and (t1i − tendi−1 ≥ T); Di = 0, otherwise.    (6.2)

In Expression 6.2, Li is the length of window Wi in seconds. Lref is the reference window length in seconds, chosen as a function of the input signal characteristics and the system resources [58]; Lref sets the upper bound on Li. t1i and tendi−1 represent the first and the last sampling instants of the ith and the (i−1)th selected windows, respectively. Jointly, the ASA and the window selector provide an effective means of spectral leakage reduction. Usually, a cosine window function is employed to reduce the signal truncation problem. In the proposed case, as long as condition 6.2 holds, the leakage problem is resolved by avoiding signal truncation altogether [58]. Since no signal truncation is performed, no cosine window is needed; in this case, Di is set to 1. Otherwise, a cosine window is employed to reduce the signal truncation problem, and Di is set to 0. The interesting features of the proposed technique are illustrated with the help of an example. It is compared with the GDFT and the Lomb algorithm, in terms of spectrum quality and computational complexity. The results show that the proposed technique yields a drastic gain in computational complexity while delivering better-quality results. The proposed approach is attractive for non-stationary signals. It is particularly well suited to signals that remain constant most of the time and vary only sporadically. For this class of signals, the proposed technique can achieve a drastic computational gain over classical approaches. This is achieved thanks to its interesting features, namely activity selection, local feature extraction, and efficient spectral representation. Together they drastically reduce the total number of operations and, consequently, the energy consumption compared with the classical case.
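The window-selector decision can be sketched as below. This is a hedged illustration: the condition on Li, Lref and the gap between consecutive selected windows is partly reconstructed from the surrounding text, and the gap threshold T is treated as a plain number.

```python
def window_decision(L_i, L_ref, t1_i, tend_prev, T):
    """Sketch of the window selector: Di = 1 (no cosine window needed)
    when the selected window is shorter than the reference length Lref
    and separated from the previous window, i.e. the activity was
    captured whole and no signal truncation occurred; Di = 0 otherwise,
    and a cosine window must be applied before the FFT."""
    if L_i <= L_ref and (t1_i - tend_prev) >= T:
        return 1  # rectangular window: leakage avoided by construction
    return 0      # truncated activity: apply a cosine window

d = window_decision(L_i=0.5, L_ref=1.0, t1_i=2.0, tend_prev=1.0, T=0.2)
```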

Signal-Driven Adaptive Sampling Rate Filtering

Classical filtering techniques are time invariant: they process the input signal with a fixed-order filter operating at a fixed sampling frequency [1, 2, 14]. Since most real-life signals are non-stationary, the time-invariant nature of classical filtering causes a useless increase in processing activity [59-61]. This drawback can be resolved to some extent by employing multirate filtering techniques [94-98]. Following the multirate filtering principle, adaptive rate filtering techniques are developed [59-61, 64, 65, 67]. The term adaptive rate points to the characteristic feature of the proposed techniques, which is to correlate the system parameters with the local variations of the input signal. This feature is obtained by intelligently combining non-uniform and uniform signal processing tools.


The LCADC, the ASA, and the resampler form the basis of the proposed approach (cf. Figure 7.1). The analog input signal x(t) is acquired with the LCADC. The non-uniformly sampled signal obtained with the LCADC can be used directly for digital filtering [13, 83]. In the proposed case, however, the non-uniformity of the LCSS-sampled signal is exploited:

- to select only the relevant parts of the non-uniformly sampled signal;
- to extract the local characteristics of the input signal and adapt the system parameters accordingly.

The activity selection process and the local parameter extraction are together referred to as the ASA. Finally, the selected signal is uniformly resampled before proceeding to a classical filtering operation. In combination, the LCADC, the ASA, and the resampler achieve the following:

- adaptive rate sampling (only the relevant samples are processed);
- adaptive rate filtering (only the relevant operations are performed to deliver each filtered sample).

Achieving the objectives defined above ensures a drastic computational gain of the proposed techniques over classical filtering techniques. The steps to achieve them are detailed in the following.
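The two adaptation mechanisms above rest on a simple rule for the per-window resampling frequency. The following sketch (with our own illustrative helper names and rounding) shows how Frsi and the resulting sample count Nri follow the local sampling frequency Fsi.

```python
def choose_frs(fs_i, f_ref):
    """Resampling-frequency rule shared by the proposed techniques:
    resample near the reference (close-to-Nyquist) rate when the local
    rate exceeds it; otherwise keep the local rate, since level-crossing
    sampling locally over-samples the signal w.r.t. its local bandwidth."""
    return f_ref if fs_i >= f_ref else fs_i

def resampled_count(L_i, frs_i):
    """Number of uniform samples Nri lying in a window of Li seconds
    once it has been resampled at Frsi (illustrative rounding)."""
    return int(L_i * frs_i) + 1

frs = choose_frs(fs_i=12000.0, f_ref=8000.0)   # active window: clamp to Fref
nri = resampled_count(L_i=0.25, frs_i=frs)     # samples to actually filter
```

Only these Nri samples per selected window reach the FIR stage, which is where the computational gain over a fixed-rate filter comes from.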

[Figure 7.1 block diagram: the band-pass filter [fmin; fmax] limits the analog signal y(t) to the band-pass-filtered analog signal x(t); the LCADC produces the non-uniformly sampled signal (xn, tn); the ASA selects the signal (xs, ts) and extracts the local parameters for Wi; the parameters adaptor combines them with the reference parameters and the reference FIR filters bank (h1k, h2k, ..., hQk) to derive the adapted parameters and the adapted FIR filter for Wi; the resampler delivers the uniformly sampled signal (xrn, trn), which the adapted FIR filter turns into the filtered signal (yn).]

Figure 7.1. Block diagram of the proposed filtering techniques.

The process of adapting the sampling rate according to the input signal is similar in all the proposed techniques. It is achieved by employing the interesting features of the LCADC and of the ASA [58-61, 64, 65, 67]. The sampling frequency Fsi for window Wi can be computed using Equation 5.5. In order to use a classical filtering algorithm, the selected signal lying in Wi is uniformly resampled before the filtering stage (cf. Figure 7.1). The local characteristics of the selected signal part are used to choose its resampling frequency Frsi. Once the resampling is done, there are Nri samples in Wi. The choice of Frsi is crucial, and the selection procedure is detailed in the following. The way the adaptive rate filtering is achieved is distinct in each proposed technique, and the procedure for each one is detailed in the following parts.

The ARCD (Activity Reduction by Chosen Filter Decimation) technique: In this case, a bank of reference filters is designed offline for a specific application by exploiting the statistics of x(t). During the reference filter design, the worst case is taken into account; here, the worst case refers to the maximum possible sampling frequency in the system, Fsmax (cf. Equation 5.6). During online processing, a suitable filter for Wi is chosen from the precomputed reference filters bank. This choice is made on the basis of Fref and the effective value of Fsi. We introduce here the index notation c to distinguish the chosen reference filter from the set of reference filters. The reference filter whose corresponding value of Frefc is closest to and greater than or equal to Fsi is chosen for Wi. This selects a filter of appropriate order for the data lying in Wi.

During online computation, Frefc and the local sampling frequency Fsi are employed to define the resampling frequency Frsi and the decimation factor di. Frsi is employed to uniformly resample the selected signal lying in Wi, while di is employed to decimate hck in order to filter the data lying in Wi. Here, hck denotes the reference filter chosen for Wi, and k is the index of hck. The complete procedure for computing Frsi and hji for the ARCD is described by Figure 7.2.

[Figure 7.2 flow chart: if Fsi ≥ Frefc, then Frsi = Frefc and hji = hck; otherwise di = Frefc / Frsi with Di = floor(di); if di is an integer (Di = di), then Frsi = Fsi, otherwise Frsi = Fsi / Di; in both decimation branches hji = Di · hc[Di·k].]

Figure 7.2. Design flow of the ARCD technique.


In Figure 7.2, hji is the decimated filter for Wi. Di is the integer part of di, computed as Di = floor(di). From Figure 7.2 it is clear that, for any value of di (integer or fractional), the decimation of hck is done with Di in the ARCD technique. Plain decimation can reduce the energy of the decimated filter with respect to the reference one, which would lead to an attenuated version of the filtered signal. Di is a good estimate of the energy ratio between the chosen reference filter and the decimated filter. Hence, this decimation effect is compensated by weighting the decimated filter hji with Di (cf. Equation 7.1).

hji = Di · hc[Di·k]    (7.1)
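Equation 7.1 can be illustrated with a short sketch; the 5-tap reference filter below is arbitrary, chosen only to make the weighting visible.

```python
def decimate_filter(hc, Di):
    """ARCD-style decimation (cf. Equation 7.1): keep every Di-th tap
    of the chosen reference filter hck and weight the surviving taps
    by Di to compensate the energy removed by dropping taps."""
    return [Di * tap for tap in hc[::Di]]

hc = [0.1, 0.2, 0.4, 0.2, 0.1]   # arbitrary reference filter taps
hj = decimate_filter(hc, Di=2)   # kept taps [0.1, 0.4, 0.1], each weighted by 2
```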

The ARCR (Activity Reduction by Chosen Filter Resampling) technique: The steps followed to achieve adaptive rate filtering are common to the ARCR and ARCD techniques, except for the treatment of the fractional value of di. In the ARCD technique, the fractional value of di is converted into an integer Di, which is then employed to decimate hck. In contrast, in the ARCR technique, di is directly employed to decimate hck: the fractional decimation of hck is obtained by resampling hck at Frsi. For the ARCR technique, the weighting of hji is done with di. The complete procedure for computing Frsi and hji for the ARCR is described by Figure 7.3.

[Figure 7.3 flow chart: if Fsi ≥ Frefc, then Frsi = Frefc and hji = hck; otherwise Frsi = Fsi, di = Frefc / Frsi, and the filter is fractionally decimated as hji = Resample(hck @ Frsi), then weighted as hji = di · hji.]

Figure 7.3. Design flow of the ARCR technique.
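The fractional decimation used by the ARCR amounts to resampling the reference taps at Frsi. A sketch is given below; the thesis does not fix the tap-resampling method, so linear interpolation between neighbouring taps is an assumption here.

```python
def resample_filter(hc, di):
    """ARCR-style fractional decimation: read the reference taps at the
    fractional step di (linear interpolation between neighbouring taps)
    and weight the result by di to compensate the energy reduction."""
    out, pos = [], 0.0
    while pos <= len(hc) - 1:
        k = int(pos)
        frac = pos - k
        v = hc[k] if k + 1 == len(hc) else (1 - frac) * hc[k] + frac * hc[k + 1]
        out.append(di * v)
        pos += di
    return out

hj = resample_filter([0.1, 0.2, 0.4, 0.2, 0.1], di=1.5)  # taps read at 0, 1.5, 3
```

Compared with the ARCD's integer decimation, the fractional step tracks Frsi exactly, at the cost of one interpolation per retained tap.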

The ARRD (Activity Reduction by Reference Filter Decimation) technique: The ARRD technique is a modification of the ARCD technique. In this case, a single pre-designed reference filter is employed instead of a bank of reference filters. This yields two advantages over the ARCD technique: first, it reduces the memory requirements of the system, and second, it avoids the online filter selection process for Wi. The principle of the ARRD is the offline design of one reference filter for a reference resampling frequency Fref (cf. Expression 6.1). Note that here Fref is a single reference frequency, whereas Fref in the ARCD case represents a set of reference frequencies. The complete procedure for computing Frsi and hji for the ARRD is described by Figure 7.4.


[Figure 7.4 flow chart: identical to Figure 7.2 with the single reference filter hk and reference frequency Fref in place of hck and Frefc: if Fsi ≥ Fref, then Frsi = Fref and hji = hk; otherwise di = Fref / Frsi with Di = floor(di); if Di = di, then Frsi = Fsi, otherwise Frsi = Fsi / Di; hji = Di · h[Di·k].]

Figure 7.4. Design flow of the ARRD technique.

The ARRR (Activity Reduction by Reference Filter Resampling) technique: Like the ARRD, the ARRR is a modification of the ARCR technique. The steps followed to achieve adaptive rate filtering are common to the ARRD and ARRR techniques, except for the treatment of the fractional value of di. Contrary to the ARRD, in the ARRR technique di is directly employed to decimate hk. The complete procedure for computing Frsi and hji for the ARRR is described by Figure 7.5.

[Figure 7.5 flow chart: identical to Figure 7.3 with the single reference filter hk and reference frequency Fref in place of hck and Frefc: if Fsi ≥ Fref, then Frsi = Fref and hji = hk; otherwise Frsi = Fsi, di = Fref / Frsi, hji = Resample(hk @ Frsi), then hji = di · hji.]

Figure 7.5. Design flow of the ARRR technique.

The computational complexities of the proposed techniques are derived and compared with the classical technique. It is shown that the proposed techniques achieve a gain of more than one order of magnitude in terms of additions and multiplications over the classical technique. This is achieved thanks to the joint benefit of the LCADC, the ASA, and the resampling, as they allow adapting the system parameters (Fsi, Frsi, Ni, Nri, di and Pi) by exploiting the local variations of the input signal. This considerably reduces the total number of operations and hence the energy consumption compared with the classical case. A complexity comparison among the different proposed techniques is also carried out in Chapter 7. Methods for computing the processing errors of the proposed techniques have been developed. It is shown that the errors introduced by the proposed techniques are minor in the studied case. Moreover, higher accuracy can be obtained by increasing the resolution of the AADC and the interpolation order. Thus, a suitable solution can be proposed for a given application by making a proper trade-off between accuracy and computational complexity. Enhanced versions of the adaptive rate filtering techniques are proposed by intelligently combining the interesting features of the different previously proposed techniques. Their principles are briefly described as follows. The EARD (Enhanced Activity Reduction by Filter Decimation) technique: The EARD is a combination of the ARCD and ARRD techniques. The idea is to arrive at a solution that remains as computationally efficient as the ARRD while delivering better-quality results like the ARCD. The complete procedure for computing Frsi and hji for the EARD is described by Figure 7.6.

[Figure 7.6 flow chart: if Fsi ≥ Fref, then Frsi = Fref and hji = hck; otherwise the test Fsi = Frefc selects between using the chosen reference filter directly (Frsi = Frefc, hji = hck) and decimating it (di = Frefc / Frsi, Di = floor(di); Frsi = Fsi if Di = di, Frsi = Fsi / Di otherwise; hji = Di · hc[Di·k]).]

Figure 7.6. Design flow of the EARD technique.

The EARR (Enhanced Activity Reduction by Filter Resampling) technique: Like the EARD, the EARR is a combination of the ARCR and ARRR techniques. The complete procedure for computing Frsi and hji for the EARR is described by Figure 7.7.

The ARDI (Activity Reduction by Filter Decimation/Interpolation) technique: The ARDI is a modification of the EARR technique. It further relaxes the choice of Frefc for Wi. The ARDI differs from the EARR in two ways: first, in the choice of Frefc when Fsi < Fref, and second, in the filtering case when Fref > Frefc ≥ Fsi. The complete procedure for computing Frsi and hji for the ARDI is described by Figure 7.8.


[Figure 7.7 flow chart: if Fsi ≥ Fref, then Frsi = Fref and hji = hck; otherwise the test Fsi = Frefc selects between using the chosen reference filter directly (Frsi = Frefc, hji = hck) and fractionally decimating it (Frsi = Fsi, di = Frefc / Frsi, hji = Resample(hck @ Frsi), then hji = di · hji).]

Figure 7.7. Design flow of the EARR technique.

[Figure 7.8 flow chart: as in Figure 7.7, with an additional test Fsi < Frefc in the decimation branch: when it fails, ui = Frsi / Frefc is computed, hji = Resample(hck @ Frsi), and the weighting becomes hji = 1/ui · hji.]

Figure 7.8. Design flow of the ARDI technique.

A comparison of the enhanced versions with the previous techniques is carried out. The results show that the enhanced versions outperform the previous techniques in terms of computational efficiency and processing quality. Moreover, a system-level architecture common to all the proposed techniques is also described in Chapter 7.


Signal-Driven Adaptive Resolution Analysis

Almost all real-life signals are non-stationary: their frequency content varies with time. For a proper characterization of such signals, a time-frequency representation is required. Classically, the STFT (Short-Time Fourier Transform) is employed for this purpose [99]. The limitation of the STFT is its fixed time-frequency resolution. To overcome this drawback, an enhanced version of the STFT is devised [62, 63, 66]. The idea is to adapt the time-frequency resolution, along with the computational load, by following the local characteristics of the input signal. In order to realize this idea, an intelligent combination of non-uniform and uniform signal processing tools is used. The principle of the proposed technique is shown in Figure 8.1.

[Figure 8.1 block diagram: the band-pass filter [fmin; fmax] limits the analog signal y(t) to the band-pass-filtered analog signal x(t); the LCADC produces the non-uniformly sampled signal (xn, tn); the EASA selects the signal (xs, ts) and extracts the local parameters for Wi; the parameters adaptor and window selector combines them with the reference parameters to derive the resampling frequency (Frsi) and the window decision (Di), which drives the window switch (wn, 1/0, Win); the resampler delivers the uniformly sampled signal (xrn, trn), the windowing produces the windowed signal (xwn), and the DFT yields X[i, fi].]

Figure 8.1. Block diagram of the proposed STFT.

The non-uniformly sampled signal obtained at the LCADC output is selected by the EASA (Enhanced Activity Selection Algorithm). The characteristics of each selected part are analyzed and then employed to adapt the system parameters, such as the resampling frequency, the window shape, and the time-frequency resolution. Each selected window obtained with the EASA can have a specific sampling frequency Fsi (cf. Equation 5.5). The selected signal is uniformly resampled before further processing. The resampling frequency Frsi for Wi is chosen according to the method shown in Figure 8.2. Once the resampling is done, there are Nri samples in Wi. The parameters extracted for Wi are passed to the parameters adaptor and window selector block (cf. Figure 8.1). It employs the extracted parameters together with the reference parameters to decide Frsi and the windowing shape for Wi. The Frsi decision procedure is clear from Figure 8.2. The decision on the windowing shape for Wi is made on the basis of the following condition.


Di = 1, if (Ni ≤ Nref) and (t1i − tendi−1 ≥ T); Di = 0, otherwise.    (8.1)

Here, t1i represents the first sampling instant of Wi and tendi−1 represents the last sampling instant of Wi−1.

[Figure 8.2 flow chart: if Fsi < Fref, then Frsi = Fsi, otherwise Frsi = Fref; the selected signal is then resampled as (xrn, trn) = Resample[(xS, tS) @ Frsi].]

Figure 8.2. Flow chart for deciding Frsi for Wi.

Jointly, the EASA and the window selector provide an effective means of spectral leakage reduction. Usually, a cosine window is employed to reduce the signal truncation problem. In the proposed case, as long as condition 8.1 holds, the leakage problem is resolved by avoiding signal truncation altogether [62, 63, 66]. Since no signal truncation is performed, no cosine window is needed; in this case, Di is set to 1. Otherwise, a cosine window is employed to reduce the signal truncation problem, and Di is set to 0. The proposed technique performs adaptive resolution time-frequency analysis, which is not achievable with the classical STFT. This is realized by adapting Frsi, Li and Nri according to the local variations of x(t). Hence, the time resolution Δti and the frequency resolution Δfi of the proposed STFT can be specific to Wi. They are defined by Equations 8.2 and 8.3, respectively.

Δti = Li    (8.2)

Δfi = Frsi / Nri    (8.3)

Owing to this adaptive time-frequency resolution, the proposed STFT is referred to as the ARSTFT (Adaptive Resolution STFT). This adaptive nature of the ARSTFT leads to a drastic computational gain over the classical STFT. It is achieved, first, by avoiding the processing of useless samples and, second, by avoiding the use of cosine windows as long as condition 8.1 remains true. The ARSTFT is defined as follows.


X(τi, fi) = Σn=0..Nri−1 Resample[(xS, tS)]n · wni · e^(−j2π·fi·n)    (8.4)

Here, τi and fi are the central time and the frequency index for Wi, respectively. fi is normalized with respect to Frsi. n is the index of the resampled data points lying in Wi. The notation wni indicates that both the length Li and the shape of the window (rectangular or cosine) can be specific to Wi. The ARSTFT outperforms the STFT: its first advantage over the STFT is the adaptive time-frequency resolution, and its second advantage is the computational gain. These interesting features of the ARSTFT are achieved thanks to the joint benefit of the AADC, the EASA, and the resampling, which allow adapting Fsi, Frsi, Ni, Nri and wni by exploiting the local variations of x(t).
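A per-window slice of the ARSTFT can be sketched as follows. A plain DFT is used instead of the FFT for clarity, and a Hann window stands in for the cosine window; both choices are ours, not fixed by the thesis.

```python
import cmath
import math

def arstft_slice(xr, frs_i, Di):
    """One ARSTFT slice for window Wi: DFT of the Nri resampled samples,
    with a rectangular window when Di = 1 (no truncation, condition 8.1
    holds) and a Hann (cosine) window when Di = 0. The returned
    frequency resolution frs_i / Nri is specific to this window
    (cf. Equation 8.3)."""
    N = len(xr)
    if Di == 1:
        w = [1.0] * N  # rectangular window: leakage avoided by construction
    else:
        w = [0.5 - 0.5 * math.cos(2 * math.pi * n / (N - 1)) for n in range(N)]
    X = [sum(xr[n] * w[n] * cmath.exp(-2j * math.pi * f * n / N)
             for n in range(N)) for f in range(N)]
    return X, frs_i / N  # spectrum and per-window frequency resolution

X, df = arstft_slice([1.0, 1.0, 1.0, 1.0], frs_i=8.0, Di=1)  # DC test signal
```

Because N and frs_i change from window to window, each slice carries its own time and frequency resolution, which is exactly the adaptive behaviour claimed above.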


Part III: PERFORMANCE EVALUATION

The Effective Resolution of an Adaptive Sampling Rate Analog-to-Digital Converter

In the previous chapters, adaptive rate signal processing and analysis techniques were proposed. It was shown that the proposed techniques remain computationally efficient while delivering results of comparable quality with respect to classical approaches. The proposed solutions are based on activity selection together with the extraction of the local characteristics of the input signal. They are realized by employing an intelligent mix of non-uniform and uniform signal processing tools. The basic components employed in each proposed solution are the LCADC, the activity selection algorithm, and the resampler (cf. Chapters 6, 7 and 8). Jointly, they adapt the data acquisition rate and the system parameters by following the local characteristics of the input signal. The combination of these tools forms an intelligent adaptive rate analog-to-digital conversion; hence, this combination is named the ARADC (Adaptive Rate ADC). The block diagram of the ARADC is shown in Figure 9.1.

[Figure 9.1 block diagram: the band-limited analog signal x(t) is digitized by the LCADC into the non-uniformly sampled signal (xn, tn); the ASA or EASA selects the signal (xs, ts) and provides the resampling frequency (Frsi); the resampler delivers the uniformly sampled signal (xrn, trn).]

Figure 9.1. Block diagram of the ARADC.

The effective resolution of an ADC is a common parameter for characterizing its performance. Among the various resolution measures, the SNR is most often employed for ADC characterization [16, 17]. Consequently, in Chapter 9, the SNR of the ARADC is measured and then used to compute its effective resolution. Figure 9.1 shows the different stages of the ARADC. Each stage has an impact on the ENOB (Effective Number of Bits) of the ARADC. In order to quantify the impact of each stage, the error sources of each block are discussed. A method for computing the SNR of each stage is developed. The accuracy of the proposed method is confirmed by showing the consistency between the simulation results and the theoretical ones. It is shown that, for an appropriate choice of parameters (Ttimer, Δa and the interpolation order), the ARADC achieves a higher effective resolution than the classical ADC for every given value of M. Here, M denotes the defined resolution of the converter in bits.
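The last step, going from the measured SNR to the effective number of bits, uses the textbook relation for an ideal converter driven by a full-scale sine wave; this is standard ADC practice, not a formula specific to the ARADC.

```python
def enob(snr_db):
    """Effective number of bits from a measured SNR in dB, via the
    classical relation SNR = 6.02*M + 1.76 dB for an ideal M-bit ADC
    with a full-scale sinusoidal input."""
    return (snr_db - 1.76) / 6.02

# An ideal 8-bit converter has SNR = 6.02*8 + 1.76 = 49.92 dB.
bits = enob(49.92)  # -> 8.0 (up to floating-point rounding)
```

Any extra SNR gained by the resampling and interpolation stages of the ARADC translates directly, through this relation, into a higher effective resolution than the declared M bits.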

Saeed Mian Qaisar Grenoble INP 203

Page 223: INSTITUT POLYTECHNIQUE DE GRENOBLEtima.univ-grenoble-alpes.fr/publications/files/th/2009/... · 2009-06-02 · INSTITUT POLYTECHNIQUE DE GRENOBLE N° attribué par la bibliothèque

Extended Summary in French

For a targeted application, an appropriate set of parameters (M, Ttimer, !a and the interpolation order) should be found that offers the best trade-off between the computational complexity of the system and the delivered output quality, while ensuring a proper reconstruction of the digitized signal.

Performance Study of the Proposed Techniques: Application to Real Signals

In the previous chapters, the interesting features of the proposed techniques were demonstrated with the help of examples. For ease of understanding, simple signals were employed. Real signals are usually more complex. Therefore, the efficiency of the proposed techniques on real signals is evaluated in Chapter 10. The performance of the proposed adaptive-rate filtering techniques is studied there for a speech application. The proposed filtering techniques are compared with the classical one, in terms of computational complexity and filtering quality. It is shown that the proposed techniques deliver a computational gain of more than one order of magnitude over the classical case. This is achieved thanks to the joint benefit of the LCADC, the ASA and the resampling, which allow an online adaptation of the system parameters by exploiting the local variations of the input signal. This drastically reduces the total number of operations, and hence the power consumption, compared to the classical case. It is also shown that the results obtained with the proposed filtering techniques are of comparable quality to those obtained in the classical case.

The performance of the ARSTFT is compared with the classical STFT for a chirp signal. It is shown that the ARSTFT outperforms the classical STFT for the studied case. The first advantage of the ARSTFT over the classical STFT is its adaptive time-frequency resolution; the second is its gain in computational complexity. These interesting features of the ARSTFT are achieved thanks to the joint benefit of non-uniform and uniform signal processing tools, which allow the parameters (Fsi, Frsi, Ni, Nri and wni) to be adapted by exploiting the local variations of x(t). The processing quality is also assessed. It is shown that the error obtained for the chosen parameters is small. Moreover, a higher accuracy may be achievable by increasing the resolution of the AADC and the interpolation order; thus, an improvement in accuracy may be obtained at the price of an increased processing activity.

The signal acquisition performance of the ARADC is demonstrated for a speech signal. It is shown that the ARADC delivers a compression gain of 2.3 compared to the classical ADC for the studied case. It is known that, for a given input signal, the sampling frequency of the LCADC varies in proportion to M [51-57]. Hence, by reducing M, an improvement in the compression gain of the ARADC is achievable. While reducing M, two aspects should be checked. First, the signal obtained with the LCADC must satisfy the reconstruction criterion [100, 101]. Second, for fixed ARADC parameters, a reduction in M calls for an increase of the interpolation order, in order to keep the same ENOB of the ARADC. Thus, for a targeted application, the extent to which M can be reduced is bounded by the reconstruction criterion; such a reduction increases the compression gain of the ARADC but, on the other hand, increases the processing load for each resampled observation. Consequently, an appropriate choice of M should be made, ensuring a proper reconstruction of the signal together with a maximal compression gain.
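The compression gain quoted above compares delivered sample counts. The toy model below illustrates how such a ratio arises for a low-activity signal; the burst signal, the sampling rates and the function name are our illustrative assumptions, not the speech experiment of the thesis.

```python
import numpy as np

def count_level_crossings(x, M, vmax=1.0):
    """Number of samples an ideal M-bit LCADC would deliver for x."""
    levels = np.linspace(-vmax, vmax, 2**M + 1)[1:-1]
    n = 0
    for a, b in zip(x[:-1], x[1:]):
        lo, hi = min(a, b), max(a, b)
        n += int(np.count_nonzero((levels > lo) & (levels <= hi)))
    return n

fs, dur = 8000, 1.0
t = np.arange(0.0, dur, 1.0 / fs)
# toy stand-in for a low-activity speech signal: one short 200 Hz burst
x = np.where((t >= 0.4) & (t < 0.5), np.sin(2 * np.pi * 200 * t), 0.0)

n_uniform = len(t)                    # samples a classical ADC would take
n_lc = count_level_crossings(x, M=4)  # samples the signal-driven chain takes
gain = n_uniform / n_lc               # compression gain
```

Because the LCADC is silent during the idle portions, the gain grows as the input activity decreases; lowering M raises it further, within the limits set by the reconstruction criterion discussed above.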


CONCLUSION AND PERSPECTIVES

Conclusion

The motivation of this thesis work was to devise smart signal processing solutions for mobile systems that offer a favourable balance between system cost, speed, area, output quality and, above all, power consumption. This can be achieved by smartly revisiting the theories and architectures associated with mobile systems. In this context, signal-driven processing is employed together with clockless circuit design. The result is power efficiency, obtained by smartly adapting the processing activity to the local characteristics of the input signal. Activity selection and local feature extraction are the foundations of the proposed approach. They are built by employing a subtle blend of non-uniform and uniform signal processing tools. In the proposed solutions, data acquisition is performed with LCADCs. LCADCs are based on the LCSS (level-crossing sampling scheme). They adapt their acquisition rate to the local characteristics of the input signal. LCADCs deliver samples non-uniformly spaced in time, which allow a simple activity selection and extraction of the local characteristics of the input signal. In order to properly analyse the signal sampled by the LCSS, an efficient solution is proposed. The proposed technique is compared with the GDFT and the Lomb algorithm, in terms of spectrum quality and computational complexity. The results show that the proposed technique delivers a drastic gain in computational complexity while at the same time providing better quality results. The proposed approach is particularly well suited to signals that remain constant most of the time and vary only sporadically.
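For reference, the Lomb algorithm used above as a comparison baseline can be sketched directly: it evaluates a least-squares periodogram on non-uniform sample instants. This is a compact version of the classical method (cf. [87]), not the proposed technique; `lomb_periodogram` is our own name.

```python
import numpy as np

def lomb_periodogram(x, t, freqs):
    """Classical Lomb periodogram of a non-uniformly sampled signal x(t),
    evaluated at the trial frequencies `freqs` (Hz)."""
    x = x - np.mean(x)
    p = np.empty(len(freqs))
    for k, f in enumerate(freqs):
        w = 2 * np.pi * f
        # phase offset tau makes the sine/cosine terms orthogonal
        tau = np.arctan2(np.sum(np.sin(2 * w * t)), np.sum(np.cos(2 * w * t))) / (2 * w)
        c, s = np.cos(w * (t - tau)), np.sin(w * (t - tau))
        p[k] = 0.5 * (np.dot(x, c)**2 / np.dot(c, c) + np.dot(x, s)**2 / np.dot(s, s))
    return p
```

Its cost, one pass over all samples per trial frequency, is exactly what the proposed activity-selection-based analysis avoids paying on the idle parts of the signal.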
It is shown that, for this class of signals, the proposed technique brings a drastic computational gain over the classical approaches. Based on the proposed approach, adaptive-rate filtering techniques are devised. Their computational complexities are derived and compared with the classical technique. It is shown that the proposed techniques achieve a gain of more than one order of magnitude, in terms of additions and multiplications, over the classical technique. This is obtained thanks to their signal-driven nature, which allows the sampling frequency and the filter order to be adapted to the local characteristics of the input signal. For a proper characterization of non-stationary signals, a time-frequency representation is required. In this context, the ARSTFT has been developed. It is shown that the ARSTFT outperforms the STFT. The first advantage of the ARSTFT over the STFT is its adaptive time-frequency resolution; the second is its computational gain. These "smart" features of the ARSTFT are achieved by adapting its sampling frequency and the shape and length of its window to the local characteristics of the input signal.
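The activity-selection idea can be made concrete with a toy model: level-crossing instants cluster while the signal varies, so a gap between consecutive crossings longer than a timeout ends an active segment. This is only a sketch of the principle; `t_timer` stands in for the Ttimer parameter of the text, and the real ASA/EASA algorithms are more elaborate.

```python
def select_activity(times, t_timer):
    """Group non-uniform sample instants into active segments: a gap
    longer than t_timer between consecutive crossings closes the
    current segment."""
    segments, start = [], None
    for prev, cur in zip(times[:-1], times[1:]):
        if start is None:
            start = prev
        if cur - prev > t_timer:      # idle gap: close the segment
            segments.append((start, prev))
            start = None
    if start is not None:
        segments.append((start, times[-1]))
    return segments
```

Only the selected segments are then resampled and processed, which is where the reduction of the total operation count comes from.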


The basic components employed in each proposed solution are the LCADC, the activity selection algorithm and the resampling. A smart combination of these tools is named the ARADC (Adaptive Rate ADC). It is shown that, for an appropriate choice of Ttimer, !a and the interpolation order, the ARADC achieves a higher effective resolution than a classical ADC, for every given value of M. Here, M denotes the defined resolution of the converter in bits. For a targeted application, an appropriate set of parameters (M, Ttimer, !a and the interpolation order) should be found that offers the best trade-off between the computational complexity of the system and the delivered output quality, while ensuring a proper reconstruction of the signal. The efficiency of the proposed techniques on real signals is evaluated. It is shown that these techniques are well suited to the acquisition, processing and analysis of low-activity non-stationary signals. In this situation, they can achieve a high computational efficiency compared to the classical approaches, while delivering results of comparable quality.
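The resampling component can be sketched as follows: the non-uniformly spaced selected samples (xs, ts) are interpolated onto a uniform grid at the resampling frequency. This is an illustrative model, assuming piecewise-linear interpolation for order 1 and a simple polynomial fit as a stand-in for the higher-order local interpolators discussed in the text; the function name is ours.

```python
import numpy as np

def resample_uniform(xs, ts, frs, order=1):
    """Interpolate non-uniform samples (xs, ts) onto a uniform time grid
    at resampling frequency frs (the (xrn, trn) signal of Figure 9.1)."""
    trn = np.arange(ts[0], ts[-1], 1.0 / frs)      # uniform instants
    if order == 1:
        xrn = np.interp(trn, ts, xs)               # piecewise-linear
    else:
        # crude global polynomial fit, standing in for local
        # higher-order (Lagrange-style) interpolation
        xrn = np.polyval(np.polyfit(ts, xs, order), trn)
    return trn, xrn
```

The interpolation order is one of the knobs in the (M, Ttimer, !a, order) trade-off above: a higher order reduces the resampling error, and hence raises the ENOB, at the price of more operations per output sample.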

Perspectives

A system-level architecture has been proposed for the developed adaptive-rate filtering techniques. Its implementation and performance evaluation at circuit level is a future task. The development and implementation of a suitable architecture for the ARSTFT is also future work. The sampling frequency of the LCADCs is a function of M and of the local variations of the input signal. This points to a way of employing the non-uniformities of the signal sampled with LCADCs to estimate the instantaneous frequency of the input signal. A possible realization of this idea is presented in [102]. This approach is very interesting and provides accurate results for simple signals; the development of a generalized solution is a perspective. The LCSS and the activity selection allow the estimation of the instantaneous frequency or amplitude of an input sinusoid. This points to the possibility of developing frequency and amplitude demodulation techniques. Such an approach would allow a simple demodulator circuit by eliminating requirements such as a synchronization mechanism at the receiver. This is an area for future research. The resampling of the data obtained with the LCSS is necessary in the proposed solutions. It is shown that, for given parameters (M, Ttimer and !a), an interpolator of appropriate order must be employed to achieve the best effective resolution of the ARADC. A similar performance can be achieved with a lower-order interpolator by exploiting symmetry during the interpolation process, which results in resampling with a reduced error [103, 104]. The advantages and drawbacks of this approach are under investigation, and a description of this topic is given in [64, 65]. Further development of this approach is a perspective.
In the proposed approach, LCADCs are used for the analog-to-digital conversion of the input signal. A comparison of LCADCs with sigma-delta converters has been made. One advantage of sigma-delta converters over LCADCs is the noise-shaping phenomenon: it yields a drastic gain in ENOB as the OSR (Over Sampling Ratio) and the modulator order increase. Introducing this property into LCADCs could be truly beneficial; a study of the advantages and drawbacks of this approach opens a new field of research. To conclude, I can say that although much remains to be done and there are many possibilities for further research, I hope that my small contribution will help those who follow me.
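To make the noise-shaping argument above concrete, the textbook approximation for the peak SNR of an ideal L-th order sigma-delta modulator (cf. the overview in [44]) can be evaluated directly. The closed form and the function name below are a standard sketch, not part of the thesis.

```python
import math

def sigma_delta_snr_db(M, L, osr):
    """Peak SNR (dB) of an ideal L-th order sigma-delta modulator with an
    M-bit quantizer at oversampling ratio `osr`:
    SNR = 6.02*M + 1.76 + 10*log10((2L+1)/pi**(2L)) + (2L+1)*10*log10(OSR)."""
    return (6.02 * M + 1.76
            + 10 * math.log10((2 * L + 1) / math.pi ** (2 * L))
            + (2 * L + 1) * 10 * math.log10(osr))
```

Each doubling of the OSR adds (2L + 1) * 3.01 dB, i.e. L + 0.5 extra bits; this is the drastic ENOB gain that motivates studying a similar mechanism for LCADCs.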


REFERENCES

[1] E.C. Ifeachor and B.W. Jervis, "Digital Signal Processing: A Practical Approach", Prentice-Hall, 2001.
[2] A.V. Oppenheim and R.W. Schafer, "Digital Signal Processing", Prentice-Hall, 1975.
[3] C.E. Shannon, "Communication in the Presence of Noise", Proc. IRE, vol. 37, pp. 10-21, 1949.
[4] F.J. Beutler, "Alias-Free Randomly Timed Sampling of Stochastic Processes", IEEE Trans. Info. Theory, vol. 16, pp. 147-152, 1970.
[5] A.J. Jerri, "The Shannon Sampling Theorem: Its Various Extensions and Applications: A Tutorial Review", Proc. IEEE, vol. 65, pp. 1565-1596, 1977.
[6] F. Marvasti, "A Unified Approach to Zero-Crossing and Nonuniform Sampling", Oak Park, Illinois: Nonuniform Publications, 1987.
[7] I. Bilinskis and A. Mikelsons, "Randomized Signal Processing", Cambridge, Prentice Hall, 1992.
[8] I. Bilinskis, "Digital Alias-Free Signal Processing", John Wiley and Sons, 2007.
[9] M. Unser, "Sampling: 50 Years After Shannon", Proc. IEEE, vol. 88, no. 4, pp. 569-587, April 2000.
[10] R.J. Martin, "Irregularly Sampled Signals: Theories and Techniques for Analysis", PhD dissertation, University College London, 1998.
[11] L. Fontaine, "Traitement des Signaux à Echantillonnage Irrégulier. Application au Suivi Temporel de Paramètres Cardiaques", PhD dissertation, Institut National Polytechnique de Lorraine, 1999.
[12] J.J. Wojtiuk, "Randomized Sampling for Radio Design", PhD dissertation, University of South Australia, 2000.
[13] F. Aeschlimann, "Traitement du Signal Echantillonné Non Uniformément : Algorithme et Architecture", PhD dissertation, Institut National Polytechnique de Grenoble, 2006.
[14] S.W. Smith, "The Scientist and Engineer's Guide to Digital Signal Processing", 2nd Edition, California Technical Publishing, 1999.
[15] A.V. Oppenheim and R.W. Schafer, "Discrete-Time Signal Processing", 2nd Edition, Prentice-Hall, New Jersey, ISBN 0-13-083443-2, 1999.
[16] W. Kester, "Data Conversion Handbook", Elsevier/Newnes, ISBN 0-7506-7841-0, 2005.
[17] R.H. Walden, "Analog-to-Digital Converter Survey and Analysis", IEEE Journal on Selected Areas in Communications, vol. 17, no. 4, pp. 539-550, April 1999.
[18] W.R. Bennett, "Spectra of Quantized Signals", Bell System Technical Journal, vol. 27, pp. 446-471, July 1948.
[19] B.M. Oliver, J.R. Pierce and C.E. Shannon, "The Philosophy of PCM", Proc. IRE, vol. 36, pp. 1324-1331, November 1948.
[20] W.R. Bennett, "Noise in PCM Systems", Bell Labs Record, vol. 26, pp. 495-499, December 1948.
[21] H.S. Black and J.O. Edson, "Pulse Code Modulation", AIEE Transactions, vol. 66, pp. 895-899, 1947.
[22] H.S. Black, "Pulse Code Modulation", Bell Labs Record, vol. 25, pp. 265-269, July 1947.
[23] K.W. Cattermole, "Principles of Pulse Code Modulation", Elsevier, ISBN 444-19747-8, New York, 1969.


[24] D.A. Johns and K. Martin, "Analog Integrated Circuit Design", John Wiley & Sons, Canada, 1997.
[25] P.G.A. Jespers, "Integrated Converters: D to A and A to D Architectures, Analysis and Simulation", Oxford University Press, 2001.
[26] Y. Gendai et al., "An 8-b 500-MHz Flash ADC", IEEE Int. Solid-State Circuits Conf., pp. 172-173, San Francisco, February 1991.
[27] C. Donovan and M.P. Flynn, "A Digital 6-bit ADC in 0.25-µm CMOS", IEEE Journal of Solid-State Circuits, vol. 37, pp. 432-437, March 2002.
[28] P. Scholtens and M. Vertregt, "A 6-bit 1.6-GSamples/s Flash ADC in 0.18-µm CMOS Using Average Termination", IEEE Int. Solid-State Circuits Conf., San Francisco, February 2002.
[29] B.P. Ginsburg and A.P. Chandrakasan, "Dual Scalable 500MS/s, 5b Time-Interleaved SAR ADCs for UWB Applications", IEEE Custom Integrated Circuits Conference, pp. 403-406, September 2005.
[30] B.P. Ginsburg and A.P. Chandrakasan, "Dual Time-Interleaved Successive Approximation Register ADCs for an Ultra-Wideband Receiver", IEEE Journal of Solid-State Circuits, vol. 42, pp. 247-257, February 2007.
[31] P.H. Le, J. Singh et al., "Ultra-Low-Power Variable-Resolution Successive Approximation ADC for Biomedical Application", Electronics Letters, vol. 41, pp. 634-635, May 2005.
[32] W.C. Goeke, "Continuously Integrating High-Resolution Analog-to-Digital Converter", United States Patent No. 5117227, May 1992.
[33] T. Fusayasu, "A Fast Integrating ADC Using Precise Time-to-Digital Conversion", IEEE Nuclear Science Symposium, pp. 302-304, Honolulu, Hawaii, USA, 2007.
[34] P.M. Figueiredo et al., "A 90nm CMOS 1.2V 6b 1GS/s Two-Step Subranging ADC", IEEE Int. Solid-State Circuits Conf., pp. 2320-2329, 2006.
[35] D.J. Huber et al., "A 10b 160MS/s 84mW 1V Subranging ADC in 90nm CMOS", IEEE Int. Solid-State Circuits Conf., 2007.
[36] L. Zhen et al., "Low-Power CMOS Folding and Interpolating ADC with a Fully-Folding Technique", ASICON'07, pp. 265-268, 2007.
[37] X. Zhu et al., "An 8-b 1-GSamples/s CMOS Cascaded Folding and Interpolating ADC", EDST'07, pp. 177-180, 2007.
[38] L. Jipeng et al., "A 0.9-V 12-mW 5-MSPS Algorithmic ADC with 77-dB SFDR", IEEE Journal of Solid-State Circuits, vol. 40, pp. 960-969, 2005.
[39] B. Esperanca et al., "Power-and-Area Efficient 14-bit 1.5 MSample/s Two-Stage Algorithmic ADC Based on a Mismatch-Insensitive MDAC", ISCAS'08, pp. 220-223, 2008.
[40] B. Murmann et al., "A 12-bit 75-MS/s Pipelined ADC Using Open-Loop Residue Amplification", IEEE Journal of Solid-State Circuits, vol. 38, pp. 2040-2050, 2005.
[41] K. Gulati et al., "A Highly-Integrated CMOS Analog Baseband Transceiver with 180MSPS 13b Pipelined CMOS ADC and Dual 12b DACs", Custom Integrated Circuits Conference, pp. 515-518, Acton, MA, USA, 2005.
[42] L. Williams, "Modeling and Design of High-Resolution Sigma-Delta Modulators", PhD dissertation, Stanford University, 1993.
[43] D. Welland et al., "A Stereo 16-Bit Delta-Sigma A/D Converter for Digital Audio", Journal of the Audio Engineering Society, vol. 37, pp. 476-485, June 1989.
[44] P.M. Aziz et al., "An Overview of Sigma-Delta Converters: How a 1-bit ADC Achieves More than 16-bit Resolution", IEEE Signal Processing Magazine, vol. 13, pp. 61-84, 1996.


[45] J.C. Candy and G.C. Temes, "Oversampling Methods for A/D and D/A Conversion", in Oversampling Delta-Sigma Data Converters, pp. 1-25, IEEE Press, 1992.
[46] B. Leung, "Theory of Sigma-Delta Analog to Digital Converter", IEEE International Symposium on Circuits and Systems, Tutorial, pp. 196-223, 1994.
[47] A.R. Feldman, "High-Speed, Low-Power Sigma-Delta Modulators for RF Baseband Channel Applications", PhD dissertation, University of California, Berkeley, 1998.
[48] J. Candy, "A Use of Double Integration in Sigma Delta Modulation", IEEE Transactions on Communications, pp. 249-258, March 1985.
[49] Y. Matsuya et al., "A 16-bit Oversampling A/D Conversion Technology Using Triple Integration Noise Shaping", IEEE Journal of Solid-State Circuits, vol. 22, pp. 921-929, December 1987.
[50] W. Chou and R.M. Gray, "Dithering and its Effects on Sigma-Delta and Multi-Stage Sigma-Delta Modulations", Proc. International Symposium on Circuits and Systems, pp. 368-371, May 1990.
[51] E. Allier, "Interface Analogique Numérique Asynchrone : Une Nouvelle Classe de Convertisseurs Basés sur la Quantification du Temps", PhD dissertation, Institut National Polytechnique de Grenoble, 2003.

[52] N. Sayiner, "A Level-Crossing Sampling Scheme for A/D Conversion", PhD dissertation, University of Pennsylvania, 1994.

[53] E. Allier et al., "A 120nm Low Power Asynchronous ADC", International Symposium on Low Power Electronics and Design, pp. 60-65, 2005.
[54] N. Sayiner, H.V. Sorensen and T.R. Viswanathan, "A Level-Crossing Sampling Scheme for A/D Conversion", IEEE Transactions on Circuits and Systems, vol. 43, pp. 335-339, April 1996.
[55] E. Allier, G. Sicard, L. Fesquet and M. Renaudin, "A New Class of Asynchronous A/D Converters Based on Time Quantization", ASYNC'03, pp. 197-205, May 2003.
[56] F. Akopyan, R. Manohar and A.B. Apsel, "A Level-Crossing Flash Analog-to-Digital Converter", ASYNC'06, pp. 12-22, March 2006.
[57] A. Baums, U. Grunde and M. Greitans, "Level-Crossing Sampling Using a Microprocessor-Based System", ICSES'08, pp. 19-22, Krakow, Poland, September 2008.
[58] S.M. Qaisar, L. Fesquet and M. Renaudin, "Spectral Analysis of a Signal Driven Sampling Scheme", EUSIPCO'06, September 2006.
[59] S.M. Qaisar, L. Fesquet and M. Renaudin, "Adaptive Rate Filtering for a Signal Driven Sampling Scheme", ICASSP'07, pp. 1465-1468, April 2007.
[60] S.M. Qaisar, L. Fesquet and M. Renaudin, "Computationally Efficient Adaptive Rate Sampling and Filtering", EUSIPCO'07, pp. 2139-2143, September 2007.
[61] S.M. Qaisar, L. Fesquet and M. Renaudin, "Computationally Efficient Adaptive Rate Sampling and Filtering for Low Power Embedded Systems", SampTA'07, June 2007.
[62] S.M. Qaisar, L. Fesquet and M. Renaudin, "Computationally Efficient Adaptive Resolution Short-Time Fourier Transform", EURASIP Research Letters in Signal Processing, 2008.
[63] S.M. Qaisar, L. Fesquet and M. Renaudin, "Computationally Efficient Adaptive Rate Sampling and Adaptive Resolution Analysis", Proc. WASET, vol. 31, pp. 85-90, 2008.
[64] S.M. Qaisar, L. Fesquet and M. Renaudin, "An Improved Quality Adaptive Rate Filtering Technique Based on the Level Crossing Sampling", Proc. WASET, vol. 31, pp. 79-84, 2008.


[65] S.M. Qaisar, L. Fesquet and M. Renaudin, "An Improved Quality Adaptive Rate Filtering Technique for Time Varying Signals Based on the Level Crossing Sampling", ICSES'08, September 2008.
[66] S.M. Qaisar, L. Fesquet and M. Renaudin, "A Signal Driven Adaptive Resolution Short-Time Fourier Transform", International Journal of Signal Processing, vol. 5, no. 3, pp. 180-188, 2009.
[67] S.M. Qaisar, L. Fesquet and M. Renaudin, "Signal Driven Sampling and Filtering: A Promising Approach for Time Varying Signals Processing", International Journal of Signal Processing, vol. 5, no. 3, pp. 189-197, 2009.
[68] P.H. Ellis, "Extension of Phase Plane Analysis to Quantized Systems", IRE Transactions on Automatic Control, vol. AC-4, pp. 43-59, 1959.
[69] M. Miskowicz, "Asymptotic Effectiveness of the Event-Based Sampling According to the Integral Criterion", Sensors, vol. 7, pp. 16-37, 2007.
[70] S.C. Sekhar and T.V. Sreenivas, "Auditory Motivated Level-Crossing Approach to Instantaneous Frequency Estimation", IEEE Transactions on Signal Processing, vol. 53, pp. 1450-1462, 2005.
[71] J.W. Mark and T.D. Todd, "A Nonuniform Sampling Approach to Data Compression", IEEE Transactions on Communications, vol. COM-29, pp. 24-32, January 1981.
[72] A. Zakhor and A.V. Oppenheim, "Reconstruction of Two-Dimensional Signals from Level Crossings", Proc. IEEE, vol. 78, no. 1, pp. 31-55, January 1990.
[73] M. Lim and C. Saloma, "Direct Signal Recovery from Threshold Crossings", Physical Review E, vol. 58, pp. 6759-6765, 1998.
[74] M. Miskowicz, "Asymptotic Effectiveness of the Event-Based Sampling According to the Integral Criterion", Sensors, vol. 7, pp. 16-37, 2007.
[75] K.J. Astrom and B. Bernhardsson, "Comparison of Periodic and Event Based Sampling for First-Order Stochastic Systems", Proc. IFAC World Congress '99, pp. 301-306, 1999.
[76] M. Miskowicz, "Send-on-Delta Concept: An Event-Based Data Reporting Strategy", Sensors, vol. 6, pp. 49-63, 2006.
[77] P. Otanez, J. Moyne and D. Tilbury, "Using Deadbands to Reduce Communication in Networked Control Systems", Proc. American Control Conference '02, pp. 3015-3020, 2002.
[78] S.C. Gupta, "Increasing the Sampling Efficiency for a Control System", IEEE Transactions on Automatic Control, pp. 263-264, 1963.
[79] K.M. Guan and A.C. Singer, "Opportunistic Sampling by Level-Crossing", ICASSP'07, pp. 1513-1516, April 2007.
[80] M. Greitans, "Time-Frequency Representation Based Chirp-Like Signal Analysis Using Multiple Level Crossings", EUSIPCO'07, pp. 2154-2158, September 2007.
[81] R.N. Bracewell, "The Fourier Transform and its Applications", McGraw-Hill, Boston, 2000.
[82] S.C. Sekhar and T.V. Sreenivas, "Auditory Motivated Level-Crossing Approach to Instantaneous Frequency Estimation", IEEE Transactions on Signal Processing, vol. 53, pp. 1450-1462, 2005.
[83] F. Aeschlimann, E. Allier, L. Fesquet and M. Renaudin, "Asynchronous FIR Filters: Towards a New Digital Processing Chain", ASYNC'04, pp. 198-206, April 2004.
[84] F. Aeschlimann, E. Allier, L. Fesquet and M. Renaudin, "Spectral Analysis of Level Crossing Sampling Scheme", SampTA'05, July 2005.
[85] Yu Hen Hu, "Programmable Digital Signal Processors: Architecture, Programming and Applications", Marcel Dekker Inc., USA, 2002.


[86] R.W. Robert, "The FFT: Fundamentals and Concepts", Prentice-Hall, New Jersey, 1998.
[87] N.R. Lomb, "Least-Squares Frequency Analysis of Unequally Spaced Data", Astrophysics and Space Science, vol. 39, pp. 447-462, 1976.
[88] F.J.M. Barning, "The Numerical Analysis of the Light-Curve of 12 Lacertae", Bulletin of the Astronomical Institutes of the Netherlands, pp. 22-28, 1963.
[89] P. Vanicek, "Further Development and Properties of the Spectral Analysis by Least-Squares Fit", Astrophysics and Space Science, pp. 10-33, 1971.
[90] J.D. Scargle, "Statistical Aspects of Spectral Analysis of Unevenly Spaced Data", Astrophysical Journal, vol. 263, pp. 835-853, 1982.
[91] S. de Waele and P.M.T. Broersen, "Time Domain Error Measures for Resampled Irregular Data", IEEE Transactions on Instrumentation and Measurement, pp. 751-756, Italy, May 1999.
[92] S. de Waele and P.M.T. Broersen, "Error Measures for Resampled Irregular Data", IEEE Transactions on Instrumentation and Measurement, vol. 49, no. 2, April 2000.
[93] A.V. Oppenheim, A.S. Willsky and I.T. Young, "Signals and Systems", Prentice-Hall, 1995.
[94] M. Vetterli, "A Theory of Multirate Filter Banks", IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 35, pp. 356-372, March 1987.
[95] S. Chu and C.S. Burrus, "Multirate Filter Designs Using Comb Filters", IEEE Transactions on Circuits and Systems, vol. 31, pp. 913-924, November 1984.
[96] Z. Duan, J. Zhang, C. Zhang and E. Mosca, "A Simple Design Method of Reduced-Order Filters and its Application to Multirate Filter Bank Design", Elsevier Journal of Signal Processing, vol. 86, pp. 1061-1075, 2006.
[97] J.E. Purcell, "Multirate Filter Design: An Introduction", Multimedia System Design Magazine, pp. 32-40, 1998.
[98] P.P. Vaidyanathan, "Multirate Systems and Filter Banks", Prentice-Hall, Englewood Cliffs, New Jersey, 1993.
[99] D. Gabor, "Theory of Communication", Journal of the IEE, vol. 93(3), pp. 429-457, 1946.
[100] F.J. Beutler, "Error-Free Recovery of Signals from Irregularly Spaced Samples", SIAM Review, vol. 8, pp. 328-335, 1966.
[101] F. Marvasti, "Nonuniform Sampling: Theory and Practice", Kluwer Academic/Plenum Publishers, New York, 2001.
[102] R. Shavelis and M. Greitans, "Spline-Based Signal Reconstruction Algorithm from Multiple Level Crossing Samples", SampTA'07, June 2007.
[103] D.M. Klamer and E. Masry, "Polynomial Interpolation of Randomly Sampled Band-Limited Functions and Processes", SIAM Journal on Applied Mathematics, vol. 42, no. 5, pp. 1004-1019, October 1982.
[104] F.B. Hildebrand, "Introduction to Numerical Analysis", McGraw-Hill, 1956.


Echantillonnage et Traitement Conditionnés par le Signal : Une Approche Prometteuse pour des Traitements Efficaces à Pas Adaptatifs

Résumé :

Recent advances in the fields of mobile systems and sensor networks demand ever more processing resources. In order to preserve the autonomy of these systems, minimizing energy consumption has become one of the most difficult industrial challenges. Starting from this observation, we improve power efficiency by adapting the system activity to the local characteristics of the input signal. Sampling and selection of the active parts of the signal, together with the extraction of local parameters, are the foundations of the proposed processing chain.

On the basis of the proposed processing chain, adaptive-rate filtering and adaptive-resolution analysis techniques are developed. The proposed solutions are signal-driven in nature: they adapt their processing load and their time-frequency resolution to the input signal. It is shown that these techniques deliver drastic computational gains while providing results of suitable quality compared to the classical approaches.

The proposed processing chain has also been characterized in terms of effective resolution. Moreover, it appears that, for an appropriate choice of the time resolution and the interpolation order, the proposed solution achieves a higher effective resolution than the classical approach.

Keywords: Level-crossing sampling, asynchronous design, activity selection, adaptive-rate filtering, adaptive-resolution spectral analysis, computational complexity.

Signal Driven Sampling and Processing: A Promising Approach for Computationally Efficient Adaptive Rate Solutions

Abstract:

Recent advances in the areas of mobile systems and sensor networks demand more and more processing resources. In order to maintain system autonomy, energy saving is becoming one of the most difficult industrial challenges in mobile computing. In this context, we aim to achieve power efficiency by smartly adapting the system processing activity to the local characteristics of the input signal. Activity acquisition and selection, along with local parameter extraction, are the basis of the proposed processing chain.

Based upon the proposed processing chain, adaptive-rate filtering and adaptive-resolution analysis techniques are devised. The proposed solutions adapt their processing load and time-frequency resolution by following the input signal variations. It is demonstrated that they achieve drastic computational gains while providing results of appropriate quality compared to their classical counterparts.

The performance of the proposed processing chain is also characterized in terms of its effective resolution. It is shown that, for an appropriate choice of the time resolution and interpolation order, the proposed system achieves a higher effective resolution than its classical counterpart.

Keywords: Level-Crossing Sampling, Asynchronous Design, Activity Selection, Adaptive Rate Filtering, Adaptive Resolution Analysis, Computational Complexity.

Thesis prepared at the TIMA laboratory (Techniques de l'Informatique et de la Microélectronique pour l'Architecture des systèmes intégrés), Grenoble INP, 46 avenue Félix Viallet, 38031 Grenoble Cedex, France.

ISBN : 978-2-84813-132-0