THE PROBLEM OF ON-LINE TESTING METHODS IN APPROXIMATE DATA PROCESSING


A. Drozd, M. Lobachev, J. Drozd
Odessa National Polytechnic University, Odessa National I.I. Mechnikov University, Odessa, Ukraine
Drozd@ukr.net, Lobachev@ukr.net

Abstract

This paper is devoted to on-line testing methods based on self-checking techniques. These methods aim to estimate the reliability of the result calculated at the output of a circuit during operation. The definitions of the totally self-checking circuit fix assumptions that confine the development of on-line testing of computing circuits to the framework of exact data processing. However, most processed numbers are approximate data, and the errors produced by faults of the computing circuits are in most cases inessential for the reliability of an approximate result. Under these conditions on-line testing methods demonstrate a new property: they reject reliable results when inessential errors are detected. This creates the problem of low reliability of result checking. Ways to increase the reliability of on-line testing methods are offered.

1. Introduction

The basic requirements imposed on on-line testing methods are formulated in the definitions of totally self-checking circuits (TSC) [1]. A circuit is fault-secure for a set of faults F if, for every fault in F, the circuit never produces an incorrect codeword at the output for an input codeword. A circuit is self-testing for a set of faults F if, for every fault in F, the circuit produces a non-codeword at the output for at least one input codeword. A circuit that is both fault-secure and self-testing is said to be totally self-checking.

As can be seen from these definitions, on-line testing methods aim to detect faults of a circuit during the main operations on actual data. A fault should be detected as soon as the first erroneous operation result occurs [2]. The TSC definitions have played an important role in the development of on-line testing. Using parity, residue and other checking methods, the following self-checking circuits have been designed: combinational circuits, asynchronous and synchronous sequential machines [3-7]; self-checking adders and ALUs, multiply and divide arrays [8-10]. The self-checking technique continues to define the development of on-line testing today.

However, the TSC definitions also have a negative influence on the development of on-line testing. They fix the following assumptions:
- a correct circuit calculates a reliable result, and a non-reliable result is received only from a faulty circuit;
- the purpose of on-line testing is to detect a circuit fault and to estimate circuit reliability;
- on-line testing methods have to detect a fault using the first error produced in a calculated result.

These assumptions seem absolutely true; however, a detailed analysis shows that they hold only in the case of exact data processing.

Section 2 analyzes the purpose of on-line testing: it is too late to detect circuit faults during actual data processing, and on-line testing methods should instead aim to estimate the reliability of calculated results. Section 3 considers the model of exact data, which is the reason why a non-actual purpose is declared for on-line testing. Section 4 examines the features of approximate data that refute the first assumption. For circuits that process approximate data, traditional on-line testing methods provide low reliability of result checking; this problem is defined in Section 5.
Section 6 offers various ways to increase the reliability of checking approximate results.

2. Purpose of on-line testing

The first assumption asserts that a correct circuit calculates a reliable result and that a non-reliable result is received only from a faulty circuit. Is this really true? In fact, a correct circuit is needed only for calculating a reliable result; the circuit by itself is not the goal. The definitions of fault-secure and self-testing circuits exclude incorrect codewords and use non-codewords at the output for fault detection. According to these definitions, on-line testing aims to detect a circuit fault using actual data. The declared purpose of on-line testing is thus to estimate the reliability of the circuit during operation, answering the question "Is the circuit correct or not?"

Is the declared purpose correct? To answer this question, we can compare the process of calculation with an airplane flight. Detection of airplane faults should be carried out before the flight starts; searching for faults during the flight would greatly surprise the passengers. Faults can be detected more efficiently during pauses in operation, using off-line testing methods. During operation, on-line testing methods should aim to estimate the reliability of the results.

What is on-line testing used for in practice? According to the TSC definitions, an on-line testing method has to detect a fault using the first error produced in a result. As is known, errors are produced by transient and permanent faults. Transient faults are active only for a short time and occur much more often than permanent faults. Therefore, as a rule, the first detected error is produced by a transient fault, and after a short period of time the circuit is correct again [11, 12]. This is why the first error needs to be detected only in order to estimate the reliability of the result.

Is it possible to estimate the reliability of the circuit after detecting the first error? If the first detected error is produced by a transient fault, the conclusion that the circuit is faulty will no longer hold after a short period of time. Detecting one error is not enough to identify a permanent fault; this requires detecting many errors. Therefore, the first detected error cannot answer the question "Is the circuit faulty or not?" Thus, the actual purpose of on-line testing is to estimate the reliability of calculated results.

3. Model of Exact Data

We thus have two purposes of on-line testing: the declared one and the actual one. The declared purpose is to estimate circuit reliability, and the actual purpose is to estimate result reliability. This raises the following questions: Why is a non-actual purpose declared? What is the reason for it? How can the declared and actual purposes differ?

The reason is the Model of Exact Data (MED). Under this model all numbers, independently of their true nature, are considered as exact data. What are exact data? By their nature, exact data contain only integers, which number the elements of a set by ordinal numerals. For example, consider a set of n-bit codewords. They can be numbered with ordinal numerals as the first, the second and so on; these numbers are integer by nature.

Representation of data by codewords in the design and test of computing circuits has led to the development of the MED. Nobody declared this model, but it defines: the logic of a self-checking circuit, which calculates a reliable result only on a correct circuit; the purpose of on-line testing, which is to estimate the reliability of a circuit by detecting its faults; the requirement that on-line testing methods detect the first error produced by a circuit fault; and the development of on-line testing only within the framework of exact data processing.

Codewords and non-codewords in the definitions of fault-secure and self-testing circuits are exact data: they contain only exact bits. Any error in the bits of a codeword makes it incorrect and non-reliable. All errors are essential for the reliability of an exact result, and a detected error simultaneously shows that the calculated result is non-reliable and that the circuit has a fault.

However, the main part of all numbers is approximate data. They comprise all other numbers, which are results of measurements and of their processing.
The significance of approximate data processing grows rapidly with the development of computers. For example, the Intel 286 and 386 processors were complemented in personal computers by the external 287 and 387 coprocessors. Starting from the Intel 486DX, floating-point operations are performed by an on-chip coprocessor, and Pentium processors contain pipelined floating-point units. Is on-line testing ready for such development? How capable are the MED-based on-line testing methods of supporting approximate data processing?

4. Features of approximate data

An approximate number A is most naturally represented as a product. For example, in floating-point formats [13]:

A = S * B^Ex,   (1)

where S is the mantissa (significand), B is the base of the numerical system, and Ex is the exponent.

Approximate calculations have the following features caused by rounding of the data: deletion of the low bits of the calculated result; data processing in extended formats; matching of the exponents.

The first feature comes from the theory of errors. According to this theory, the number of exact bits in a result does not exceed the number of exact bits in the operands. Therefore, the main floating-point formats have single precision [14]. Processing of approximate numbers necessarily contains multiplication, because multiplication is contained in the number notation (1). Multiplication doubles the size of the result compared with the operands. The low bits of the complete double-size product are non-exact and are discarded in single formats.

The second feature of approximate calculations is connected with violation of the associative law of arithmetic operations for approximate data.
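As a minimal sketch of this effect, assuming Python's built-in 64-bit float (53-bit mantissa) with the constant 2**53 standing in for the large operand of the example that follows, the order of additions changes the result:

    # Violation of the associative law in floating-point addition.
    # 2**53 is the smallest Python float whose neighbouring values differ
    # by 2, so a single added 1.0 is lost during exponent matching.
    big = 2.0 ** 53
    units = [1.0] * 1000

    # Left-to-right summation: every unit is rounded away, one at a time.
    left_to_right = big
    for u in units:
        left_to_right += u

    # Summing the small addends first keeps their total contribution.
    units_first = big + sum(units)

    print(left_to_right == big)           # True  -> the units were lost
    print(units_first == big + 1000.0)    # True  -> the expected result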

For instance, suppose one million must be summed with one million units using binary operations and an n-bit codeword for the mantissa representation, n < 20. The process of addition is shown in Figure 1 for two variants of solving this task.

Figure 1: Violation of the associative law for the approximate data (left: sequential addition of units to one million; right: pairwise summation of the units first).

On the left, addition of a unit to one million makes the result equal to one million, because the unit is lost during exponent matching. One million of such operations will also leave the result equal to the first number, that is, one million. To obtain the correct result, the order of operations must be changed, as shown on the right: first all pairs of units are summed, then the results of the previous additions are summed, and the first number is added in the last addition. To restore the associative law, the size of the codeword should be increased by using extended formats [15]. This example shows that a correct circuit can calculate a non-reliable result, which partially refutes the first assumption.

The third feature of approximate calculations comes from exponent matching. This action is performed in frequently used operations such as addition, subtraction and comparison of numbers. The mantissa of the number with the smaller exponent is shifted down with loss of its low bits. The low bits of the results of all previous operations are thereby excluded from the calculations.

Thus approximate data are handled with enlarged mantissas, and the results of calculations are rounded off with loss of low bits. An approximate result has exact high bits and non-exact bits in its low part.

Definition. An error produced by a fault of the computing circuit is called essential if it reduces the number of exact bits in the result; otherwise it is called inessential.

The share of essential errors in their total number is reduced under the impact of the following factors: exclusion of errors when the low result bits are discarded; increase of the share of inessential errors due to the use of extended formats; exclusion of errors in the results of previous operations during exponent matching.

The first factor K1 defines the share of errors remaining after exclusion of the low bits:

K1 = n / n_C,   K1 <= 1,

where n and n_C are the numbers of the remaining and the calculated result bits. For binary operations n_C = 2n and K1 = 0.5. According to the first factor, half of all errors are inessential. A faulty circuit can therefore calculate a reliable result in the case of inessential errors, which completely disproves the first assumption.

The second factor K2 can be estimated as

K2 = n_T / n,   K2 <= 1,

where n_T is the number of exact bits of the mantissa and n is the enlarged size of the mantissa in the extended format. For instance, the floating-point formats of personal computers make the mantissa 2.7 times larger (from 24 bits in the single format up to 64 bits in the double extended format), which gives K2 = 1/2.7 = 0.37. The use of the quadruple-precision format makes the mantissa 4.7 times larger (up to 113 bits), and K2 = 0.21 [15].

The third factor K3 is estimated by considering the shift of the mantissa with the smaller exponent during exponent matching. A shift by d positions leads to discarding d bits out of the 2n bits of the two n-bit operands. Therefore, on average, a fraction 0.5 d / n of the bits, and the errors in these bits, are excluded from all results of previous operations.
If all values of d are equiprobable, then the average shift is d = n / 2, and K3 is calculated as

K3 = 1 - 0.25 O_C / O_O,

where O_C is the amount of hardware of the computing circuit preceding the mantissa shifter and O_O is the total amount of hardware of the computing circuit. For several exponent-matching operations executed by the computing circuit, K3 is defined as the product of the factors calculated for each of these operations.

These factors can be considered as acting independently. Therefore the probability P_E that an error is essential can be estimated by the following formula [16]:

P_E = K1 K2 K3.   (2)

This formula shows that P_E << 1: the main part of the errors produced by a circuit fault in approximate data processing is inessential.

5. Estimation of the result checking reliability

Errors produced by circuit faults can be essential or inessential, and on-line testing methods can detect or skip these errors. This creates four variants of error consideration.
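Before these variants are laid out, formula (2) can be checked with a quick numeric sketch using the example values given above (K1 = 0.5 for binary operations, K2 = 24/64 for the double extended format); the ratio O_C / O_O = 0.5 for a single exponent-matching operation is an assumed illustration:

    # Probability that an error is essential, formula (2): P_E = K1 * K2 * K3.
    K1 = 0.5             # half of the double-size product bits are discarded
    K2 = 24 / 64         # 24 exact mantissa bits handled in a 64-bit extended format
    O_C_over_O_O = 0.5   # assumed share of hardware preceding the mantissa shifter
    K3 = 1 - 0.25 * O_C_over_O_O

    P_E = K1 * K2 * K3
    print(round(P_E, 3))   # about 0.164, i.e. P_E << 1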

The probabilities of detection and skipping of essential and inessential errors can be shown using a unit square (Figure 2). The actual splitting of this square into parts is shown in Figure 3.

Figure 2: Probabilities of error detection and skipping.
Figure 3: Actual estimation of the probabilities.

On the horizontal side of the square we show the probability P_E that an error is essential and the probability P_N = 1 - P_E that an error is inessential. On the vertical side we show the probability P_D of error detection and the probability P_S = 1 - P_D of error skipping. The square is divided into four parts, which define probabilities connected by the following formula:

P_DE + P_DN + P_SE + P_SN = 1,

where P_DE is the probability of detecting an essential error, P_DN the probability of detecting an inessential error, P_SE the probability of skipping an essential error, and P_SN the probability of skipping an inessential error. These probabilities are defined as follows:

P_DE = P_D P_E;  P_DN = P_D (1 - P_E);  P_SE = (1 - P_D) P_E;  P_SN = (1 - P_D)(1 - P_E).

An on-line testing method is reliable in two cases: it detects an error and the calculated result is indeed non-reliable, or it does not detect an error and the calculated result is reliable. Thus the reliability index D_RC of result checking can be defined from the probabilities in the first and last parts of the square as

D_RC = P_DE + P_SN = P_D P_E + (1 - P_D)(1 - P_E).   (3)

Modern on-line testing methods have a high error detection probability P_D. As shown above, for approximate results the probability P_E that an error is essential is low. The actual estimation of the probabilities shown in Figure 3 leads to the following conclusions: on-line testing methods expose a new property of rejecting reliable results when inessential errors are detected; the probability P_DN, shown in the second part of the square, is the highest; modern on-line testing methods have low reliability of result checking, since they mainly detect inessential errors.

Under the declared purpose of on-line testing, the method should detect a fault of the circuit independently of the error type (essential or inessential). This defines the reliability index D_FD of the on-line testing method as D_FD = P_DE + P_DN. The index D_FD is composed of the first and second parts of the square, as shown in Figure 4a. In the common case the high probability P_DN distinguishes this index from the reliability index D_RC. In the case P_E = 1 of exact data all errors are essential, and the index D_FD is identical to the reliability index D_RC, as shown in Figure 4b.

Figure 4: The common case (a) and the case of exact data, P_E = 1 (b).
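The consequence of formula (3) can be seen with a short calculation; in the sketch below, P_D = 0.99 is an assumed value for a traditional high-coverage method and P_E = 0.16 is taken from the sketch of formula (2) above:

    # Reliability of result checking, formula (3):
    # D_RC = P_D * P_E + (1 - P_D) * (1 - P_E)
    def d_rc(p_d, p_e):
        return p_d * p_e + (1 - p_d) * (1 - p_e)

    P_D = 0.99   # assumed error-detection probability of a traditional method
    P_E = 0.16   # probability that an error is essential (approximate data)

    print(round(d_rc(P_D, P_E), 3))   # about 0.167 for approximate data
    print(round(d_rc(P_D, 1.0), 3))   # 0.99 for exact data (P_E = 1)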

Comparison of the common case with the case of exact data allows the following conclusions: the declared purpose essentially differs from the actual purpose of on-line testing; the known reliability estimation of on-line testing methods is true only for the case of exact data; in the case of exact data, modern on-line testing methods have high reliability of result checking; on-line testing has been developed for the particular case of exact data.

6. Ways to increase the reliability of the on-line testing methods

Formula (3) defines three ways to increase the reliability of on-line testing methods in result checking: increasing the probability P_E; decreasing the probability P_D; and detecting essential and inessential errors with different probabilities P_De and P_Dn, where P_De > P_Dn.

The first way improves the reliability of known on-line testing methods with traditionally high error detection probability P_D by increasing the first part of the square (the probability P_DE). The second way improves the reliability of result checking by increasing the last part of the square (the probability P_SN); the low probability P_D also creates conditions for designing simple error detection circuits. These ways are shown in Figure 5a and 5b.

Figure 5: The first (a) and second (b) ways to improve reliability of the on-line testing methods.

The third way improves the reliability of result checking by using a high detection probability P_De for essential errors and a high skipping probability P_Sn = 1 - P_Dn for inessential errors, as shown in Figure 6. The first way has been successfully tested in the residue checking method developed for truncated arithmetic operations [17-19]. The second and third ways were implemented in the on-line testing method for a multiplier [20] and in the logarithmic check of arithmetic operations [21].

Figure 6: The third way to improve reliability of the on-line testing methods.

7. Results

The basic result of this paper is a new view on on-line testing of computing circuits in approximate data processing. It is shown in Table 1 in comparison with the old view.

Table 1: Old and new view on on-line testing

Old view: On-line testing is developed for the common case.
New view: On-line testing is developed only for exact data.

Old view: The purpose of on-line testing is to estimate the reliability of the computing circuit.
New view: The purpose of on-line testing is to estimate the reliability of the calculated result.

Old view: All processed numbers are considered as exact data.
New view: Basically, the processed numbers are approximate data.

Old view: All errors are essential for the result reliability.
New view: Basically, the errors are inessential.

Old view: Traditional on-line testing methods have high reliability, detecting almost all errors and faults.
New view: Traditional on-line testing methods have low reliability of result checking, mainly detecting inessential errors.

Old view: An error is essential with probability P_E = 1; a high probability P_D of error detection is good; all errors should be detected with high probability.
New view: The first way to improve the reliability of result checking is to increase P_E; the second way is to reduce P_D; the third way is to detect essential and inessential errors with different probabilities.
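The contrast summarized in Table 1 can also be checked numerically. In the sketch below the values are assumed for illustration only: the baseline uses P_D = 0.99 and P_E = 0.16 from the earlier sketches, the first way raises P_E to 0.5, the second way lowers P_D to 0.1, and the third way splits the detection probabilities into P_De = 0.95 and P_Dn = 0.05:

    # Result-checking reliability when essential and inessential errors are
    # detected with different probabilities P_De and P_Dn
    # (formula (3) is the special case P_De == P_Dn == P_D).
    def d_rc(p_de, p_dn, p_e):
        return p_de * p_e + (1 - p_dn) * (1 - p_e)

    P_E = 0.16
    print(round(d_rc(0.99, 0.99, P_E), 3))  # baseline traditional method: ~0.167
    print(round(d_rc(0.99, 0.99, 0.5), 3))  # first way, increased P_E: 0.5
    print(round(d_rc(0.10, 0.10, P_E), 3))  # second way, reduced P_D: ~0.772
    print(round(d_rc(0.95, 0.05, P_E), 3))  # third way, split probabilities: 0.95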

8. Conclusions

We have presented a new view on on-line testing based on self-checking techniques applied to computing circuits for approximate data processing. The analysis of the purpose of on-line testing has shown that during operation the on-line testing methods should aim to estimate the reliability of the calculated results. The study of the features of approximate calculations has shown that the errors produced by faults of the computing circuits are inessential in most cases of their detection. On-line testing methods demonstrate a new property of rejecting reliable approximate results by detecting inessential errors.

The estimation of traditional on-line testing methods has shown their high reliability in the case of exact data processing. For approximate calculations these methods have low efficiency: they basically detect inessential errors and reject reliable results. Practice shows that for a wide range of tasks the data can be treated as exact, which allows the rejected reliable results to be recalculated without large losses. However, the rejection of reliable results can be very expensive in the area of critical applications. The importance of approximate data processing constantly grows, and the area of critical applications extends. Therefore it is necessary to improve the reliability of on-line testing methods in result checking.

References

[1] D. A. Anderson and G. Metze, Design of Totally Self-Checking Circuits for n-out-of-m Codes, IEEE Trans. on Computers, vol. C-22, pp. 263-269, 1973.
[2] M. Favalli and S. Metra, Optimization of Error Detecting Codes for the Detection of Crosstalk Originated Errors, in Proc. of IEEE Design, Automation and Test in Europe, Munich, Germany, pp. 290-296, 2001.
[3] J. E. Smith and G. Metze, The Design of Totally Self-Checking Combinational Circuits, in Proc. Int. Symposium on Fault Tolerant Computing, Los Angeles, USA, pp. 130-134, 1977.
[4] M. Diaz, P. Azema and J. M. Ayache, Unified Design of Self-Checking and Fail-Safe Combinational Circuits and Sequential Machines, IEEE Trans. on Computers, vol. C-28, pp. 276-281, March 1979.
[5] F. Ozguner, Design of Totally Self-Checking Asynchronous and Synchronous Sequential Machines, in Proc. Int. Symposium on Fault Tolerant Computing, Los Angeles, USA, pp. 124-129, 1977.
[6] M. Nicolaidis and Y. Zorian, On-Line Testing for VLSI - a Compendium of Approaches, Journal of Electronic Testing: Theory and Applications (JETTA), vol. 12, pp. 7-20, 1998.
[7] M. Favalli and S. Metra, Problems due to Open Faults in the Interconnections of Self-Checking Data Path, in Proc. of IEEE Design, Automation and Test in Europe, Paris, France, pp. 612-617, 2002.
[8] W. Jenkins, The Design of Error Checkers for Self-Checking Residue Number Arithmetic, IEEE Trans. on Computers, vol. C-32, pp. 388-396, 1983.
[9] M. Nicolaidis, Efficient Implementation of Self-Checking Adders and ALUs, in Proc. 23rd Fault Tolerant Computing Symposium, Toulouse, France, pp. 586-595, 1993.
[10] M. Nicolaidis and H. Bedder, Efficient Implementation of Self-Checking Multiply and Divide Arrays, in Proc. European Design and Test Conference, Paris, France, pp. 134-137, 1994.
[11] M. Nicolaidis, IP for Embedded Robustness, in Proc. of IEEE Design, Automation and Test in Europe, Paris, France, pp. 240-241, 2002.
[12] I. Alzacher Noufal and M. Nicolaidis, A CAD Framework for Generating Self-Checking Multipliers Based on Residue Codes, in Proc. of IEEE Design, Automation and Test in Europe, Munich, Germany, pp. 122-129, 1999.
[13] D. Goldberg, What Every Computer Scientist Should Know About Floating-Point Arithmetic, ACM Computing Surveys, vol. 23, no. 1, pp. 5-18, 1991.
[14] ANSI/IEEE Std 754-1985, IEEE Standard for Binary Floating-Point Arithmetic, 1985.
[15] W. Kahan, IEEE Standard 754 for Binary Floating-Point Arithmetic, Lecture Notes on the Status of IEEE 754, Elect. Eng. & Computer Science, University of California, Berkeley, CA 94720-1776, May 1996.
[16] A. Drozd, On-Line Testing of Computing Circuits at Approximate Data Processing, in Proc. East-West Design & Test Conference, Yalta-Alushta, Ukraine, pp. 113-116, 2003.
[17] A. Drozd, M. Lobachev and W. Hassonah, Hardware Check of Arithmetic Devices with Abridged Execution of Operations, in Proc. European Design and Test Conference, Paris, France, p. 611, 1996.
[18] A. Drozd and M. Lobachev, Efficient On-line Testing Method for Floating-Point Adder, in Proc. of IEEE Design, Automation and Test in Europe, Munich, Germany, pp. 307-311, 2001.
[19] A. Drozd, M. Lobachev and J. Drozd, Efficient On-line Testing Method for a Floating-Point Iterative Array Divider, in Proc. of IEEE Design, Automation and Test in Europe, Paris, France, p. 1127, 2002.
[20] A. Drozd, Efficient Method of Failure Detection in Iterative Array Multiplier, in Proc. of IEEE Design, Automation and Test in Europe, Paris, France, p. 764, 2000.
[21] A. Drozd, R. Al-Azzeh, J. Drozd and M. Lobachev, The Logarithmic Checking Method for On-line Testing of Computing Circuits for Processing of the Approximated Data, in Proc. of Euromicro Symposium on Digital System Design, Rennes, France, pp. 416-413, 2004.