
SPF™ Technology

What is SPF™ Technology?

Innovative Technology’s bank note validators use SPF™ technology to validate bank notes. Sensor responses from the validators are digitised to produce a stream of numerical data. SPF™ uses mathematics to analyse this stream of numbers when deciding whether to reject a counterfeit or accept a genuine note. To maintain the industry-leading speed and accuracy of our validation products, it is vital that the mathematical algorithms and techniques we use remain at the cutting edge.


Describing Data

The single most important ingredient in the algorithm is the sensor data collected from genuine banknotes. It is vital to know exactly how genuine bank notes look in order to distinguish them from counterfeits. Genuine notes come in a wide variety of conditions depending on age and wear (refer to the March Technical Bulletin for more details on banknote ageing), so we collect data from as wide a spectrum of note conditions as possible.

Supporting over 100 different currencies across our range of past and present products has required the collection, storage and processing of thousands of gigabytes of data. Note data is categorised according to country, denomination, issue, face and orientation. Once note data has been collected, a process is required to reduce the information contained in gigabytes of stored data to just a few hundred kilobytes, suitable for downloading onto a validator. This process is known as ‘training’ the dataset to recognise notes.
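Purely as an illustration of the categorisation described above, the sketch below models one grouping key; the field names and types are assumptions for the example, not Innovative Technology’s actual data schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NoteCategory:
    """Hypothetical key for grouping stored note recordings."""
    country: str        # e.g. "GB"
    denomination: int   # e.g. 20
    issue: str          # note series / issue identifier
    face: str           # "front" or "back"
    orientation: str    # direction the note was fed into the validator

# Collected recordings could then be grouped per category before training,
# e.g. a mapping from NoteCategory to the list of digitised sensor streams
# recorded for notes in that category.
```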

The algorithms used in SPF™ technology automatically identify the security features within the note data that distinguish genuine notes from counterfeits and distinguish different denominations from one another. The algorithms convert the information of each security feature into a mathematical function, which can then be applied to any collected data to test whether that security feature exists. If the function is well designed, applying it to all our genuine note data will produce only a small range of values. By analysing the distribution of these values we can place thresholds around this small range that account for the uncertainty implicit in our measurements.
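As a rough sketch of how such thresholds might be derived, the example below applies a hypothetical feature function to genuine-note data and sets limits from the mean and spread of the resulting values; the plus-or-minus-three-standard-deviations band is an assumption for illustration, not the published SPF™ method.

```python
import statistics

def derive_thresholds(feature_fn, genuine_note_data, k=3.0):
    """Apply a feature function to genuine-note sensor data and derive
    pass/fail thresholds from the spread of the resulting values.

    feature_fn        -- maps one note's sensor data to a single number
    genuine_note_data -- iterable of sensor-data records from genuine notes
    k                 -- width of the accepted band in standard deviations
                         (the value 3.0 is an illustrative assumption)
    """
    values = [feature_fn(note) for note in genuine_note_data]
    mean = statistics.fmean(values)
    spread = statistics.stdev(values)
    return mean - k * spread, mean + k * spread
```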

This function, along with its thresholds, forms what we call a ‘test’. When a validator reads a note, we can apply each test to the sensor data. If the resulting value lies within the thresholds, the data has passed the test, and this is taken as evidence that the inserted note is genuine. Conversely, if the value is outside the thresholds, this is taken as evidence that the note is not genuine. Typically, we will construct between 10 and 20 tests, depending on the number and efficacy of the security features on the note in question.
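A ‘test’ in this sense could be sketched as a feature function paired with its thresholds, applied to the sensor data read from an inserted note; the structure and names below are illustrative assumptions only.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class Test:
    """One security-feature test: a feature function plus its thresholds."""
    feature_fn: Callable[[Sequence[float]], float]
    lower: float
    upper: float

    def passes(self, sensor_data: Sequence[float]) -> bool:
        # A value within the thresholds counts as evidence the note is genuine.
        return self.lower <= self.feature_fn(sensor_data) <= self.upper
```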

Combining Tests – Boosting

Most people, when given a banknote they suspect of being counterfeit, will run through a mental checklist to determine whether they believe the note to be genuine:
“Does the paper feel like banknote paper?”
“Is there microtext visible in the strip?”
“Can I see a watermark?”
“Is the registration between front and back accurate?”
These questions, and others like them, are analogous to the mathematical tests.

A genuine banknote may fail any one of these tests: the paper texture might have changed due to over-vigorous washing, or the watermark may be obscured by a build-up of dirt. The final determination as to whether to accept the note is therefore based on a combined impression of the answers to all of these questions.

SPF™ works in the same way. If note data fails any specific test, that is suspicious, but on its own it is not enough to reject the note. Fortunately, there is a mathematical theorem which states that as long as our tests are correct more often than pure guesswork, we can combine them into a single test with much better performance. This process of combining multiple fallible tests into a single high-performance algorithm is called Boosting. In essence, each individual test is allowed to vote on whether to accept the note, with each test given a different amount of influence depending on how well it performed during training.
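The weighted vote might be sketched as follows, where each test carries a weight reflecting how well it performed during training; the weighting and sign convention here follow classical boosting in the style of AdaBoost and are assumptions, not the exact SPF™ scheme.

```python
def boosted_decision(tests, weights, sensor_data):
    """Combine many fallible tests into one decision by weighted voting.

    tests       -- sequence of Test objects (see the earlier sketch)
    weights     -- one non-negative weight per test, larger for tests
                   that performed better during training
    sensor_data -- the digitised sensor response for the inserted note
    """
    # Each test votes +1 (looks genuine) or -1 (looks suspect),
    # scaled by its weight; the sign of the total is the decision.
    score = sum(w * (1.0 if t.passes(sensor_data) else -1.0)
                for t, w in zip(tests, weights))
    return score > 0.0   # True: accept as genuine, False: reject
```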


Validating the Validation

When we have our collection of tests, the dataset is ready for testing. When training the dataset, we withhold a proportion of the data from the training process. After training, the tests are applied to that unseen data, a process known as cross-validation. Only when the accuracy of this cross-validation reaches the required standard will the dataset be released to customers.
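In spirit, the hold-out check described above might look like the sketch below, which withholds part of the genuine-note data from training and measures how often the trained tests accept it; the 80/20 split and the acceptance-rate target are illustrative assumptions, not the release criteria used in practice.

```python
import random

def holdout_check(genuine_note_data, train_fn, accept_fn,
                  holdout_fraction=0.2, required_accept_rate=0.99):
    """Withhold a slice of genuine-note data from training, then measure
    how often the trained dataset accepts that unseen data.

    train_fn  -- builds the tests and their weights from the training portion
    accept_fn -- applies the trained tests to one note's sensor data,
                 returning True if the note would be accepted
    """
    data = list(genuine_note_data)
    random.shuffle(data)
    split = int(len(data) * (1.0 - holdout_fraction))
    trained = train_fn(data[:split])
    unseen = data[split:]
    accepted = sum(accept_fn(trained, note) for note in unseen)
    rate = accepted / max(1, len(unseen))
    return rate >= required_accept_rate, rate
```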