D4.1 First version of the iBorderCtrl software platform


        •        Based on the Border Managers’ evaluation of the “Risk Indicators”, RBAT will store these risk indicators in the “Risk” database so that they are available to the Border Guard for examination at the border crossing point.




                                   Figure 17 – RBAT – Pre-registration Phase
Border Crossing Point (BCP)
At the Border Crossing Point, once all checks have been completed by the border guard for a traveller, an .xml file containing a) all information stored in the iBorderCtrl database as entered by the traveller during the pre-registration phase and b) all risk scores provided by the other border crossing check modules (DAAT, BIO, FMT, HHD) for this traveller and stored in the “Risk” database, is “pushed” to RBAT. The same procedure is followed for each traveller going through checks at the borders.
Once the RBAT engine receives the .xml file with the above-mentioned data concerning a traveller, it will then (a minimal sketch of this flow is given after the list):
        •        Produce the overall risk for this traveller using the weight-based algorithm and store it in the “Risk” database.
        •        Produce Risk Indicators by comparing the traveller-related data against the existing rules authored by the Border Managers and checking for matches. During the border crossing phase, where the border guard must take a direct decision on the traveller’s admission or entry refusal and all data must be available immediately, the Border Managers do not have time to evaluate the produced “Risk Indicators” (synchronous check).
        •        RBAT will therefore store all produced “Risk Indicators” in the “Risk” database so that they are available to the Border Guard for examination.
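
The following minimal sketch (Python) only illustrates the flow just described. The XML element names (traveller_data, risk_scores), the module weights, and the helpers rule.matches and risk_db.store_* are assumptions introduced for illustration, not the actual iBorderCtrl schema or interfaces.

    # Illustrative sketch of the RBAT border-crossing flow described above.
    # All element/field names, weights and helper interfaces are hypothetical.
    import xml.etree.ElementTree as ET

    MODULE_WEIGHTS = {"DAAT": 0.30, "BIO": 0.25, "FMT": 0.25, "HHD": 0.20}  # placeholder weights

    def process_traveller(xml_payload, rules, risk_db):
        root = ET.fromstring(xml_payload)

        # a) traveller data entered during pre-registration (hypothetical element names)
        traveller = {e.tag: e.text for e in root.find("traveller_data")}

        # b) individual risk scores (0-100) from the other border-crossing modules
        scores = {e.get("module"): float(e.text) for e in root.find("risk_scores")}

        # Weight-based algorithm: weighted average of the module scores
        overall = sum(MODULE_WEIGHTS[m] * s for m, s in scores.items()) \
                  / sum(MODULE_WEIGHTS[m] for m in scores)
        risk_db.store_overall_risk(traveller["id"], overall)

        # Rule-based evaluation: match traveller data against the authored rules
        indicators = [rule.name for rule in rules if rule.matches(traveller)]
        risk_db.store_risk_indicators(traveller["id"], indicators)  # for the Border Guard
        return overall, indicators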








                                   Figure 18 RBAT – Border crossing Phase


       3.2   Multi Criteria Decision Analysis (MCDA) - Steps for the “weight”
       determination of each iBorderCtrl module
In order to calculate the overall risk of each traveller crossing the borders based on the individual risk scores produced by the other iBorderCtrl modules/tools, the significance and relative importance (“weight”) of each module to RBAT is going to be determined using the Multiple-criteria decision analysis (MCDA)9 technique. MCDA is a sub-discipline10 of operations research that explicitly evaluates multiple (sometimes conflicting) criteria in decision making. Using MCDA can be said11 to be a way of dealing with complex problems by breaking them into smaller pieces. After weighing some considerations and making judgements about the smaller components, the pieces are reassembled to present an overall picture to the decision maker. Most MCDA methods deal with discrete alternatives, which are described by a set of criteria. The information describing the alternatives may be known exactly, or it may be fuzzy or given as intervals.

9 “Multi-criteria analysis: a manual”, Department for Communities and Local Government: London, 2009
10 https://en.wikipedia.org/wiki/Multiple-criteria_decision_analysis
11 “Multiple criteria decision-making techniques and their applications – a review of the literature from 2000 to 2014”, Abbas Mardani, Journal of Economic Research, vol. 28, Sep 2015, pp. 516-571.
Establish the decision context
The main purpose of this step is to identify the risk input tools/parameters and to assess these options against a set of generic identified criteria. The final aim is to prioritise the significance (weight) of each option with regard to the final risk calculation.
All technical partners participated in this procedure, as all perspectives on the subject of the analysis should be covered, and each partner’s expertise (on their own tool or technology) led to useful and significant contributions to the MCDA. Moreover, all partners agreed with the result of the prioritisation analysis and the weight definition process.
The MCDA is structured to:
• show the best way forward regarding the weight determination of the (risk input) options
• prioritise the options
• clarify the differences between the options
The whole MCDA procedure and the specific steps followed constitute a scientifically proven methodology to facilitate decision making, and they will help to minimise threats to the weight calculation for each risk score provided by each tool.
Identify the options to be appraised
The identified options are all the iBorderCtrl modules which provide risk-related information, namely: DAAT, BIO (fingerprints), BIO (palm vein), FMT, HHD, RBAT, ELSI, BCAT and risk indicators. These options are placed in the first column of the traceability matrix presented in Table 9.
The MCDA should remain open to the possibility of modifying or adding options as the analysis progresses.
Identify objectives and criteria.
In this step, specific criteria are identified for assessing the consequences of each option. Furthermore, the criteria are organised in clusters under high-level and lower-level objectives in a hierarchy.
A hierarchical model of objectives and criteria, a value tree, has been developed as shown in Figure 19 below.








                                           Figure 19 Value tree
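
Since the figure itself is not reproduced here, the following minimal sketch only indicates how such a value tree can be represented. The objective names are taken from the consequence-table captions later in this section; the criterion names are placeholders, not the project’s actual criteria.

    # Hypothetical value tree: high-level objectives (named after the consequence
    # tables below) mapped to lower-level criteria (placeholder names only).
    value_tree = {
        "Technology Maturity":      ["criterion A", "criterion B"],
        "Accuracy and reliability": ["criterion A", "criterion B"],
        "Performance":              ["criterion A", "criterion B"],
        "Universality":             ["criterion A", "criterion B"],
        "Phase applied":            ["criterion A", "criterion B"],
    }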


The identified Objectives and respective Criteria are thoroughly explained below:








The objectives are placed in the second row of the traceability matrix presented in Table 8 (Performance matrix).
Table 8 Performance matrix




‘Scoring’. Assess the expected performance of each option against the criteria. Then assess the
value associated with the consequences of each option for each criterion.
A score should be assigned to each option against the criteria of each objective. Then the consistency of the scores on each criterion should be checked. For this purpose, a consequence table was created for each objective, placing the identified options in the first column of the consequence table and the respective criteria of each objective in the first row. The separate consequence tables for each objective have been filled in (according to the ranges explained in section 3.2.3 above) by the respective partner of each module and are presented in the tables below.
Table 9 Consequence table for “Technology Maturity” objective




The averaged outcomes of the above table (green cells) are placed in the second column of the performance matrix (Table 8).


Table 10 Consequence table for “Accuracy and reliability” objective




The averaged outcomes of the above table ( ) are placed in the third column of the performance matrix (Table 8).
Table 11 Consequence table for “Performance” objective




The averaged outcomes of the above table (grey cells) are placed in the fourth column of the performance matrix (Table 8).


Table 12 Consequence table for “Universality” objective




The averaged outcomes of the above table (purple cells) are placed in the fifth column of the performance matrix (Table 8).


Table 13 Consequence table for “Phase applied” objective




The averaged outcomes of the above table (pink cells) are placed in the sixth column of the performance matrix (Table 8).
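
As a minimal sketch of the averaging step above (the option names and scores below are placeholders, not the redacted project values):

    # Sketch: each option's scores against the criteria of one objective are
    # averaged; the average becomes that option's entry in the corresponding
    # performance-matrix column. All names and numbers are placeholders.
    consequence_table = {                      # e.g. the "Technology Maturity" objective
        "DAAT":               [80, 70, 90],    # one score per criterion
        "BIO (fingerprints)": [85, 75, 80],
        "FMT":                [60, 65, 70],
    }

    performance_column = {option: sum(scores) / len(scores)
                          for option, scores in consequence_table.items()}
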
Assign weights for each of the objectives to reflect their relative importance to the decision.
The weight on a criterion reflects both the range of difference of the options, and how much that difference
matters. So it may well happen that a criterion which is widely seen as ‘very important ’ – say accuracy and
reliability – will have a similar or lower weight than another relatively lower priority criterion – say
universality. Any numbers can be used for the weights as long as their ratios consistently represent the ratios
of the valuation of the differences in preferences between the top and bottom scores (whether 100 and 0 or
other numbers) of the scales which are being weighted.
The proposed weights per objective have been placed in the performance matrix (blue cells). Their sum equals 1.
Combine the scores for each option to derive an overall value (“weight” of each iBorderCtrl module).
In order to combine the scores for each option into an overall value, an option’s score on an objective is multiplied by the importance weight of that objective. This is applied for all the objectives, and the products are then summed up to give the overall preference score for that option. The process is repeated for the remaining options. The identified weights for each tool are calculated, as previously stated, as a weighted average for each option across the objectives and are presented in the last two columns of the performance matrix (red coloured cells). In the last column, the resulting weights are normalised so that they sum to 1.0 (but are displayed as 100).
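
A minimal sketch of this combination step is given below; the objective weights and matrix entries are placeholders standing in for the redacted values, not project data.

    # Sketch: overall preference score per option = sum over objectives of
    # (score on the objective) x (objective weight, weights summing to 1);
    # the results are then normalised so that the module weights sum to 100.
    objective_weights = {"Technology Maturity": 0.25, "Accuracy and reliability": 0.30,
                         "Performance": 0.20, "Universality": 0.15, "Phase applied": 0.10}

    performance_matrix = {   # option -> score per objective (0-100), illustrative only
        "DAAT": {"Technology Maturity": 80, "Accuracy and reliability": 75,
                 "Performance": 70, "Universality": 85, "Phase applied": 90},
        "FMT":  {"Technology Maturity": 60, "Accuracy and reliability": 70,
                 "Performance": 65, "Universality": 60, "Phase applied": 80},
    }

    overall = {option: sum(objective_weights[o] * score for o, score in row.items())
               for option, row in performance_matrix.items()}

    total = sum(overall.values())
    module_weights = {option: 100 * value / total for option, value in overall.items()}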



Examine the results.
The way forward was agreed by all partners based on their recommendations.
Sensitivity analysis.
The next step was to conduct a sensitivity analysis: do other preferences or weights affect the overall ordering of the options? Sensitivity analysis played a potentially useful role in helping to resolve disagreements between interest groups. Subsequently, the advantages and disadvantages of selected options were reviewed in order to compare pairs of options (and to create possible new options that might be better than those originally considered). The above steps were repeated until a ‘requisite’ model was obtained.
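
As a minimal sketch of such a sensitivity check (it reuses the illustrative objective_weights and performance_matrix from the previous sketch and is not the procedure actually applied in the project):

    # Sketch: perturb each objective weight, re-normalise, recompute the overall
    # scores and check whether the ordering of the options changes.
    def ranking(weights, matrix):
        scores = {option: sum(weights[o] * s for o, s in row.items())
                  for option, row in matrix.items()}
        return sorted(scores, key=scores.get, reverse=True)

    baseline = ranking(objective_weights, performance_matrix)
    for objective in objective_weights:
        perturbed = dict(objective_weights)
        perturbed[objective] *= 1.2                       # +20% on one objective
        norm = sum(perturbed.values())
        perturbed = {o: w / norm for o, w in perturbed.items()}
        if ranking(perturbed, performance_matrix) != baseline:
            print("Ordering is sensitive to the weight of", objective)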


      3.3       System technical description
This section describes how the RBAT engine works (weight-based algorithm, rule authoring environment, retrieval of risk scores from the risk database) and how communication with the other modules of the iBorderCtrl system is achieved.

3.3.1 Weight based algorithm for the final risk score calculation
The outcome of the RBAT, namely the overall risk score provided for the pre-registration and the overall risk score (in terms of Admission, Refusal or second-line check for the traveller), is determined through a weight-based algorithm. This algorithm takes into account both the individual risk scores provided by each respective iBorderCtrl module and the weight of each tool (determined by the MCDA technique described in section 3.2). The weight-based algorithm is flexible and can be defined through the user interface. The user is able to define both the limits of the algorithm (low, medium, high risk) and the objects that participate in it; in this case, all the iBorderCtrl modules that are related to the risk assessment procedure and provide feedback to RBAT.
The main purpose of RBAT is to support the final decision of the border guard (admission, second-line check or refusal) and the assessment of these options by providing an overall risk score and Risk Indicators, based on the risk scores provided by each tool and on the rules that the Border Managers are able to author, respectively.
RBAT’s weight-based algorithm is structured to:
• show the decision maker (border guard) the best way forward
• prioritise the incoming risks
• help the key players to understand the situation better
• improve communication between parts of the iBorderCtrl system
The MCDA procedure presented in section 3.2 for the “weight” calculation (range: 0-100) of each tool will contribute to minimising threats to the final risk calculation by RBAT. The identified weights for each tool are presented in the last two columns of the performance matrix (red coloured cells).
Each tool is going to provide its own risk score. The risk score of each tool is expected by RBAT in a range of 0-100 (value 0 represents 100% refusal to pass and value 100 represents 100% admission). In order to combine the weights and scores for each tool into an overall risk value, a weighted average of the risk scores of all modules is calculated, giving the total risk score.
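
A minimal sketch of this calculation is given below; the module weights, the low/medium/high limits and the mapping of the levels to admission, second-line check or refusal are placeholders that would in practice come from the MCDA result and from the limits defined through the user interface.

    # Sketch of the weight-based algorithm: weighted average of the module risk
    # scores (0 = 100% refusal, 100 = 100% admission), then comparison against
    # user-defined limits. All weights, limits and level labels are placeholders.
    MODULE_WEIGHTS = {"DAAT": 30, "BIO": 25, "FMT": 25, "HHD": 20}   # from the MCDA (0-100)
    LIMITS = {"high_risk_below": 40, "medium_risk_below": 70}        # defined in the UI

    def overall_risk(scores):
        """scores: module name -> individual risk score in the range 0-100."""
        weighted_sum = sum(MODULE_WEIGHTS[m] * s for m, s in scores.items())
        total_weight = sum(MODULE_WEIGHTS[m] for m in scores)
        total = weighted_sum / total_weight                          # weighted average

        if total < LIMITS["high_risk_below"]:
            level = "high risk"      # e.g. refusal or second-line check
        elif total < LIMITS["medium_risk_below"]:
            level = "medium risk"    # e.g. second-line check
        else:
            level = "low risk"       # e.g. admission
        return total, level

    print(overall_risk({"DAAT": 90, "BIO": 85, "FMT": 60, "HHD": 75}))
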
In summary, RBAT offers two evaluation methods:
        •      Weight-Based Algorithm: calculates the overall risk of each traveller crossing the borders based on the individual risk scores produced by the other iBorderCtrl modules/tools and the weight of each tool.
        •      Rule-Based Evaluation: produces risk indicators based on previously identified “Risk objects” and the traveller’s data.





3.3.2   Rule Authoring environment
The Rule Authoring environment is a core functionality of the RBAT module. It is a point-and-click graphical environment which enables the Border Manager to author rules through structured, non-technical expressions of logical interactions between previously identified “Risk objects”, with the aim of producing risk indicators. All the iBorderCtrl database fields are automatically translated into “Risk objects”, which integrate the exchanged information into a unified and familiar view of the underlying message-based infrastructure. An example of “Risk objects” generated from the iBorderCtrl fields of the Traveller table is presented in the figure below.




The RBAT module enables the Border Manager to create complex rules and queries using the provided “Risk objects”. The rules are structured, non-technical expressions of logical interactions between “Risk objects”, resulting in a specific assessment (Risk Indicators). Hence, the Border Manager can define complex criteria based on the information received per traveller, execute "what-if" scenarios and even test new rules and criteria.
A rule expresses a query combining the input data and then expresses the proposed action. The RBAT rule authoring environment provides a wide array of logical operators (e.g. equal to, not equal, starts with, ends with, contained in, etc.) and comparison options. It also provides the ability to organise the rules into logical groups for efficient maintenance. An example of a very simple defined rule is presented in the figure below:
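
Complementing that figure (which is not reproduced here), the sketch below indicates how such a rule could be represented and evaluated. The operator names mirror the kinds listed above, while the Risk-object names, the example rule and the traveller values are placeholders, not the actual RBAT rule format.

    # Sketch of rule-based evaluation over "Risk objects". A rule is a list of
    # (risk object, operator, reference value) conditions that must all hold for
    # the corresponding Risk Indicator to be produced. Names/values are placeholders.
    OPERATORS = {
        "equal_to":     lambda value, ref: value == ref,
        "not_equal":    lambda value, ref: value != ref,
        "starts_with":  lambda value, ref: str(value).startswith(ref),
        "ends_with":    lambda value, ref: str(value).endswith(ref),
        "contained_in": lambda value, ref: value in ref,
    }

    def rule_matches(rule, risk_objects):
        return all(OPERATORS[op](risk_objects.get(obj), ref) for obj, op, ref in rule)

    # Hypothetical rule authored by a Border Manager (placeholder fields and values)
    example_rule = [
        ("Traveller.nationality",     "contained_in", ["AA", "BB"]),
        ("Traveller.document_number", "starts_with",  "X"),
    ]
    traveller = {"Traveller.nationality": "AA", "Traveller.document_number": "X123456"}

    if rule_matches(example_rule, traveller):
        print("Risk Indicator produced and stored for Border Guard review")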



