
Framework based on parameterized images on ResNet to identify intrusions in smartwatches or other related devices

The continuous appearance and improvement of mobile devices in the form of smartwatches, smartphones and other similar devices has led to a growing and unfair interest in putting their users under the magnifying glass and control of applications.

Published on Aug 10, 2021

Copyright 2020-2021 (and successive years) © All rights reserved. La Biblia de la IA - The Bible of AI. ISSN 2695-641. License: Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)


This English version is a literal translation of the publication in Spanish: 'Framework' basado en imágenes parametrizadas sobre ResNet para identificar intrusiones en 'smartwatches' u otros dispositivos afines. (Un eje singular de la publicación “Estado del arte de la ciencia de datos en el idioma español y su aplicación en el campo de la Inteligencia Artificial") | DOI: 10.21428/39829d0b.981b7276.


Abstract

A conceptual and algebraic framework, novel in its morphology, was developed. Moreover, it is pioneering in its implementation in the area of Artificial Intelligence (AI), and its structural aspects were brought into operation in the laboratory as a fully functional model. At a qualitative level, its greatest contribution to AI is the conversion, or transduction, of parameters obtained through ternary logic [1] (multi-valued systems) and their association with an image. This image is analysed by means of a residual artificial network, ResNet34 [2], [3], to warn us of an intrusion. The field of application of this framework ranges from smartwatches, tablets and PCs to home automation based on the KNX standard [4].


This framework proposes a form of reverse engineering for AI: drawing on common, revisable mathematical principles, it applies them to a 2D graphical image to detect intrusions. Through this reverse engineering, the device's security and privacy are scrutinised using artificial intelligence in order to mitigate these harms for end users.

Furthermore, the framework aspires to be a useful tool for society and for human beings, aligned with the «Opinion of the European Committee of the Regions – White Paper on Artificial Intelligence — A European approach to excellence and trust». It also contributes to avoiding the malpractice of generating black boxes [5] in the use of artificial intelligence [6], which would make it incomprehensible.

Confidence in, and clarification of, technology is a short-, medium- and long-term goal, so that society is not affected by spurious technologies that harm sound social development. «These technologies have extended the opportunities of freedom of expression and of social, civic and political mobilization. At the same time, they arouse serious concerns» [7].


I. Framework

1.- Need to know

There is a widespread availability of «intelligent» devices at our disposal, from digital cameras and tablets to smartwatches. The problem arises when ignorance of how these electronic devices are managed extends to a large percentage of the population [8], which thus remains unaware of the risks they might entail. Although it is true that an electronic device is intelligent because it interacts appropriately and autonomously, we cannot forget the underlying premise: if something brings benefits and solutions in specific situations, it must know in advance what is occurring. It all comes down to a simple word: data.

One of the most innovative electronic devices is the smartwatch. Smartwatches have sensors which identify models or human behaviour patterns based on machine-learning techniques, Bayes' theorem, data processing and the K-Nearest Neighbours method [9]. These procedures generate a large volume of information with which we intend to meet the goal by specifying the expected results. These sensors are very useful for monitoring human activity: walking, cycling, running, and walking up and down stairs [10].

The use of smartwatches may represent a serious threat to the safety of children and adolescents [11]. The most important safety failures of low-cost smartwatches occur in applications and in the connections with the servers which store data. The most popular brands of smartwatches used by minors are the Carl Kids Watch, hellOO! Children's Smart Watch, SMA-WATCH-M2 and GATOR Watch. The most frequently recurring problems are failures in the implementation of certificates for secure HTTPS connections [12] and unencrypted electronic records [13]. Nevertheless, in devices from popular manufacturers such as Nokia [14], Samsung and Huawei [15], these kinds of problems are infrequent because they use encrypted connections.

In that regard, it is important to note the conditions that affect the safety of each smartwatch brand; this information is useful when buying a smartwatch. The main criterion for choosing brands here is sales volume: Samsung, Apple, Fitbit and Garmin.

Image 1

Image credit: Security in Samsung and Apple devices by Adrián Hernández, 2021, Mangosta (https://mangosta.org/seguridad-en-los-dispositivos-samsung-y-apple/)

Image 2

Image credit: Security in Fitbit and Garmin devices by Adrián Hernández, 2021, Mangosta (https://mangosta.org/seguridad-en-dispositivos-fitbit-y-garmin/)

Regarding data collection, Samsung is the only brand that does not compile online information from children under 13 [16]. The same cannot be said of Apple [17], Fitbit [18] and Garmin [19], which make no distinction between ages. Additionally, all the brands chosen for our study share information with third parties to analyse metrics and compare results. According to these references, Samsung is the only company in which users' data are ceded, rented or sold. On the contrary, Apple is the only company that does not process data for advertising or commercial research purposes. The conclusion we draw is that all the brands compile non-anonymous data, albeit with differences.

2.- Algebraic description and framework calculation[20]

The systematics is based on the duality, or pairing, of the parameters that determine an intrusion and its associated image. The question of which parameters are to be defined will be the result of experience in the area of cybersecurity [21]; we will propose and expound them later.

2.1.-Algebraic description

Given a specific number of intrusion parameters named «n», that is, a numerical succession of terms $n \in \mathbb{N}$, $a_n = b^2$ with $b \geq 3$, characteristic of each instance of the framework to be determined; given $\vec{P}_x$, composed of a group of terms or axial elements with vectors indexed by «n» → $\vec{P}_{x(1...n)}$, whose modulus $|\vec{P}_{x(1...n)}|$ takes only the ternary values of the closed set $[0, 1, U]$; given a maximum number «m» of vectors of $\vec{P}_x$, $m \in \mathbb{Z}$, $[\vec{P}_1, \vec{P}_2, \vec{P}_3, \ldots, \vec{P}_m]_0^{f:\, n \to m}$, whose value is determined by $m = 3^n$; and given a continuous numerical succession of ordered pairs $(x_{(1...n)}, y_i)$ that applies bijectively to each element of the group $(\vec{Img};\, y)$, $\forall \text{ term } x \to y = x \leq m \,\wedge\, i = (n-1,\, n)$, we define this framework as:

An intrusion Vector System → $\vec{SV}_{intrusión} = [\vec{P}_{x(1...n)}, \vec{Img}_{y_i}]_0^{3^{(n)}}$, which satisfies the following: the associated, or transduced, image is a function of a parameter array (p) → {p1, p2, p3, p4, …, pn}; each nth set of these is vectorized as $\vec{P}_x$ such that $\vec{Img}_{y_i} = f(\vec{P}_{x(1...n)})_0^{3^{(n)}}$, and each vector of the set $\vec{P}_x$ has a bijective relation [22] with its reflected image $\vec{Img}_y$.

Thus, $\vec{Img}_y$ is the consequence of the linear transformation of the vector system $\vec{P}_x$, which is defined by the intrusion parameter array (p). This array is dynamically built $\forall\, x \leq m$ by this expression:

$$\vec{Img}_{y_i} \Leftrightarrow [\vec{P}_{x2} - \vec{P}_{x1},\; \vec{P}_{x3} - \vec{P}_{x2},\; \vec{P}_{x4} - \vec{P}_{x3},\; \ldots,\; \vec{P}_{xn} - \vec{P}_{x(n-1)},\; \vec{P}_{x1} - \vec{P}_{xn}] \quad \forall\, n \in \mathbb{N}$$

  • The closed polygonal image, which associates a specific array of intrusive parameters to be evaluated by Artificial Intelligence (AI), is defined by this vectorial expression:

$$\sum_{j=2}^{n} \left( \vec{P}_{xj} - \vec{P}_{x(j-1)} \right) + \left( \vec{P}_{x1} - \vec{P}_{xn} \right) = 0$$
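Realizing the axial vectors as points in the plane (one radial axis every 360/n degrees), the closed-polygon condition above can be checked numerically. This is a minimal Python sketch under that assumption; the magnitudes used are illustrative:

```python
import math

def axial_points(mags):
    """Place each ternary magnitude on its radial axis (360/n degrees apart)."""
    n = len(mags)
    return [(m * math.cos(math.radians(k * 360.0 / n)),
             m * math.sin(math.radians(k * 360.0 / n)))
            for k, m in enumerate(mags)]

def edge_vectors(pts):
    """Edges of Img: consecutive differences P(j) - P(j-1), plus the closing edge P1 - Pn."""
    n = len(pts)
    return [(pts[(k + 1) % n][0] - pts[k][0],
             pts[(k + 1) % n][1] - pts[k][1]) for k in range(n)]

# illustrative magnitudes on 9 axes (ternary values drawn at levels 1, 2, 3)
edges = edge_vectors(axial_points([1, 2, 3, 1, 2, 2, 1, 3, 1]))
sum_x = sum(e[0] for e in edges)
sum_y = sum(e[1] for e in edges)
# for any closed polygonal line, (sum_x, sum_y) is the zero vector
```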

A) FRAMEWORK’S DOMAINS

1) Domain {D1}: $\forall\, \vec{P}_{x(1...n)} \in \vec{\mathbb{N}} \,\wedge\, x_n \in [0, 1, U]$.

2) Domain {D2} of $\vec{Img}_{y_i} = f(\vec{P}_{x(1...n)})_0^{3^{(n)}} \in \vec{\mathbb{R}}^2 \to f: \mathbb{N} \to \mathbb{R}^2$.

3) Domain {D3} of the group $\vec{P}_x \in \vec{\mathbb{N}}^n$:

$$(\vec{P}_x) \Leftrightarrow (\vec{P}_1, \vec{P}_2, \vec{P}_3, \ldots, \vec{P}_m) \;/\; D_1\,[0,1,U]: \forall\, \vec{P}_{x(1...n)} \in \vec{\mathbb{N}} \,\wedge\, \forall\, \vec{P}_x \in \vec{\mathbb{N}}^n$$

4) Domain {D4}: $(\vec{Img}_y) \in \vec{\mathbb{R}}^n$.

B) POLAR COORDINATES OF THE FRAMEWORK

The previous linear transformation of $(\vec{Img}_y)$ yields a closed, polygonal, vectorial line that, in polar coordinates (easy to implement using radial charts), implies for $\vec{P}_{x(1...n)}$ a generating vector system $\vec{r} = |mod|_\alpha$ with modular value (0, 1, U) and angle α. The value of the angle lies between 0º and 360º, depending on the number of parameters used in the intrusion scan and associated with its axes. In the case of n = 9 parameters and a specific instance «x», $[\vec{P}_{x1}, \vec{P}_{x2}, \vec{P}_{x3}, \ldots, \vec{P}_{x9}]$, α will be assigned a value (the divergence between the system's vectors $\vec{P}_{x(1...9)}$) of 40º → $\frac{360º}{9}$.

2.1.1.- Nature of the dimensions of the intrusion Vector System $\vec{SV}_{intrusión} = [\vec{P}_{x(1...n)}, \vec{Img}_{y_i}]_0^{3^{(n)}}$

As an initial proposal of representation, radial graphics are used to generate images, with these axes acting as vector radii of a circumference in the plane, separated by an angle α. Of these, the element of most interest to us is the graphic itself, which lies completely within the linear application $f: \mathbb{N} \to \mathbb{R}^2$.

However, with regard to the coordinates of each vector and its algebraic nature, the real dimension would be:

1) Group of vectors $\vec{P}_{x(1...n)}$: {Dim1} $\mathbb{N} = 1$.

2) Group of vectors $\vec{Img}_{y_i}$: {Dim2} $\mathbb{R} = 2$.

3) Group of vectors $\vec{P}_x$: {Dim3} $\mathbb{N} = n$.

4) Group of vectors $\vec{Img}_y$: {Dim4} $\mathbb{R} = n$.

2.1.2.- Algebraic properties of $\vec{P}_{x(1...n)}$ and $\vec{Img}_{y_i}$ that make up $\vec{SV}_{intrusión} = [\vec{P}_{x(1...n)}, \vec{Img}_{y_i}]_0^{3^{(n)}}$

The properties to be analysed for this vector system range from the neutral element [23], symmetric element [24], and the associative [25], commutative [26] and distributive [27] properties, to the analysis of $\vec{P}_{x(1...n)}$ and $\vec{Img}_{y_i}$ as an abelian group [28], ring [29], field [30] and vector space [31]. Their development, testing and demonstration are proposed for the next version of this publication; they are omitted in this review in favour of a general conceptualization of the framework and its development in the laboratory.

2.1.3- Inferences of the algebraic definition

The theory of linear algebraic fields [32] can be applied to this mathematical model. As with radial graphics, which enclose areas within irregular polygons defined by their vertices, these can be operated on algebraically by applying the Gauss method, or Gauss area calculation [33], and its attached properties, as well as the properties of ternary logic and Boolean combinational algebra [34], among others. This is why the theorems and corollaries [35] for this framework, its application in AI and the detection of intrusions remain topics of study, with a wide set of possibilities across the different applications described here.

Image 3

Image credit: Example formula of Gauss area calculation, Wikipedia, Isalar derivative work: Nat2 (talk) - Polygon_area_formula.jpg, Mangosta (https://mangosta.org/formula-del-area-de-gauss/)
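As an illustration of the Gauss area calculation cited above, a short Python sketch of the shoelace formula applied to a polygon given by its ordered vertices:

```python
def gauss_area(vertices):
    """Gauss (shoelace) formula: area of a simple polygon from ordered vertices."""
    n = len(vertices)
    total = 0.0
    for k in range(n):
        x1, y1 = vertices[k]
        x2, y2 = vertices[(k + 1) % n]   # wrap around to close the polygon
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

# a 4 x 3 rectangle has area 12
area = gauss_area([(0, 0), (4, 0), (4, 3), (0, 3)])
```

The same function applies unchanged to the irregular polygons enclosed by the radial charts of this framework, since only the ordered vertices are needed.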

2.2.-Calculation and systematic use

The maximum number «m» of possible vectors with a sampling of n = 9 parameters and ternary logic (0, 1, U) is → $m = 3^9$; $\Delta \vec{r}_{(n=9)} = 19{,}683$ intrusion vectors. They increase as the number of parameters «n» rises to 16, 25, 36, etc., following the logic of square matrices and the numeric succession defined previously. For n ≤ 9 the use of AI is not sufficiently justified, because any programming language could handle it trivially by means of an algorithm or a specific library. However, n > 9 is justified by the computing power and speed in managing massive data of the graphic systems associated with artificial intelligence [36], according to the current offerings of service, server and graphics-card providers such as Google or IBM, among others. This capacity of AI becomes differentiating for values of «n» above 25, which would represent → $m = 3^{25}$; $\Delta \vec{r}_{(n=25)} = 847{,}288{,}609{,}443$ vectors, that is, almost one trillion vectors, which would give good precision in its intrusion estimates. To obtain these parameters, we capture the rules in the logs of IDSs [37] and use known techniques for monitoring systems (apps, etc.). The framework is flexible and open to other collection methodologies, which it could admit without further problems.
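The counts above follow directly from $m = 3^n$; a quick check in Python:

```python
# number of distinct intrusion vectors under ternary logic (0, 1, U)
def vector_count(n):
    return 3 ** n

print(vector_count(9))    # 19683
print(vector_count(25))   # 847288609443 (almost one trillion)
```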

The intrusion parameters serve as the basis for designing a graphical model of positives, negatives and training samples, kept separate and organized into content units, folders or directories. For each new identity of parameters and images, once the image has been transferred to the assigned content unit, the (previously trained) artificial intelligence tells us whether or not there is an intrusion. For the final decision on whether it is an intrusion, this framework proposes adding the possibility that a human being evaluates the answer emitted by the artificial intelligence. On the one hand, the AI warns of and scans all intrusions via $\vec{SV}_{intrusión} = [\vec{P}_{x(1...n)}, \vec{Img}_{y_i}]_0^{3^{(n)}}$ so that a technologist finally decides. On the other hand, a relational database of declared positives is managed, annexed to the system that integrates or encapsulates the framework. That, together with the performance of the AI on cases that are not yet positives, will allow an improved knowledge database to be generated, growing with the number of users' requests.

3.- Singular definitions and an example

3.1.-Definition in Object-oriented Programming (OOP)[38]

Against this background, we can abstract the idea into a class called 'Intrusion', and, by inheritance from this class, derive another defined by 9 parameters, which in turn generates the class called 'Intrusiónbase9'. 'Intrusiónbase9' is feasible to implement in any programming language. For each instance of this class, we will obtain an object of $\vec{SV}_{intrusión} = [\vec{P}_{x(1...n)}, \vec{Img}_{y_i}]_0^{3^{(n)}}$ for its subsequent classification as an intrusion [True/False].
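A minimal Python sketch of this class hierarchy; the method names and the placeholder decision are our assumptions (in the framework the real verdict comes from the ResNet34 model), and the accent is dropped from the class name for ASCII:

```python
class Intrusion:
    """Base class: a set of intrusion parameters with ternary values 0, 1, U."""
    def __init__(self, parameters):
        for p in parameters:
            if p not in ("0", "1", "U"):
                raise ValueError("parameters must be ternary: '0', '1' or 'U'")
        self.parameters = list(parameters)

class Intrusionbase9(Intrusion):
    """Instance of the framework defined by n = 9 parameters."""
    N = 9

    def __init__(self, parameters):
        if len(parameters) != self.N:
            raise ValueError("Intrusionbase9 expects exactly 9 parameters")
        super().__init__(parameters)

    def is_intrusion(self):
        # placeholder [True/False] decision; in the framework this verdict comes
        # from ResNet34 applied to the image associated with the parameters
        return "1" in self.parameters

sv = Intrusionbase9(["0", "1", "U", "0", "1", "1", "0", "U", "0"])
```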

3.2 Definition of 9 parameters

These 9 parameters constitute an instance of the general framework described previously. For this reason, it inherits all of its features, and the values (0, 1, U) specify: switched off or not in use, switched on or in use, and indeterminate, respectively.

Table 1

| Parameter (n = 9), associated with $\vec{P}_{x(1...n)}$ | Definition of parameter | Recommended procedure |
| --- | --- | --- |
| n=1 → Px1 | Unknown or unauthorized web destination (URL) | Snort rule [39], IDS or similar |
| n=2 → Px2 | Unencrypted communication (HTTPS) | " " |
| n=3 → Px3 | Local data transfer to other storage | " " |
| n=4 → Px4 | Use of or connection to a BSSID [40] not allowed or unknown | Kismet rule [41], WIDS or similar |
| n=5 → Px5 | Use of or connection to a Bluetooth port [42] not allowed or unauthorized | " " |
| n=6 → Px6 | Use of or connection to GPS [43] not allowed or unauthorized | " " |
| n=7 → Px7 | Use of or camera connection not allowed or unknown | Monitoring system or similar |
| n=8 → Px8 | Use of an audio device (microphone/speaker) not allowed or unknown | " " |
| n=9 → Px9 | Monitoring of physical parameters (speed, temperature, battery) with unusual result | " " |

3.3.- An application example

In general, $\vec{Img}_{y_i} = f(\vec{P}_{x(1...n)})_0^{3^{(n)}}$ holds. Let us look at an example with 9 parameters of an image at a given instant T1, for the instance $x = 1$: $\vec{Img1}_i = f(\vec{P1}_{(1...9)}) = f(\vec{P1}_1, \vec{P1}_2, \vec{P1}_3, \vec{P1}_4, \vec{P1}_5, \vec{P1}_6, \vec{P1}_7, \vec{P1}_8, \vec{P1}_9)$. The graphic image to be scrutinised by the artificial intelligence will be created from this linear transformation:

$$\vec{Img1}_i \Leftrightarrow [\vec{P1}_2 - \vec{P1}_1,\; \vec{P1}_3 - \vec{P1}_2,\; \vec{P1}_4 - \vec{P1}_3,\; \ldots,\; \vec{P1}_9 - \vec{P1}_8,\; \vec{P1}_1 - \vec{P1}_9]$$

The partial vectors ($\vec{Img1}_i$) would be dynamically built clockwise from the vectors $\vec{P1}_{(1...9)}$, with a lag of 40º between them.

$\vec{Img1}_1 = \vec{P1}_2 - \vec{P1}_1$; $\vec{Img1}_2 = \vec{P1}_3 - \vec{P1}_2$; $\vec{Img1}_3 = \vec{P1}_4 - \vec{P1}_3$; $\vec{Img1}_4 = \vec{P1}_5 - \vec{P1}_4$; $\vec{Img1}_5 = \vec{P1}_6 - \vec{P1}_5$; $\vec{Img1}_6 = \vec{P1}_7 - \vec{P1}_6$; $\vec{Img1}_7 = \vec{P1}_8 - \vec{P1}_7$; $\vec{Img1}_8 = \vec{P1}_9 - \vec{P1}_8$; $\vec{Img1}_9 = \vec{P1}_1 - \vec{P1}_9$

By assigning previously scanned values to the parameter array (p1) = (0, 1, U, 0, 1, 1, 0, U, 0), these intrinsically define $(\vec{P1}_{(1...9)})$, and its associated image $\vec{Img1}_i$ would be built as in the figure below. At the graphic level, indeterminations «U» have been assigned the value «3», while «1» and «2» correspond to the binary values (0, 1), respectively.
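The construction of these vertices can be sketched in Python. The level mapping (0→1, 1→2, U→3) and the clockwise 40º steps follow the text above; the function name is ours:

```python
import math

LEVEL = {"0": 1, "1": 2, "U": 3}   # graphic levels assigned to the ternary values

def radar_vertices(params):
    """Vertices of the associated image on a radial chart, built clockwise."""
    n = len(params)
    step = 360.0 / n               # 40 degrees for n = 9
    pts = []
    for k, p in enumerate(params):
        r = LEVEL[p]
        a = math.radians(-k * step)          # negative angle: clockwise direction
        pts.append((r * math.cos(a), r * math.sin(a)))
    return pts

p1 = ["0", "1", "U", "0", "1", "1", "0", "U", "0"]
vertices = radar_vertices(p1)      # 9 vertices; joining them in order yields Img1
```

Plotting these vertices on a polar chart (e.g. with Matplotlib) reproduces the figure below.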

Image 4

Image credit: Example of the capture of parameters by Juan Antonio Lloret Egea, 2021, Mangosta (https://mangosta.org/i_parametros-3/)

II.-Laboratory of the framework

4.-Implementation of the 9 parameters

4.1.-Unknown or unauthorized url destination

log tcp $HOME_NET any -> !DireccionIPConfianza any (sid: 1000001; rev: 001;)

log tcp any any -> !DireccionIPConfianza any (sid: 1000001; rev: 001;)

The rule can be defined in two ways. In the first, we use the variable $HOME_NET, previously defined, in which we specify our network. In the second, we do not specify the network we are using, but define the rule with the keyword "any"; this, despite being less efficient in terms of resources, ensures that it works. The rule can be read as: "Create a log when the TCP protocol is used to send packets from any port of our network to an address different from the trusted address".

4.2.-Unencrypted communication (HTTPS)

log tcp $HOME_NET any -> any any (content:"http"; sid: 1000002; rev: 001; )

log tcp any any -> any any (content:"http"; sid: 1000002; rev: 001; )

The rule can be defined in two ways. In the first, we use the variable $HOME_NET, previously defined, in which we specify our network. In the second, we do not specify the network we are using, but define the rule with the keyword "any"; this, despite being less efficient in terms of resources, ensures that it works. The rule can be read as: "Create a log when the TCP protocol is used to send packets from any port to any address and port with HTTP content".

4.3.-Local data transfer to other storage

log tcp $HOME_NET !puerto_confianza -> any any (sid: 1000003; rev: 001;)

log tcp any !puerto_confianza -> any any (sid: 1000003; rev: 001;)

This rule can be read as: "Create a log when the TCP protocol is used to send packets from any port other than the trusted one to any address and port."

4.4-Use or connection to BSSID not allowed or unknown

For Windows devices, Kismet can be used, or the check can be integrated in Python using the subprocess module (Windows 7 and above), which lets us obtain the state of the device's processes and the answer the operating system gives after entering a command in the console. To find out which BSSID the device is connected to, we use "netsh wlan show interfaces", which returns, among many other data, the BSSID to which the computer is connected. If this BSSID is not a known one, we will store "1" as the value; this will be used to build the graphic that the Artificial Intelligence uses to determine whether or not we are victims of an intrusion. If the BSSID is known, we will store "0". If neither is possible, this parameter will hold an indeterminate value.
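The Windows check can be sketched as follows. The whitelist and the parsing are assumptions (the label printed by netsh is locale-dependent), and in practice the text would come from subprocess.check_output(["netsh", "wlan", "show", "interfaces"]):

```python
KNOWN_BSSIDS = {"aa:bb:cc:dd:ee:ff"}   # hypothetical whitelist of trusted BSSIDs

def parse_bssid(netsh_output):
    """Extract the BSSID from the output of 'netsh wlan show interfaces'."""
    for line in netsh_output.splitlines():
        label, _, value = line.partition(":")
        if "BSSID" in label:
            return value.strip().lower() or None
    return None

def bssid_parameter(netsh_output):
    """Ternary value for parameter n=4: '0' known, '1' unknown, 'U' indeterminate."""
    bssid = parse_bssid(netsh_output)
    if bssid is None:
        return "U"
    return "0" if bssid in KNOWN_BSSIDS else "1"

# hypothetical console output for illustration
sample = "    Name  : Wi-Fi\n    BSSID : aa:bb:cc:dd:ee:ff\n"
```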

For Android, WifiInfo can be used, a class defined by Google to manage the device's connections [44], which inherits from the Object class [45]. The getBSSID() method will be used to obtain the BSSID as a 6-byte MAC address: XX:XX:XX:XX:XX:XX. Again, if the BSSID is known, "0" will be stored; if not, "1". Moreover, if it is not possible to determine the state of the device's connection, the value will be recorded as indeterminate.

4.5.-Use or connection to Bluetooth port not allowed or unauthorized

For Windows devices, the check can be integrated in Python by using the PyBluez library, which lets us obtain the state of the Bluetooth connection.

For Android, the BluetoothProfile interface can be used (a collection of methods and constants), defined by Google to manage Bluetooth connections on Android devices. We also need another interface, ServiceListener, which lets us easily query the (constant) states we are looking for:

STATE_CONNECTING: Device in connection status.
STATE_DISCONNECTING: Device in disconnection status.
STATE_CONNECTED: Connected device.
STATE_DISCONNECTED: Disconnected device.

These states may look similar, but the difference is that the first two report the state of our Bluetooth adapter, that is, whether Bluetooth is activated or not, whereas the last two report whether our connection is active. A device can have Bluetooth activated and yet not be connected to another device [46].
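As an illustration, these states can be folded into the framework's ternary value for parameter n = 5. The integer constants match the values documented for BluetoothProfile; the mapping itself is our assumption:

```python
# BluetoothProfile connection-state constants (as documented by Android)
STATE_DISCONNECTED = 0
STATE_CONNECTING = 1
STATE_CONNECTED = 2
STATE_DISCONNECTING = 3

def bluetooth_parameter(state):
    """Ternary value for parameter n=5 from a BluetoothProfile state."""
    if state == STATE_CONNECTED:
        return "1"      # in use
    if state == STATE_DISCONNECTED:
        return "0"      # not in use
    return "U"          # transitional states are indeterminate
```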

4.6.-Use or connection to GPS not allowed or unauthorized

For Windows devices, Kismet can be used, or the check can be integrated in Python by using the gpsd library, which lets us obtain the GPS position when the script is executed. If there is no connection, we will not get any position and will store "0" as the value; on the contrary, if we get a position, we will store "1".

For Android, GnssStatus.Callback can be used, a class defined by Google to manage the global navigation satellite system on Android devices. Another possibility is the GpsStatus tool, but it cannot monitor GLONASS, GALILEO or BEIDOU. In addition, we use the defined method onStarted(), which warns us when GNSS is activated, and its analogue onStopped(), when the process is stopped [47].

4.7.-Use or camera connection not allowed or unknown

For Windows devices, the cv2 library in Python is used. It gives us control over video devices, in this case to test whether the camera is active when the file is executed. If the camera is online, the test environment will show this message:

Image 5

Image credit: Console output online camera status

by Adrián Hernández, 2021, Mangosta (https://mangosta.org/i_parametros-2/)

Additionally, we will store "1" as the value in the variable defined for building the graphic with the rest of the parameters. If, instead, the camera is offline, the test environment will show this message:

Image 6

Image credit: Console output offline camera status

by Adrián Hernández, 2021, Mangosta (https://mangosta.org/ii_parametros-2/)

In this case, "0" will be stored as the value for the graphic.

For Android devices, android.hardware.camera2 [48] is used together with the Android callback CameraDevice.StateCallback [49], which includes the method onOpened(CameraDevice camera) and reports whether the camera is open or not.

4.8.-Use of audio device (microphone / speaker) not allowed or unknown

For Windows devices, Python and the PyAudio library are used. This library lets us capture an audio signal through the microphone and later visualise its temporal representation [50]. The aim is to capture the audio wave over whatever period we consider: if that line is not flat, the microphone may be active.

Image 7

Image credit: Microphone status by uses audio waves

by Adrián Hernández, 2021, Mangosta (https://mangosta.org/iii_parametros-2/)

For Android, AudioManager can be used, a class defined by Google to manage the microphone on Android devices [51]. It is also necessary to use the Context class [52] and the String AUDIO_SERVICE. From Android API 11 (corresponding to Honeycomb, Android 3.0.x) onwards, we can test whether the microphone is online using the defined constants MODE_IN_COMMUNICATION / MODE_IN_CALL.

The steps followed to create this structure are:

1.- Use getSystemService(java.lang.String).

2.- Include the Context class and the String AUDIO_SERVICE: getSystemService(Context.AUDIO_SERVICE)

3.- Include the sentence created in step 2 in the AudioManager class: (AudioManager) context.getSystemService(Context.AUDIO_SERVICE).

4.- The getMode method associated with AudioManager currently offers 5 results:

MODE_NORMAL: There are no calls or actions set.
MODE_RINGTONE: There is a request to the microphone.
MODE_CALL_SCREENING: There is a call connected but the audio is not in use.
MODE_IN_CALL: There is a phone call.
MODE_IN_COMMUNICATION: There is an application which is making audio/video or VoIP communications.

In short, if the method returns "MODE_IN_CALL" or "MODE_IN_COMMUNICATION", the microphone is active.

4.9.-Monitoring of physical parameters (speed, temperature, battery) with unusual results

For Windows devices, Python and the psutil library will be used. This library lets us monitor and retrieve information about the system, such as CPU, RAM, disk use, network or battery. Furthermore, it is multiplatform, which will allow us to integrate it into any operating system in the future [50].

A log can be generated using psutil, triggered by a usage percentage and a sustained period; exceeding that percentage would be considered abnormal use of the physical parameters: speed, temperature and battery. Moreover, if we want to monitor what is happening with the CPU and RAM in real time, we can use the Matplotlib library, which generates graphics from data contained in lists or arrays.
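The log criterion just described (a usage percentage held over a sustained period) can be sketched as a pure function. The threshold and window are assumptions; in practice the samples would come from calls such as psutil.cpu_percent():

```python
def unusual_usage(samples, threshold, sustained):
    """True if usage stays above `threshold` for `sustained` consecutive samples."""
    run = 0
    for value in samples:
        run = run + 1 if value > threshold else 0
        if run >= sustained:
            return True
    return False

# hypothetical CPU-percentage samples: a burst of four readings above 90 %
cpu = [12, 95, 97, 96, 98, 20]
flag = unusual_usage(cpu, threshold=90, sustained=3)   # sets parameter n=9 to "1"
```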

Image 8

Image credit: Monitoring of physical parameters by Adrián Hernández, 2021, Mangosta (https://mangosta.org/iv_parametros-2/)

4.10.-Graphs obtained from the parameters analysis

The polar graphs obtained to monitor each parameter allow the Artificial Intelligence to classify images and predict whether or not there is an intrusion. They follow this reasoning:

Image 9

Image credit: Polar graphics by Adrián Hernández, 2021, Mangosta (https://mangosta.org/vi_parametros-2/)

5.-Diagramming and systematic work in the implementation of the framework

For the design, compilation and execution of the framework in the laboratory, we considered different possibilities and programming languages. First, the use of AI itself, for training and image classification, which we call systematic work (S1) and for which Python is used. Second, the design of database storage and data use, both for known, defined devices (whether or not positive for intrusion) and for devices yet to be defined; for this systematic work (S2), SQL is used. Finally, the management of the shell itself and its general assembly; for this systematic work (S3), Java is used. (SQL is specific to the systematic work S2. For S1 and S3, a single programming language can be chosen between Python and Java; each presents advantages and disadvantages with respect to the use of AI and the target operating systems of smart devices.)

We will also present the flow diagrams of the logic to be used, both general and specific, adding, as complementary elements, the step-by-step use of the AI, the design of the database and, finally, the essential code in the different programming languages used: SQL, Python and Java.

5.1.-General diagram of the framework

In this section, we address the "skeleton" of the document. We present the general diagram, which is divided into three parts: the part related to the user, the database and the AI. The main objective is to get an overview of the procedures in order to understand the whole methodology.

Image 10

Image credit: Diagram of general flow of the framework by Diana Díaz, 2021, Mangosta (https://mangosta.org/diagramacion-general-framework/)

The general diagram has been designed using an open-source diagramming application called Drawio [53]. Notably, this diagram contains two further diagrams: the Java connection with the database and the use of fast.ai + Google Colaboratory, which also appear at the bottom.

Firstly, we start with the installation of the application. On signing in to the app, the user is asked whether he wants his device to be analysed. If the answer is yes, the process continues; if not, the app closes. If the process continues, the user has to enter the required data, that is, the type of device, brand and model. At the same time, Java connects to the database to validate the information, testing whether the device is registered and thus provides enough information to perform the analysis. If it is registered, the possible actions the device could be taking without the user's consent are verified against the history. If, on the contrary, the device does not exist in the database, the user's authorization is required to complete the process; if the user does not agree to this analysis, the application closes. Artificial intelligence is then used to test whether there is a risk of intrusion: we capture the 9 predefined parameters and feed them to a ResNet34 model pretrained with fast.ai, which indicates whether there is an intrusion. With the result of the AI, a professional analyses the obtained data and decides whether the results are consistent. If so, the data are stored in the database and the results are communicated to the user; if not, the data are discarded.
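The flow just described can be condensed into a small sketch; the function and its outcomes are our naming, and the AI verdict and expert review are represented by boolean inputs:

```python
def analyse_device(user_consents, ai_predicts_intrusion, expert_confirms):
    """Sketch of the general flow: consent -> AI verdict -> human review."""
    if not user_consents:
        return "app closed"                       # user declined the analysis
    verdict = "intrusion" if ai_predicts_intrusion else "no intrusion"
    if expert_confirms:
        return "stored and reported: " + verdict  # consistent result kept in the DB
    return "discarded"                            # inconsistent result is dropped
```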

5.2.-Java diagramming

The Java diagram is defined as systematic work (S2). The main objective of this diagram is to define, on the one hand, how Java accesses the database through the JDBC driver and, on the other, how a check is performed with an SQL query.

Image 11

Image credit: Java diagramming by Kimberly Riveros, 2021, Mangosta (https://mangosta.org/diagramacion-de-java/)

The JDBC (Java Database Connectivity) driver connects a database management system with any software written in Java [54]. Alongside the JDBC driver, we use the MySQL database management system to manage the database files and the phpMyAdmin tool, which administers MySQL and is available in XAMPP. We also work in the Java environment with Eclipse, an integrated development environment used to connect the database to Java [55]. It is worth adding that the Java diagram, like the rest of the diagrams (S1, S2 and S3), was designed with Drawio.

In the next section we explain how the connection between the database and Java is made, both locally and in the “cloud”, using Eclipse. Once all the applications are installed, the JDBC driver is added to our Eclipse project. As a general recommendation, it is advisable to create the database beforehand to avoid errors.

Image 12

Image credit: Connection to database - Java by Kimberly Riveros, 2021, Mangosta (https://mangosta.org/conexion-a-la-base-de-datos-java/)

The next step is to define the “connection” object in Eclipse to establish the connection, entering the path of the server holding the database we want to access. We name the database “security”, both locally and in the “cloud”. It is also necessary to specify the user and password for database access. Lastly, we look up the host and port in phpMyAdmin and include them in the path, thereby making the connection from the “cloud”.

Image 13

Access to code for professors and students: https://mangosta.org/conexion-a-la-base-de-datos-java001/

The local and “cloud” connections are independent of each other; each is made in a different Eclipse class. The only difference between the two classes is the path.

Image 14

Image credit: Connection to the database - Java by Kimberly Riveros, 2021, Mangosta (https://mangosta.org/conexion-a-la-base-de-datos-java002/)

Once the connection to the database is verified, we can make as many queries as we like in SQL. For instance, we execute a query using the “Statement” and “ResultSet” objects; the table and its records are then displayed in a window, since they have been called from Eclipse.

This section is covered in more detail under Java coding.

5.3.-Python diagram

The Python diagram is split in two: one part covers the connection of the database to Python, and the other the use of artificial intelligence in Python.

5.3.1.-Database connection diagram with Python

This diagram details the database connection using Python, defined as systematic work (S1), on the Google Colaboratory platform.

Image 15

Image credit: Python diagramming by Diana Díaz, 2021, Mangosta (https://mangosta.org/diagramacion-conexion-base-de-datos-con-python/)

The tools for this procedure are Google Colaboratory, a Google application that creates a virtualised environment based on Jupyter notebooks [56]; a Jupyter notebook is a file with the IPYNB extension that combines cells of code and Markdown text [57]. The MySQL Connector/Python connector, written in Python, allows us to connect to an external database, but a means of interaction is still needed; for this reason we use the PyMySQL package [58].

The next step is to connect the database to Python. Firstly, we access Google Colab and create a new notebook, which is automatically a Jupyter notebook. Once it is open, we install the MySQL Connector/Python connector with “pip”, a package management system written in Python [59], and then import it. The PyMySQL package is installed in the same way. At this point, if the process has not raised an error, we have everything needed to connect to the database from our Google Colab notebook. To connect, we use a variable in which, by means of the PyMySQL package, we store the connection information: host, user, password, and the name of the database we want to connect to. We then turn this variable into a cursor, since the cursor is what lets us search the database: we can execute whatever query we want over it and subsequently print the result obtained. Once the queries have been executed, we close the connection to the database.

This section is covered in more detail under Python coding.

5.3.2-Diagramming of the use of Artificial Intelligence in Python

This diagram, defined as systematic work (S3), explains how intrusions are detected. The objective is to study intrusions using images, in the spirit of an IDS; in particular, we have chosen images of tigers and cats. The next section details the steps required to detect intrusions using ResNet34, Jupyter notebooks and Google Colaboratory.

Image 16

Image credit: Diagramming of the use of Artificial Intelligence in Python by Adrián Hernández, 2021, Mangosta (https://mangosta.org/diagramacion-uso-de-la-inteligencia-artificial-en-python/)

To carry out this task we use Drawio (for the design of the diagrams), Google Drive and Google Colaboratory, which includes the Python programming language. We also work with fast.ai, a deep-learning library that allows pretrained models such as ResNet34 to be loaded for the subsequent classification of the images [60]. There is therefore no need to download any software, because all the processing is carried out in the “cloud”.

We then go to Google Drive and create three folders: “train”, “validation” and “test”. Within these folders we create two subfolders, “cat” and “tiger”, in which the images are grouped.

Image 17

Image credit: Storage of cat images by Kimberly Riveros, 2021, Mangosta (https://mangosta.org/gatos/)

Image 18

Image credit: Storage of the tigers images by Kimberly Riveros, 2021, Mangosta (https://mangosta.org/tigres/)

Subsequently, we enter Google Colaboratory and, in the first cell, indicate the path where our dataset is located; in this way we can train the model with these images. We then create the model with the pretrained ResNet34 architecture. By pretrained model we mean a model that allows other models to achieve advanced results without requiring huge amounts of computation, time, and patience [61]: it is a way of starting from an appropriately trained network, and if we want to reuse the model in the future, it does not have to be trained again. ResNet34 (residual neural network) is a model pretrained on ImageNet, a visual-object recognition system [62]. Together they help us improve performance and optimise the results of intrusion detection.

The next step consists of running a training cycle with a defined number of epochs (see below, in the process of detecting intrusions with artificial intelligence), so that the artificial intelligence can study every set of images. To interpret the results, we create a confusion matrix, which helps us evaluate the performance of the image-classification model by comparing the real values with the predicted ones; in this way we can see how the model is performing. In fact, in the following graphical representation we can check that the artificial intelligence errs on only one image: it classifies it as a tiger when it is a cat.

Image 19

Image credit: Answer of Artificial Intelligence by Kimberly Riveros, 2021, Mangosta (https://mangosta.org/i_respuesta-inteligencia-artificial/)

Image 20

Image credit: Answer of Artificial Intelligence (confusion matrix) by Kimberly Riveros, 2021, Mangosta (https://mangosta.org/ii_respuesta-inteligencia-artificial/)

6.-Step by step in the process of detecting intrusions with Artificial Intelligence: ResNet34, Jupyter notebooks, and Google Colaboratory

Google Colab is a working environment that lets us execute Jupyter notebooks in a web browser. Among its many advantages are that no prior configuration is needed and that it gives free access to GPUs (Tesla T4 or Tesla K80). It therefore offers the possibility of training and using artificial intelligence without subscriptions and without consuming our own hardware resources.

A Jupyter notebook is a document that executes cells of “live” code, plain text, equations and so on, structured as an ordered list of inputs and outputs. The name Jupyter comes from Julia + Python + R, the three programming languages originally supported by Jupyter notebooks. The main components of Jupyter Notebook are a set of kernels (interpreters) and the dashboard. By changing the notebook's kernel, we can also execute other languages from Google Colab, for instance Java.

Once we know enough about the components that will shape our artificial intelligence, we can explain the procedure:

The first step is to prepare the Google Colab environment, since we cannot access it without activating it. To do this, we click on Google Colaboratory from Google Drive.

Image 21

Image credit: Process of detecting intrusions with Artificial Intelligence: ResNet34, Jupyter Notebooks and Google Colaboratory by Adrián Hernández, 2021, Mangosta (https://mangosta.org/proceso-de-detectar-intrusiones-con-inteligencia-artificial/)

If the option does not appear, we can open this URL directly: https://colab.research.google.com/

Once we access Google Colab, Google's default notebook is available to introduce us to Python in the “cloud”. However, we can also create a blank notebook for our own tests by clicking on the “new notebook” option.

Image 22

Image credit: Process of detecting intrusions with Artificial Intelligence: ResNet34, Jupyter Notebooks and Google Colaboratory by Adrián Hernández, 2021, Mangosta (https://mangosta.org/proceso-de-detectar-intrusiones-con-inteligencia-artificial001/)

The newly created notebook has the following structure:

Image 23

Image credit: Process of detecting intrusions with Artificial Intelligence: ResNet34, Jupyter Notebooks and Google Colaboratory by Adrián Hernández, 2021, Mangosta (https://mangosta.org/proceso-de-detectar-intrusiones-con-inteligencia-artificial002/)

The cell that appears allows us to write Python code and execute it using the “play” button located in the same cell.

Image 24

Image credit: Process of detecting intrusions with Artificial Intelligence: ResNet34, Jupyter Notebooks and Google Colaboratory by Adrián Hernández, 2021, Mangosta (https://mangosta.org/proceso-de-detectar-intrusiones-con-inteligencia-artificial003/)

Google Colaboratory is not configured to use GPUs by default, so we have to set this up by clicking on the options “execution environment” and “change the execution environment type”.

Image 25

Image credit: Process of detecting intrusions with Artificial Intelligence: ResNet34, Jupyter Notebooks and Google Colaboratory by Adrián Hernández, 2021, Mangosta (https://mangosta.org/proceso-de-detectar-intrusiones-con-inteligencia-artificial004/)

Then we click on the drop-down menu showing “none” and replace it with “GPU”.

Image 26

Image credit: Process of detecting intrusions with Artificial Intelligence: ResNet34, Jupyter Notebooks and Google Colaboratory by Adrián Hernández, 2021, Mangosta (https://mangosta.org/proceso-de-detectar-intrusiones-con-inteligencia-artificial005/)

At this point Google will assign a GPU to the Jupyter notebook, which can be verified by running the following instruction in a code cell:

Image 27

Image credit: Process of detecting intrusions with Artificial Intelligence: ResNet34, Jupyter Notebooks and Google Colaboratory by Adrián Hernández, 2021, Mangosta (https://mangosta.org/proceso-de-detectar-intrusiones-con-inteligencia-artificial006/)

As we see in the image, a Tesla T4 has been assigned, although it could have been a Tesla K80, since these are the two GPUs that Google offers free of charge. The Google Colab environment is now ready for working with artificial intelligence, but we still need to create the folder structure we are going to use to train our model. We carry out this procedure from Google Drive to reduce its complexity, although it could also be done on a local host by configuring the Colab environment accordingly.
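The article shows the GPU-check instruction only as an image; a minimal sketch of one common way to query the assigned GPU from a notebook cell, assuming PyTorch is available (it is preinstalled in Colab), would be:

```python
# Hedged sketch: check which GPU Colab has assigned. The article's exact
# instruction is shown only as an image; this version assumes PyTorch.
def describe_gpu() -> str:
    try:
        import torch
        if torch.cuda.is_available():
            # e.g. "Tesla T4" or "Tesla K80", the two free Colab GPUs
            return torch.cuda.get_device_name(0)
    except ImportError:
        pass
    return "no GPU available"

print(describe_gpu())
```

Outside Colab, or without a GPU runtime assigned, the function simply reports that no GPU is available.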

For the first tests we use images of cats and tigers. The aim is that, given an image the AI has never seen before, it should be able to tell cats and tigers apart. The initial structure has three folders, which can be named freely, although for convenience it is advisable to define them as:

Image 28

Image credit: Process of detecting intrusions with Artificial Intelligence: ResNet34, Jupyter notebooks and Google Colaboratory by Adrián Hernández, 2021, Mangosta (https://mangosta.org/proceso-de-detectar-intrusiones-con-inteligencia-artificial007/)

The folder named Test holds images that the artificial intelligence has never seen before and is used to check that the AI classifies them correctly. The folder named Train is used for training the model; here images are separated into tigers and cats, so the artificial intelligence knows whether it is looking at a cat or a tiger. The folder named Validation serves to validate the model and contains the same cat/tiger distinction; it lets us know how the artificial intelligence will classify the images. Inside the Train and Validation folders we therefore have this structure:

Image 29

Image credit: Process of detecting intrusions with Artificial Intelligence: ResNet34, Jupyter notebooks and Google Colaboratory by Adrián Hernández, 2021, Mangosta (https://mangosta.org/proceso-de-detectar-intrusiones-con-inteligencia-artificial008/)

Inside each folder there are images of cats and tigers; the images in each folder should be different. Moreover, the greater the number of images we use, the more precisely the artificial intelligence will classify other images.

Once the folder structure has been created and the Google Colab environment is ready, we can start working with artificial intelligence. To do this, we execute the code that connects our Google Colab environment with Google Drive, where the folders the AI will use from the notebook are located.

Once both environments are linked, we define the paths, for instance root_dir and base_dir.

Similarly, we define the path, that is, the location of the main directory that we use.
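The two steps above (mounting Drive and defining root_dir and base_dir) can be sketched as follows; the mount point is Colab's usual default, and the "dataset" folder name is illustrative rather than taken from the article:

```python
# Hedged sketch: mount Google Drive and define the two paths named in the
# text. "dataset" is a hypothetical folder holding train/validation/test.
from pathlib import Path

def setup_paths(in_colab: bool = False):
    if in_colab:
        from google.colab import drive  # only importable inside Colab
        drive.mount('/content/drive')
    root_dir = Path('/content/drive/My Drive')
    base_dir = root_dir / 'dataset'  # main directory used throughout
    return root_dir, base_dir

root_dir, base_dir = setup_paths()
```

The `in_colab` flag keeps the sketch definable outside Colab, where the `google.colab` module does not exist.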

To train the artificial intelligence model we need to import fast.ai. ResNet34 refers to a residual network introduced by Microsoft in 2015. To use fast.ai, we import the library:

These instructions import the fast.ai and fast.ai vision libraries, as well as “error_rate” from the fast.ai metrics. “error_rate” is used to determine the degree of error in our trained model; in other words, it tells us, according to our own criteria, whether we have to improve the model or whether the error obtained is acceptable and the model is fit for use.

Once the libraries are imported, we should check that the notebook is set up correctly. To do this, we display a random image using the defined path with these instructions: open_image (the path of the image, built from the path variable) and img.show().

Image 34

Image credit: Process of detecting intrusions with Artificial Intelligence: ResNet34, Jupyter notebooks and Google Colaboratory by Adrián Hernández, 2021, Mangosta (https://mangosta.org/proceso-de-detectar-intrusiones-con-inteligencia-artificial013/) (.) Access to code for professors and students: https://mangosta.org/proceso-de-detectar-intrusiones-con-inteligencia-artificialc005/
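Since the article's cell is shown only as an image, a minimal sketch of this check, assuming the fastai v1 API used at the time, would be (the file name is illustrative; the import is kept inside the function so the cell can be defined without fastai installed):

```python
# Hedged sketch: display one image to confirm the notebook and paths work
# (fastai v1 API; the example path is hypothetical).
def show_sample(image_path):
    from fastai.vision import open_image  # fastai v1
    img = open_image(image_path)  # e.g. base_dir/'train'/'cat'/'001.jpg'
    img.show()                    # renders the image inline in the notebook
    return img
```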

Once we see that the image is displayed correctly, we create the data (the collection of images) that our model is going to use. To start, we define the batch size, which indicates the number of images sent to memory at the same time. As a precaution, it is advisable not to use a very high batch size; otherwise we may cause a memory error on the graphics card and consequently halt the process. To define the batch size we write this code:

In this case, the number 6 indicates how many images are sent to memory at the same time. Once the batch size is assigned, we define the data we are going to use, in this case images, with this code cell:

In this cell we define the variable data, based on the folder structure created previously (Train, Validation and Test) and indicating which folder corresponds to each predefined argument: train, valid and test.

Once the dataset is defined, we initialise our model: we create it and load ResNet34 with this code:

We use cnn_learner, passing it the dataset (data), the model (models.resnet34) and, since we want to track it, the error_rate imported from fastai.metrics. This line creates and loads the model, so the next step is to train it. To do this, we use:

The value in parentheses indicates the number of epochs used for training. But what is an epoch? An epoch is one full pass of all the data through the artificial intelligence; in particular, all the images in the training folder pass through the network once. When training runs for 6 epochs, every image in the Train folder passes through the AI 6 times.

Image 39

Image credit: Process of detecting intrusions with Artificial Intelligence: ResNet34, Jupyter Notebooks and Google Colaboratory by Adrián Hernández, 2021, Mangosta (https://mangosta.org/proceso-de-detectar-intrusiones-con-inteligencia-artificial018/) (.) Access to code for professors and students: https://mangosta.org/proceso-de-detectar-intrusiones-con-inteligencia-artificial-resnet34-cuadernos-jupyter-y-google-colaboratoryc009bis/
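The pipeline described in the preceding steps (batch size, data definition, learner creation and training) can be sketched as follows, assuming fastai v1, the version current when the article was written; the folder names come from the text, and the imports are deferred inside the function so the cell needs fastai only when actually run:

```python
# Hedged sketch of the training pipeline described above (fastai v1 API).
BATCH_SIZE = 6  # images sent to memory at once; kept small to avoid GPU memory errors

def train_cat_tiger_model(base_dir, epochs=6):
    from fastai.vision import (ImageDataBunch, cnn_learner, models,
                               get_transforms, imagenet_stats)
    from fastai.metrics import error_rate
    # Build the data from the Train/Validation/Test folder structure.
    data = ImageDataBunch.from_folder(
        base_dir, train='train', valid='validation', test='test',
        ds_tfms=get_transforms(), size=224, bs=BATCH_SIZE,
    ).normalize(imagenet_stats)
    # Create the learner on the pretrained ResNet34 and track error_rate.
    learn = cnn_learner(data, models.resnet34, metrics=error_rate)
    learn.fit_one_cycle(epochs)  # each epoch passes every training image once
    return learn
```

The `size=224` and `get_transforms()` arguments are common fastai v1 defaults assumed here, not values taken from the article.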

When training is finished, we look at the final error_rate of the trained model. In the image above, the error_rate is 0.003165 or, as a percentage, approximately 0.316%. It is important to understand what error_rate is for: it reports the precision of the model and lets us decide whether we should train for more epochs or, on the contrary, start using the model. If we are not sure what an error of 0.316% means, we can print a confusion matrix:

Image 40

Image credit: Process of detecting intrusions with Artificial Intelligence: ResNet34, Jupyter notebooks and Google Colaboratory by Adrián Hernández, 2021, Mangosta (https://mangosta.org/proceso-de-detectar-intrusiones-con-inteligencia-artificial-019/)

In this confusion matrix, the predictions of the artificial intelligence (cat or tiger) on the x-axis are compared with the real class of each image on the y-axis. Here the AI decides correctly on 331 images of cats and 299 images of tigers; however, on 2 images it predicts a cat when the image is really a tiger. The error is therefore 2/632 ≈ 0.316 %.
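The error rate can be recomputed directly from the matrix cells; the plotting helper shown afterwards is the usual fastai v1 call, kept inside a function since it needs a trained learner:

```python
# Reproducing the arithmetic from the confusion matrix in the text:
# 331 cats and 299 tigers classified correctly, 2 tigers predicted as cats.
correct_cats, correct_tigers, mistakes = 331, 299, 2
total = correct_cats + correct_tigers + mistakes        # 632 images in all
error_rate = mistakes / total
print(f"{error_rate:.6f} -> {error_rate * 100:.3f} %")  # 0.003165 -> 0.316 %

def plot_confusion(learn):
    # fastai v1 helper for drawing the matrix from a trained learner
    from fastai.vision import ClassificationInterpretation
    interp = ClassificationInterpretation.from_learner(learn)
    interp.plot_confusion_matrix()
```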

Once we consider the error acceptable, we save the model to use later.

The path and name of the model are indicated between brackets. Once the model is saved, we test it with images the AI has never seen, using a new code cell that contains the following:

Image 42

Image credit: Process of detecting intrusions with Artificial Intelligence by Adrián Hernández, 2021, Mangosta (https://mangosta.org/proceso-de-detectar-intrusiones-con-inteligencia-artificial020/) (.) Access to code for professors and students: https://mangosta.org/proceso-de-detectar-intrusiones-con-inteligencia-artificialc011/

At this point we use the model stored in learn and load an image from the Test folder, in this case number 520. We can also display the image in Google Colab to verify that what the AI predicts in the next step actually matches it; to display it we use img.show(). To “tell” the artificial intelligence to analyse the loaded image, we use this code:

Image 43

Image credit: Process of detecting intrusions with Artificial Intelligence by Adrián Hernández, 2021, Mangosta (https://mangosta.org/proceso-de-detectar-intrusiones-con-inteligencia-artificial021/) (.) Access to code for professors and students: https://mangosta.org/proceso-de-detectar-intrusiones-con-inteligencia-artificialc012/

Here we call the model again through learn and use predict(img) to apply the trained model. The result is given as tensor(0) for cats and tensor(1) for tigers; in addition, the probability of the image being a tiger (99.9%) is shown between brackets.
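The load-and-predict step above can be sketched as follows, again assuming the fastai v1 API; the image path is illustrative:

```python
# Hedged sketch: classify one unseen image with the trained model
# (fastai v1 API; the image path is hypothetical).
def classify(learn, image_path):
    from fastai.vision import open_image
    img = open_image(image_path)  # e.g. an image from the Test folder
    pred_class, pred_idx, probs = learn.predict(img)
    # pred_idx is tensor(0) for "cat" or tensor(1) for "tiger";
    # probs holds the class probabilities (e.g. ~0.999 for tiger).
    return pred_class, pred_idx, probs
```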

7.-Creation and structure of database

A database has been created to collect the data that smartwatches and related devices expose, in order to test whether all of these data are authorised and known by the user or, conversely, are not authorised. We therefore take two factors into account: the data needed to access the users' information and the data that pose a danger through unauthorised access. In this sense, we focus on the data collected by smartwatches through an additional table (THealth).

To design the diagram we use Drawio, as in the previous designs, together with database software: the SQL language, MySQL and phpMyAdmin.

The two fundamental keys of the database are Boolean data and trivalent logic, about which we shall have more to say later on. But what is the function of Boolean fields, and why do we use them in the database? The reason is to find valid information in less time, as well as to make combined searches across several terms [63].

To clarify the ideas behind the database, we explain the design and structure of the 5 tables that compose it. The first table, “TDevice”, has 3 fields: ID (an identity number, present in all tables as a common thread), the brand name (varchar) and the model (varchar). The second table, “TDeviceType”, has 6 fields: ID and five Boolean fields (smartwatch, portable PC, desktop computer, tablet and smartphone) that answer true or false depending on the type of device. The third table, “TUnauthorizedAccess”, has 4 fields: ID and 3 Boolean ones (camera, microphone and GPS); its aim is to know whether the device has access to any of them. The fourth table, “THealth”, has 8 fields: ID and 7 Boolean ones (heart rate, sleep, automatic exercise logging, stress, oxygen level, menstrual cycle, and steps); this table records what information the device collects. The last table, “TDetection”, has only two fields: ID and 1 Boolean (positive). This table is very important because, after checking the rest of the data from the other tables, it stores whether the device is positive or not, that is, whether or not it has an intrusion.

Image 44

Image credit: Entity-relationship diagram by Diana Díaz, 2021, Mangosta (https://mangosta.org/diagrama-entidad-relacion/)

As we can see in the previous image, the diagram is composed of three important elements:

  • Entity. Represented by rectangles showing the names of the tables, for instance TUnauthorizedAccess.

  • Attributes, with oval shape. They define the features of an entity; for example nID, 1Camera and so on.

  • Relations, with rhomboid shape. They show the connections between entities; for instance, a connection between two entities would be: TDevice has TUnauthorizedAccess.
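The table layout described above can be sketched as DDL. Only nID and 1Camera appear verbatim in the text, so the remaining column names are illustrative; in MySQL a BOOLEAN maps to TINYINT(1), and NULL supplies the third, undetermined truth value. The statements are held in Python strings, ready to be executed through a connector:

```python
# Hedged sketch: DDL implied by the table descriptions in this section.
# Column names other than nID and 1Camera are hypothetical.
DDL_TUNAUTHORIZEDACCESS = """
CREATE TABLE TUnauthorizedAccess (
  nID INT PRIMARY KEY,
  1Camera BOOLEAN,      -- BOOLEAN = TINYINT(1) in MySQL
  bMicrophone BOOLEAN,
  bGPS BOOLEAN
) DEFAULT CHARSET = utf8;
"""

DDL_TDETECTION = """
CREATE TABLE TDetection (
  nID INT PRIMARY KEY,
  bPositive BOOLEAN     -- TRUE / FALSE / NULL (intrusion, none, undetermined)
) DEFAULT CHARSET = utf8;
"""
```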

Trivalent logic

In designing the database we focus on query speed, which is important for the smooth development of the project; for this reason, Boolean fields are used.

Bivalent logic distinguishes two values, true and false, represented by 1 and 0 respectively [64]. Here we apply ternary or trivalent logic, a logical system with three values: true, false, or undefined. The third value captures the possibility that something is neither true nor false but undetermined; in the database, this indeterminate state, carrying no value, is represented as “null”.
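The three-valued fields can be illustrated in a few lines, with Python's None standing in for SQL's NULL:

```python
# Minimal illustration of the three truth values used in the database:
# True, False, or None ("null") when the value is undetermined.
def truth_value(field):
    if field is None:
        return "undefined"  # third value: neither true nor false
    return "true" if field else "false"

assert truth_value(True) == "true"
assert truth_value(False) == "false"
assert truth_value(None) == "undefined"
```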

Physical model

Once the design has been visualised and the table structure analysed, the next step is to study the database from the physical model.

Image 45

Image credit: Physical model by Diana Díaz, 2021, Mangosta (https://mangosta.org/modelo-fisico-2/)

Use of SQL and its database management system

SQL (Structured Query Language) became a standard of the American National Standards Institute in 1986 and of the International Organization for Standardization (ISO) in 1987 [65]. It has gained such prominence because it is a non-procedural language, that is, it specifies what is wanted but not how or where to get it, and it is relationally complete because it allows arbitrary queries [66].

Now that we know how SQL works, we can decide on the database management system. To develop the database we use MySQL, an open-source relational database management system; open source here means a software model based on open collaboration, focused on practical benefits such as access to the source code [67]. These features make it accessible and practical, and its extensive use makes it a reliable, standardised option. SQLite [68], on the other hand, is a database engine that also embeds the SQL language. Its main advantage is that it does not need a server to work, which makes it very useful for applications; another advantage is the little space it occupies (<500 kB), which would make it feasible to manage the database on the device rather than on our server.

So why prefer MySQL over SQLite? The database has been designed mainly with Boolean fields, an option SQLite does not offer: the only possibility would be to store Booleans as integers in its storage classes, slowing the database down. Additionally, SQLite is oriented to low-volume workloads, so with a large amount of data it is not as efficient as we need. In the end, we consider it most advantageous to continue with MySQL and a server.

Representation of data

Here we turn to CKAN and DKAN, tools used for data management on the web. Their advantage is that both are open-source, free, open-access platforms; the main difference is that DKAN is a version of CKAN developed on Drupal [69].

Image 46

Image credit: Snapshot of DKAN, 2021, authorship of the National Health Council: https://getdkan.org/

But what is Drupal? Drupal is a flexible content management system (CMS) that supports many services, such as publishing opinion polls, forums, articles, and images [70]. It is likewise a dynamic system that allows data to be stored in a database and edited in a web environment.

Image 47

Image credit: Snapshot of Drupal, 2021, Authorship of Drupal: https://www.drupal.org/

As described above, CKAN and DKAN are similar, but DKAN is a priori the superior version, so we consider the DKAN platform the more interesting choice; it should also be noted that CKAN has demanding hardware requirements and inefficient management of security for users and resources. We consider the use of one of these platforms important for offering the user a clearer view of the data, as well as full transparency; for this reason it is the kind of platform used by governments [71], such as Australia's or Canada's, in addition to non-profit institutions.

8.-Basic codification of the framework in SQL, Python and Java

8.1.-SQL coding

To create the database we use SQL code. Naturally, we need UTF-8, an encoding format for Unicode and ISO 10646 characters [72]; we use it so that words stored in the database containing special characters, such as the letter “Ñ” in the word “sueño”, are handled correctly. Note also that the table TDevice (the last one in the code) holds the foreign keys, being the junction point of the rest of the tables.

Image 48

Image credit: SQL coding by Diana Díaz, 2021, Mangosta (https://mangosta.org/iii_codificacion-sql/) (.) Access to code for professors and students: https://mangosta.org/codificacion-sql001/

Image 49

Image credit: SQL coding by Diana Díaz, 2021, Mangosta (https://mangosta.org/iv_codificacion-sql/) (.) Access to code for professors and students: https://mangosta.org/codificacion-sql002/

8.2.-Java coding

In this section we add the Java code [73] with which we have accessed the database named “security”, as well as the result of running a query in SQL. Firstly, we import the java.sql library to use the database from Eclipse:

Image 50

Image credit: Java coding by Kimberly Riveros, 2021, Mangosta (https://mangosta.org/viii_codificacion-java/) (.) Access to code for professors and students: https://mangosta.org/codificacion-java/

Moreover, it is necessary to ensure that the JDBC driver is correctly attached. Next, we make the connection to the database with these instructions:

Image 51

Image credit: Java coding by Kimberly Riveros, 2021, Mangosta (https://mangosta.org/ix_codificacion-java/) (.) Access to code for professors and students: https://mangosta.org/codificacion-java001/

The DriverManager class receives a String with the path of the database. For this we use the JDBC connector for MySQL installed in the Java diagramming section. We test it against the database server both on localhost and in the “cloud”, changing the database path accordingly.

Image 52

Image credit: Java coding by Kimberly Riveros, 2021, Mangosta (https://mangosta.org/x_codificacion-java/) (.) Access to code for professors and students: https://mangosta.org/codificacion-java-002/

By default, the MySQL server listens on port 3306, and we must connect to this port to query the database. Next, we create the “Statement” object as follows:

Image 53

Image credit: Java coding by Kimberly Riveros, 2021, Mangosta (https://mangosta.org/xi_codificacion-java/) (.) Access to code for professors and students: https://mangosta.org/codificacion-java-003/

The “ResultSet” object then lets us obtain the results of the query; in our case, it returns all the records of the requested table, TDevice:

Image 54

Image credit: Java coding by Kimberly Riveros, 2021, Mangosta (https://mangosta.org/xii_codificacion-java/) (.) Access to code for professors and students: https://mangosta.org/codificacion-java-004/

Now we use a “while” loop over the “myResulset” variable to retrieve all the results of the query.

Image 55

Image credit: Java coding by Kimberly Riveros, 2021, Mangosta (https://mangosta.org/xiii_codificacion-java/) (.) Access to code for professors and students: https://mangosta.org/codificacion-java-005/

If a connection to the database cannot be established, running the program in Eclipse will print “Does not work” on the screen. After all these steps are completed, we obtain the table with the records we have queried:

Image 56

Image credit: Java coding by Kimberly Riveros, 2021, Mangosta (https://mangosta.org/codificacion-java-2/)

8.3.-Python coding

In this section, we include the Python code that we used to access the database named "security", as well as the result of running a query in SQL. First, we install the connector mysql-connector-python, a driver for communicating with MySQL servers [74].

Image 57

Image credit: Python coding by Kimberly Riveros and Diana Díaz, 2021, Mangosta (https://mangosta.org/i_codificacion-python/). Access to the code for professors and students: https://mangosta.org/ii_codificacion-python/

Then, we do the same with PyMySQL, another package for interacting with MySQL databases. Once installed, we import it:

Image 58

Image credit: Python coding by Kimberly Riveros and Diana Díaz, 2021, Mangosta (https://mangosta.org/iii_codificacion-python/). Access to the code for professors and students: https://mangosta.org/iv_codificacion-python/

Image 59

Image credit: Python coding by Kimberly Riveros and Diana Díaz, 2021, Mangosta (https://mangosta.org/x_codificacion-python-2/). Access to the code for professors and students: https://mangosta.org/iv_codificacion-python/

Once all the packages are installed, we connect to the database. To do so, we create a variable called "myConnection" and, using PyMySQL, we store the host of the database, the user, the password and the database name, that is, the information required to establish the connection.

Image 60

Image credit: Python coding by Kimberly Riveros and Diana Díaz, 2021, Mangosta (https://mangosta.org/xi_codificacion-python/). Access to the code for professors and students: https://mangosta.org/v_codificacion-python/

The next step is to obtain a cursor from this connection and execute on it the query we want to run.

Image 61

Image credit: Python coding by Kimberly Riveros and Diana Díaz, 2021, Mangosta (https://mangosta.org/xii_codificacion-python/). Access to the code for professors and students: https://mangosta.org/vi_codificacion-python/


Image 62

Image credit: Python coding by Kimberly Riveros and Diana Díaz, 2021, Mangosta (https://mangosta.org/xiii_codificacion-python/). Access to the code for professors and students: https://mangosta.org/vii_codigo-python/

To conclude, we print the result of the query on screen. When we finish, we close the connection.

Image 63

Image credit: Python coding by Kimberly Riveros and Diana Díaz, 2021, Mangosta (https://mangosta.org/xiv_codificacion-python/). Access to the code for professors and students: https://mangosta.org/viii_codificacion_python/

Image 64

Image credit: Python coding by Kimberly Riveros and Diana Díaz, 2021, Mangosta (https://mangosta.org/x_codificacion-python/)

Image 65

Image credit: Python coding by Kimberly Riveros and Diana Díaz, 2021, Mangosta (https://mangosta.org/xv_codificacion-python/). Access to the code for professors and students: https://mangosta.org/ix_codificacion-python/
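The whole flow described above (connect, open a cursor, execute the query on TDevice, print the rows, close the connection) can be condensed into a short sketch. Note that the host, credentials and the helper `format_rows` are illustrative assumptions, not the original classroom code.

```python
def format_rows(rows):
    """Render the rows returned by a query as printable lines."""
    return [" | ".join(str(col) for col in row) for row in rows]

def query_tdevice(host="localhost", user="user", password="password",
                  database="security"):
    # PyMySQL is imported lazily so format_rows stays usable even
    # where no MySQL driver is installed.
    import pymysql
    my_connection = pymysql.connect(host=host, user=user,
                                    password=password, database=database)
    try:
        with my_connection.cursor() as cursor:
            cursor.execute("SELECT * FROM TDevice")
            for line in format_rows(cursor.fetchall()):
                print(line)
    finally:
        my_connection.close()

# Usage (requires a reachable MySQL server listening on port 3306):
#   query_tdevice(host="localhost", user="user", password="secret")
```

Closing the connection in a `finally` block guarantees it is released even if the query fails, which matches the walkthrough's last step.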


9.-Interfaces

They are used for the front end [75] of the application (UI/UX) [76], the interactive layer through which the user interacts with the application and through which all input is produced. The usability and design of the interface in the different modes of operation M1, M2 and M3 (defined further on, in 10.2) should allow parental and management control of a sufficient number of devices. The design of the interface is a key element for a solvent and practical use of the framework.

9.1.-Kivy

Kivy [77] is technically just another Python library, but not just any library: it is a framework for building user interfaces that runs on many platforms, such as Android, iOS, Windows, Linux and macOS. It can also be viewed as an alternative to React Native [78], Flutter [79], Solar2D and others [80].

Fundamentally, Kivy is aimed at Python and Machine Learning enthusiasts. With the Kivy library, the best choice is to use Conda [81]. Installing Kivy with Conda is easy: «conda install kivy -c conda-forge». Later, we can compile an application for iOS or Android from the source code [82] and run it in a simulator to check its appearance and behaviour.
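As an illustration of the kind of interface Kivy enables, here is a minimal app sketch; the app name and label text are hypothetical placeholders, and Kivy itself must be installed (for instance with the conda command above) before it can actually open a window.

```python
def build_demo_app():
    # Kivy is imported inside the function so this module can be
    # imported and inspected even where Kivy is not installed.
    from kivy.app import App
    from kivy.uix.label import Label

    class FrameworkDemoApp(App):  # hypothetical name, for illustration only
        def build(self):
            # Root widget of the UI: a single label standing in for the
            # framework's real M1/M2/M3 screens.
            return Label(text="Intrusion-detection framework UI")

    return FrameworkDemoApp()

# Usage (opens a window; requires Kivy and a display):
#   build_demo_app().run()
```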

9.2.-Kotlin

Kotlin [83] is a statically typed programming language [84] that runs on the Java Virtual Machine [85] (JVM) and can also be compiled [86] to JavaScript source code. Although its syntax is not compatible with Java, Kotlin is designed to interoperate with Java code, and it is well known for its use in building Android applications.

Below are specific guidelines for coding and developing the graphical and communication environments between this framework and its users. They are assumptions that developers should keep in mind from the beginning, to avoid harmful technical modifications to the final implemented environments later on. The framework should never lose sight of its main objective: providing maximum protection and encouraging social plurality.

  1. Conceptual criteria, the use of gender equality [87] and accessibility [88], as well as an emphasis on avoiding every type of bias [89], should be clearly stated.

  2. The language, particularly that oriented to minors, should be carefully studied for appropriate use.

  3. Regulatory compliance with data protection [90] and other obligations derived from the application should be an ever-present imperative.

  4. As far as possible, sufficient educational information about cybersecurity and artificial intelligence must be provided.

III.-Stages and modes applicable to the framework and minimum requirement guidelines

The characterization of this framework in stages and modes is proposed in order to define the systems infrastructure and ensure an appropriate level of procedure. Beyond this, execution speed on the different smart devices is an important feature, as is the generalized description of all the instruments and associated tools.

10.1-Stages

10.1.1-Stage 1 (S1): laboratory mode and AI learning

It is in this stage that configurations, both basic and advanced settings, can be made. The different settings of frameworks and models help us to optimize and improve the result; this result and its associated experience will subsequently be applied in stage S2, the AI execution mode.

To train the artificial intelligence, we can start from scratch or transfer knowledge from a previously trained model. We can propose models specific to images, since ResNet offers very acceptable general results in the proportion of successes in a classification, i.e. accuracy [91], the success rate of the prediction. Furthermore, we can use different frameworks, of which the best known are PyTorch, TensorFlow, fastai and Keras. fastai is our framework.
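The transfer-learning route described above can be sketched with the fastai high-level API. The folder layout, epoch count and metric choice below are assumptions for illustration, and fastai must be installed for the sketch to run.

```python
def train_intrusion_classifier(data_path, epochs=3):
    # fastai is imported lazily so this sketch can sit alongside code
    # that does not need the training stack installed.
    from fastai.vision.all import (ImageDataLoaders, vision_learner,
                                   resnet34, accuracy)

    # Images are assumed to be organised in one folder per class,
    # with 20% held out for validation.
    dls = ImageDataLoaders.from_folder(data_path, valid_pct=0.2)

    # Start from a ResNet34 pretrained on ImageNet (knowledge transfer)
    # instead of training from scratch, then fine-tune on our images.
    learn = vision_learner(dls, resnet34, metrics=accuracy)
    learn.fine_tune(epochs)
    return learn

# Usage (stage S1, mode M3), with a hypothetical dataset folder:
#   learn = train_intrusion_classifier("datasets/intrusion_images")
```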

10.1.2-Stage 2 (S2): AI execution mode

It is the result of stage S1: a model trained and saved for execution. It should not admit any advanced settings.

10.2-Modes

The modes are the operating regimes that can be implemented with this framework on the smart devices to which it is applicable.

10.2.1-Mode 1 (M1): standalone mode

It operates in stage S2. It is defined so that it can be downloaded and run like any other app on the market [92]. Mainly, it runs on smartphones, tablets and PCs. Full compatibility of mode M1 is recommended for Android 11 or later; iOS 14.5.1 or later; Windows 10 or later; or Mac OS X 10.15 or later. The minimum hardware requirements are those recommended by the respective manufacturers.

10.2.2-Mode 2 (M2): connected or slaved mode from another device

It is conceived for connecting two devices with compatible technologies, for example Bluetooth, WiFi or others, by using an app or complementary satellite software for this purpose. Most current smartwatches, referenced in section 11.1, fall into this category. The minimum hardware requirements for deploying the framework in mode M2 are 4 GB of internal storage, 512 MB of RAM and Bluetooth 4.0.

10.2.3- Mode 3 (M3): server mode or training and learning laboratory

It is mainly conceived for stage S1. Generally, mode M3 is used for the design and management of the AI, and it is the mode that embeds and manages mode M1 and mode M2 of all the devices related to it. It is based on a client-server infrastructure [93]. According to where it executes, two classes are established:

A) Mode 3A: computation in the “cloud”

Analogous to the system we have proposed in the laboratory, it can also be implemented on an ad hoc [94] infrastructure owned by a provider of the framework, assuming the inherent costs of its usability and deployability. Amazon's AWS, Google Cloud [95], Azure [96] and others are possible candidates for this purpose. To complete the environment, a XAMPP server [97] or similar has to be deployed on a hosting service [98], adapting services for Java Servlets [99] to achieve full operability. The hardware requirements to be considered are, at least, what we used in our free laboratory setup, in order to guarantee a satisfactory execution of the AI, together with those indicated by the publishers of the associated software.

B) Mode 3B: local computation

To work in this mode, we can do a local installation [100] with fastai and Anaconda or Miniconda under Linux or Windows. Furthermore, for managing the database itself, it is necessary to install MySQL or similar software, and the Java JDK [101] for developing and executing the Java code.

For this mode 3B the hardware has to be more specific and demanding, although for an optimal production environment it should not exceed 2,500 to 3,000 euros at current market prices. This mode has to be capable of executing software similar to Anaconda Enterprise 4 or 5 [102] (requirements: CPU: 2 x 64-bit 2.8 GHz 8.00 GT/s CPUs; RAM: 32 GB, or 16 GB of 1600 MHz DDR3 RAM; 300 GB of disk storage; and Internet access). 64 GB of RAM is advised, as well as a hard disk with characteristics similar to or better than Samsung's 970 EVO Plus (1 TB), for the whole set that mode 3B entails.

The GPU [103] is the component that will take on most of the slow, large calculations in the process of training a convolutional network [104]. Mainly, these calculations are matrix multiplications, convolutions and activation functions [105]. The hardware provided by NVIDIA, for instance the 2080 Ti model [106] with 13.45 TFLOPS and 11 GB of GDDR6 memory, is a solid base. For the CPU, Intel is an unavoidable option; from the i7 family, the i7-10700K can be a reliable choice.
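To see why the GPU carries the bulk of this load, a back-of-the-envelope count of the operations in a single convolutional layer is useful. The layer sizes below are illustrative, typical of a ResNet stage, counting 2 FLOPs per multiply-add:

```python
def conv2d_flops(h_out, w_out, c_in, c_out, kernel):
    """FLOPs of one 2-D convolution: every output activation needs
    c_in * kernel * kernel multiply-adds, i.e. 2 FLOPs each."""
    return 2 * h_out * w_out * c_out * c_in * kernel * kernel

# One 3x3 convolution, 64 -> 64 channels, on a 56x56 feature map:
flops = conv2d_flops(56, 56, 64, 64, 3)
print(f"{flops / 1e9:.2f} GFLOPs for a single layer")  # ~0.23 GFLOPs
```

Stacking dozens of such layers, over thousands of training images per epoch, quickly reaches the teraFLOP regime that the GPU figures quoted above sustain.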

IV.-Range of application

The environments in which the framework can be implemented will each generate their own specifications. Mainly, they are characterized by the hardware, by the layer of the OSI model [107] in which they operate, by their operating system [108], and by their connectivity to a defined network and to the Internet, apart from the connection between devices. Particular attention needs to be paid to the framework's execution speed, so the procedures applied should be only the necessary ones and as light as possible, achieving the best attainable speed, all within the best balance of consumption and performance and taking into account the capacities and morphology of the place where it will be used. For some of these environments we give descriptions and proposals of software and hardware, noting that they are not essential or exclusive for the final implementation.

11.1-Smartwatches

A smartwatch can represent a lifestyle, even a concept of freedom; it is like a plastic watch housing that carries and camouflages digital technology. Moreover, it is worn by a person like any everyday garment, a wearable (wearable technology [109]). "The information that moves with you" was the motto of Android Wear [110] in 2014. And, attached to the hand, or rather the wrist, it can pose a serious privacy problem.

A smartwatch has nuances that make a difference, for example its design, its size, its user interface [111], whether or not it is paired to a smartphone via Bluetooth, whether the smartwatch has a SIM card [112], whether it can be managed as just another device on the (WiFi) network [113], whether it is really a smartwatch or a fitness band [114]; or all of these at once, or some of them.

A conglomeration of hardware, features and market prices is associated with smartwatches and will determine elements such as the screen (acoustic-wave [115], resistive or capacitive touch [116]; and panel: LCD [117], IPS [118], AMOLED [119] or OLED [120]), the incorporated sensors, the RAM, the internal storage, the microprocessor (own fabrication, Qualcomm, ARM, MediaTek…), the battery life, whether it incorporates audio and a photographic camera, and communication and proximity technologies such as NFC [121], WiFi or RFID [122], and so on.

We can find them from minimum prices of (≈, <) 20 € on the Chinese market on AliExpress (Tipmant Smartwatch) or on Amazon, up to prices that can reach 2,000 € (TAG HEUER CONNECTED GOLF EDITION) [123], or even more.

Looking ahead for this kind of device, the main difference could be verifying whether they replace the smartphone: whether a smartwatch can make and receive calls (using LTE bands [124], for instance, related to the democratization of 5G [125]) and join the Internet of Things (IoT) [126].

Normally, they run a reduced version of an operating system such as Android, iOS or Linux; Wear OS [127], watchOS [128] and Tizen OS [129] are the most common. This digital technology is the platform where the framework can be executed in a reduced OS environment.

11.1.1-Hardware of some smartwatches

Table 2

| Device | Processor | Memory | Communications | Screen | Camera | Battery |
| Samsung Gear 2 | Dual-core 1 GHz Exynos 3250 | 4 GB internal, 512 MB RAM | Bluetooth 4.0 LE | 1.63" Super AMOLED, 320x320 | 2.0 MP | 300 mAh |
| Samsung Gear S | Qualcomm Snapdragon 400 at 1 GHz | 4 GB internal, 512 MB RAM | Bluetooth 4.0, Wi-Fi, 3G | 2" AMOLED, 360x480 | | 300 mAh |
| Samsung Gear S2 (OS: Tizen) | Exynos 3250 | 4 GB internal, 512 MB RAM | Bluetooth 4.1, Wi-Fi, NFC | sAMOLED, 360x360 (302 ppi) | | 250 mAh |
| Moto 360 (OS: Wear OS by Google™) | Single-core 1 GHz TI OMAP 3 / Qualcomm® Snapdragon™ Wear 3100 | 4 GB internal, 512 MB RAM / 8 GB internal, 1 GB RAM | Bluetooth 4.0 LE / Bluetooth 4.2, Wi-Fi b/g/n, NFC, GPS/GLONASS/Beidou/Galileo | 1.56" backlit LCD, 320x290 (205 ppi) / 1.2" circular AMOLED, 390x390 | | 320 mAh / 355 mAh |
| LG G Watch (OS: Android 4.3 onward) | Qualcomm Snapdragon 400 at 1.2 GHz | 4 GB internal, 512 MB RAM | Bluetooth 4.0 | 1.65" IPS LCD | | 400 mAh |
| LG G Watch R | Qualcomm Snapdragon 400 at 1.2 GHz | 4 GB internal, 512 MB RAM | Bluetooth 4.0 | 1.3" P-OLED, 320x320 | | 410 mAh |
| Sony SmartWatch 3 | Quad-core ARM Cortex-A7 at 1.2 GHz | 4 GB internal, 512 MB RAM | Bluetooth 4.0, NFC | 1.6", 320x320 | | 420 mAh |
| Asus ZenWatch | Qualcomm Snapdragon 400 at 1.2 GHz | 4 GB internal, 512 MB RAM | Bluetooth 4.0 | 1.63" capacitive AMOLED touchscreen, 320x320 | | 369 mAh |
| Apple Watch | Apple S1 | 8 GB internal, 512 MB RAM | iPhone's Wi-Fi and GPS | Retina with Force Touch | | Up to 18 hours of autonomy |
| Apple Watch Series 6 (requires iOS 14 or later) | S6 64-bit dual-core | 32 GB internal, 1 GB RAM | LTE and UMTS, Wi-Fi, Bluetooth 5.0 | Retina OLED LTPO | | Up to 18 hours |

Credit, table 2: updated and improved version of «Hardware characteristics» [130]. Smartwatch. (2021, 15 February). Wikipedia, The Free Encyclopaedia. Date consulted: 13:44, July 18, 2021, from https://es.wikipedia.org/w/index.php?title=Reloj_inteligente&oldid=133248752

The framework for any smartwatch should assume that the recommended working stage is S2 (AI execution) and the mode is M2 (connected). Dynamically, in accordance with the evolution of the technology associated with smartwatches, mode M1 (standalone) can also become a valid option in the medium term.

11.2.1-Smartphones, tablets and PCs

Currently, these devices have evolved soundly at the high end of the sector. They are reliable and in general sufficient, with high technological performance and a low associated cost. Tablets, though, have not secured a definitive place on the market, according to some associated metrics (Image 66): smartphones and PCs are their rivals and, in the trade-off between technology and price, tablets are probably losing out. Nowadays, are tablets more a "useful fad" than a real necessity for the user? [131] However, some manufacturers such as Microsoft tend to unify operating systems, as with Windows 10 and 11 [132], so that integration is assured across these three levels or families of devices. Everything is scalable and transversal. An all-in-one.

The framework for any of these elements, smartphone, tablet or PC, should assume that the recommended working stage is S2 (AI execution) and the mode is M1 (standalone).

Image 66

Image credit: «Internet trends 2021. Statistics and facts by country». VPNMentor. URL: https://es.vpnmentor.com/blog/tendencias-de-internet-estadisticas-y-datos-en-los-estados-unidos-y-el-mundo/, Mangosta (https://mangosta.org/tendencias-2/)

11.2.2.-Automation

Talking about home automation is talking about its types of connections for its total and real application, in conjunction with 5G [125] and the IoT [126], and a structure, more than probably, based on an «intelligent bus» [133].

Image 67

All sensors and actuators are connected with a bus cable; the whole system is defined as a "bus system". Image credit: «KNX Basics: basic knowledge of the KNX standard». V9-14. KNX.org. Accessed 22/07/2021. URL: https://www.knx.org/wAssets/docs/downloads/Marketing/Flyers/KNX-Basics/KNX-Basics_es.pdf, Mangosta (https://mangosta.org/conocimientos-basicos-2/)

Currently, the whole space of wireless communication technologies coexists: Bluetooth, WiFi, RFID, NFC and others, although the connection morphologies for home automation still await a decision from the industry and the associated standards, which compete among themselves defending different interests.

One line of work, for example Zigbee, pursues «secure communications with a low data rate and maximization of the useful life of its batteries» [134]; Z-Wave [135] is «a mesh network that uses low-energy radio waves to communicate from one device to another, allowing wireless control of home appliances and other devices, such as lighting control, security systems, thermostats, windows, locks, swimming pools and garage doors»; and Sigfox [136] is «a global network operator and developer of the 0G network, founded in 2009, which deploys wireless networks to connect low-consumption devices such as electricity meters, alarm centres or smartwatches, which need to be continuously on and sending small amounts of data».

In these technologically and extremely tumultuous times, at the level of strategies and disruptions, Bluetooth and WiFi seem to be the most secure and stable bets as standard, safe values of the communication market for the next five years, at both the technological and commercial level, of course. Most current devices implement one (Bluetooth), the other (WiFi), or both, making this technology very popular among users and manufacturers.

The framework for any domotic element (other than a smartwatch, smartphone, tablet or PC) should assume that the recommended working stage is S2 (AI execution), available in both modes M1 (standalone) and M2 (connected).

11.2.2.1-Domotic elements with Android & iOS

Smart-home applications are an evident, undeniable reality, and operating systems such as Android and iOS make a serious contribution to them. Any domestic or wearable device we have described is susceptible of implementing these operating systems. They can and must be protected by the implementation of our framework, which, as stated previously, can be installed on these technologies.

To cite some references: Amazon Alexa, Casa, Smartthings, Google Home, LIFX, Houseinhand, KNX, TaHoma by Somfy, Home Connect App, Smart Life, Philips Hue and so on.

11.2.2.2-Cross-platform JRE (Java Runtime Environment) in domotic devices

The Java Virtual Machine [85] is a cross-platform environment designed to be portable, independent of the systems and environment that surround it.

«The Java Virtual Machine can be implemented in software, hardware, a development tool or a web browser; it reads and executes precompiled bytecode that is independent of the platform. The JVM provides definitions for an instruction set, a register set, the class-file format, a stack, a garbage-collected heap and a memory area. Any implementation of the JVM approved by Sun must be capable of executing any class that complies with the specification.» [85]

This platform independence makes the JVM an objective for domotic development in the implementation of our Smart Domotic Intrusions Detector.

11.2.2.3-Description of the domotic standard KNX as a regulatory framework for domotic devices

The KNX protocol standard [4] is based on the concept of an intelligent bus [133], best understood as a «central neuron» that communicates all the elements of a dwelling. This kind of device connection is undoubtedly conducive to a greater application of AI techniques. ISO standardization (ISO/IEC 14543) [137] governs this typology. Both the capacity for applying AI techniques and the standardization constitute an ideal platform for our Smart Domotic Intrusions Detector, for the implementation and application of the framework defined in the present research work.

These elements should be protected by the Smart Domotic Intrusions Detector: light switches • light-control keypads • movement detectors • presence detectors • window and door contacts • entrance doorbells • water, gas, electricity and heat meters • surge sensors • external and internal ambient temperature sensors • temperature sensors in hot-water and heating circuits • modules to preset the target temperature in rooms • internal and external light sensors • wind sensors to control blinds and awnings • state or failure indicators in white goods • leakage sensors • level sensors • radio-frequency receivers in door latches • infrared receivers for remote controls • fingerprint or electronic-card readers for access control [138]… and more, besides actuators and modules.

All of them have to be protected, and the framework, implemented in an intelligent IDS, needs to be applied here.

Image 68

KNXnet/IP in the OSI reference model. Image credit: «KNX Basics: basic knowledge of the KNX standard». V9-14. KNX.org. Accessed 22/07/2021. URL: https://www.knx.org/wAssets/docs/downloads/Marketing/Flyers/KNX-Basics/KNX-Basics_es.pdf, Mangosta (https://mangosta.org/estandar-knx/)

All the tests carried out in the laboratory provide enough knowledge to infer that the framework can be embedded in a microcontroller of the PIC18F family [139], a PIC18F4550 [140] or similar, within the KNX systematics [141], supported by a specific design of electronic circuitry implementing the required combinational logic.

Image 69

Installation on a protoboard of The Penguin System. Julio F. De la Cruz G. Accessed 22/07/2021. URL: http://3.bp.blogspot.com/-JtZH7qb-j-w/UhLyWQdHLjI/AAAAAAAAAjQ/cjoKPWOIxsw/s1600/image7923.png, Mangosta (https://mangosta.org/montaje-en-protoboard/)

The design of this domotic device is conceived as an appliance [142] that has at least the hardware requirements of a smartwatch, embedding the working stage and mode inherent to it, as previously defined, by using encapsulated software or firmware [143].

Its working philosophy should be based on plug-and-play [144], so that it can be one more element of protection, hosted among the devices of the electrical panel of any house, which normally hosts an ICP [145], a differential switch [146] and circuit breakers [147].

Therefore, the IDS domotic device is conceived as an additional element of security and protection at home or in a work environment which will be implemented by the KNX standard[4].

V.-Conclusion

It has been demonstrated that the use of AI, in a large part of its dimensions, can be carried out with academic rigour, defined in the field of mathematics and usable in the laboratory. Moreover, the use of AI can be thought, reasoned and applied.

The framework is motivated by the continuous appearance and improvement of mobile devices in the shape of smartwatches, smartphones and other similar devices, which has simultaneously favoured a growing and unfair interest in putting their users under the control of applications. Users are prey to abusive interference in their lives: their Internet habits are measured and managed telematically, and their personal data are stolen, for instance when they do exercise that requires the measurement of vital signs such as heartbeat, arterial pressure and oxygen level. The industry that artfully produces and manages these devices is increasingly channelling financial resources, as well as sciences such as artificial intelligence (AI), data, and flow-management strategies, towards a final result pointing to the manipulation, sale and kidnapping of the lives of the people who use them. Perhaps all of this has a protective final owner: an Artificial Intelligence (AI) full of obscurity and hidden internal mechanisms, "black boxes", facing naïve societies like ours, driven along while probably unaware of the final cause and effect for humanity.

12.1.-Corollary I

We can present this framework as a space for generating frameworks, analogous to a vector space that generates vector subspaces [148], with $\vec SV_{intrusión}= [\vec P x _{(1...n)}, \vec Img_{y_{i}}]_0^{3^{(n)}}$ as its homologous generator system [149], in favour of the transparency and explainability of technologies such as AI. In it, everything analogic, ambiguous, hard to define, broad in spectrum and thought, and overwhelming in dimensions can be scanned down to its most fundamental parameters, as this framework postulates, demonstrates and defends, in our view. Those initial parameters are like what a child could draw of an object unknown to him: a squiggle, a symbol [150] or primary idea from childhood. As he learns and improves "his parameters" of personal definition and values, the definition becomes more careful, concrete and adapted to his own reality. The same occurs in this framework, in its foundations, so that artificial intelligence can be explainable, transparent and within human control, because reverse engineering can always return it to the world of homo sapiens [151], our real world with its imperfections and realities.

12.2.-Corollary II

By inference from Corollary I, another important worry about artificial intelligence, algorithmic bias [152] (gender, cultural, racial, individual disabilities, thoughts and ideologies…), can be mitigated, trying to achieve a society more equitable in its values, together with its human technological achievements.

12.3.-Corollary III

By inference from Corollaries I and II, Artificial Intelligence (AI) can be humanly educated to attain a higher good for human beings.

Why and how?

It can be educated because we can be selective about the parameters we want to skew (like the breed of a thoroughbred animal). With them, it will be possible to train the AI to think and decide along that determined line of reasoning, in an almost genetic form, because the parameters amount to a quasi-genetic laboratory synthesis of the predefined AI's thinking, taking into account what we want to include or exclude during the training period.

All human behaviour, abstract thinking or entelechy can be scanned (scribbled) by this framework, according to the following algebraic expression:

$(\hspace{0.1cm} \Sigma _{j=1} ^{j=n}\hspace{0.1cm} (\vec Px_n-\vec Px_{(n-1)}))-(\vec Px_1-\vec Px_n)=0$.
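As a numeric illustration (the summation indices are reproduced above as printed; this sketch assumes the sum telescopes over consecutive parameter vectors and that the chain closes on itself, as a scribble does, so that the last vector equals the first):

```python
def chain_residual(points):
    """Left-hand side of the identity for 2-D parameter vectors:
    (sum of consecutive differences) minus (Px_1 - Px_n)."""
    sx = sum(points[j][0] - points[j - 1][0] for j in range(1, len(points)))
    sy = sum(points[j][1] - points[j - 1][1] for j in range(1, len(points)))
    first, last = points[0], points[-1]
    return (sx - (first[0] - last[0]), sy - (first[1] - last[1]))

# A closed "scribble": the chain of parameter vectors returns to its start.
square = [(0, 0), (2, 0), (2, 2), (0, 2), (0, 0)]
print(chain_residual(square))  # (0, 0): the identity holds for a closed chain
```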

Contrary to the main objective of this framework, this same capacity, carried to the most negative and cruel extremes for humanity, can generate dangerous AI: something that the national socialist movements in Germany [153] and the theory of the «superman», the Übermensch [154], also illustrate. Every science always has two faces: a proper use and a bad use for humanity.

12.4.-Corollary IV

  • Theorem of n-dimensional existentialism

Given a framework such as the one described here, the whole universe of n-dimensional possibilities can be reflected in graphical planes that represent it at a specific instant T1 for a defined and observable environment. (An image that tells more than a thousand words…)

Demonstration

The graphics of this framework are completely located in the linear map $f: \mathbb N \to \mathbb R^2$… But:

«[…] Taking the coordinates of each vector and its algebraic nature, the real dimension would be:

1) The group of vectors $\vec P x _{(1...n)}$ {Dim1} $\mathbb N$ = 1.

2) The group of vectors $\vec Img_{y_{i}}$ {Dim2} $\mathbb R$ = 2.

3) The group of vectors $\vec P_x$ {Dim3} $\mathbb N$ = n.

4) The group of vectors $\vec Img_y$ {Dim4} $\mathbb R$ = n. […]».

From this it is inferred that $\vec Img_y$ {Dim4} $\mathbb R$, being n-dimensional, can also be a theoretical universe of possibilities similar to ours and, following the exposed model, unlimited. There, beyond $\vec R^3$, our existence makes us shiver with doubt and scepticism. However, it is resolved in graphical planes (as our own existence is in souvenir photos) by means of $\vec Img_{y_{i}}$ {Dim2} $\mathbb R$ = 2, which renders it visual and existential: earthly before our eyes. Where appropriate, compatible with «Heisenberg's Uncertainty Principle» [155].

(QED: Quod erat demonstrandum).

If we have learned anything from the history of science, it is that while the metaphysics of the Greeks [156] and other thinkers cultivated ideas and qualitative abstractions [157], scientists and mathematicians such as Galileo Galilei [158], Isaac Newton [159], Laplace [160], Leibniz [160] and other «giants» took it to the field of the quantitative [161], the infinitesimally discrete [162] and measurable; to the observable mathematical existentialism [163] of Blaise Pascal.

We have also learned, with surprise, that a gifted, curious and awakened mind can suppose and intuit physical arguments and universal mathematics without needing all the information of the dataset of which an entity is composed. So did Newton in his Philosophiæ Naturalis Principia Mathematica [164]; he gave birth to differential calculus and glimpsed the mathematical universe into the infinite, following the path of his professor, Isaac Barrow [165].

Professors and students are the hope of all human science: the way to learn from errors and the way forward into the future. They will also train and educate our imperishable descendants, the robotic minds that we define as «Artificial Intelligence (AI)», ensuring a humanly technological tomorrow rather than, on the contrary, a technologically human one, which would be a serious problem.

VI.-Auxiliary systems and services implemented

NextCloud and GitLab, for internal and external use in this publication.

VII.-Declaration of conflict of interests

The authors declare that they have no conflict of interest as of the date of this release. This publication is not subsidized by any project that could provide financing, nor supported or sponsored by any brand or similar. Each author represents only themselves and acts independently. Moreover, the future intention of launching a software product is declared at this URL: mangosta.org. At the same time, this domain and its web hosting serve as tools of this publication to overcome the limited techniques of this platform.

VIII.-Statement of the research team that carried out this project: professors and students

The scientific reality about cybersecurity and AI described here, palpable and auditable in a document, was reached thanks to research and education. Because science, we believe, should not be a business for anything other than society and the benefit of humanity, this is a sign of the greatest praise and respect for all the scientists who made it possible to reach this point of knowledge: those minds and shoulders of «giant geniuses» onto which we climb, and on which we can all walk much faster.

Juan Antonio Lloret Egea, on behalf of the entire research team.

Keywords: #cienciaabierta, #openscience, #investigación, #research, #IA, #AI, #inteligenciaartificial, #artificialintelligence, #IDS, #ciberseguridad, #cybersecurity, #español, #educación, #education, #enseñanza, #skills

License: Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)

IX.- Acknowledgments

  • IES La Arboleda, Alcorcón, Madrid (http://www.laarboleda.es/), for contributing professors from the IT department, the students of the first year of Development of Multiplatform Applications and the students of the second year of Microcomputer Systems and Networks, together with the support given by the institute's management team. The resources they provided are listed at: https://arboledalan.net/ciberseguridad/

  • Regional Library of Murcia, for supporting the dissemination of the activity: https://bibliotecaregional.carm.es/agenda/presentacion-biblia-de-la-ia-ai-bible-publicacion-sobre-inteligencia-artificial/

  • Posthumously, to Sir Isaac Newton, for teaching us how to look and educating us in the way of doing it: «I do not know what I may appear to the world; but to myself I seem to have been only like a boy playing on the sea-shore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me» [166].
