# Outliers Learn

#### 2024-06-05

The Outliers Learn R package allows users to learn how outlier detection algorithms work.

1. The package includes the main functions that implement each algorithm
2. The package also includes auxiliary functions that are used by the main functions and can also be called separately
3. The main functions include a tutorial mode parameter that lets the user choose whether to see a description of the algorithm and a step-by-step explanation of how it works.

## Datasets

Most of the following examples use the same dataset, declared as inputData:

inputData = t(matrix(c(3,2,3.5,12,4.7,4.1,5.2,4.9,7.1,6.1,6.2,5.2,14,5.3),2,7,dimnames=list(c("r","d"))));
inputData = data.frame(inputData);
print(inputData);
#>      r    d
#> 1  3.0  2.0
#> 2  3.5 12.0
#> 3  4.7  4.1
#> 4  5.2  4.9
#> 5  7.1  6.1
#> 6  6.2  5.2
#> 7 14.0  5.3

As can be seen, this is a two-dimensional matrix (data.frame) with 7 rows. It can be visualized like this:

plot(inputData);

With that covered, the following section is dedicated to how to execute the auxiliary functions.

## Auxiliary functions

This section shows how to call the auxiliary functions of the Outliers Learn R package. These include:

• Distance functions
  • euclidean_distance()
  • mahalanobis_distance()
  • manhattan_dist()
• Statistical functions
  • mean_outliersLearn()
  • sd_outliersLearn()
  • quantile_outliersLearn()
• Data-transforming functions
  • transform_to_vector()

First, the distance functions:

• Euclidean Distance (euclidean_distance())
point1 = inputData[1,];
point2 = inputData[4,];
distance = euclidean_distance(point1, point2);
print(distance);
#> [1] 3.640055
• Mahalanobis Distance (mahalanobis_distance())
inputDataMatrix = as.matrix(inputData); #Required conversion for this function
sampleMeans = c();
#Calculate the mean for each column
for(i in 1:ncol(inputDataMatrix)){
column = inputDataMatrix[,i];
calculatedMean = sum(column)/length(column);
sampleMeans = c(sampleMeans, calculatedMean);
}
#Calculate the covariance matrix
covariance_matrix = cov(inputDataMatrix);

distance = mahalanobis_distance(inputDataMatrix[3,], sampleMeans, covariance_matrix);
print(distance)
#> [1] 0.6774662
• Manhattan Distance (manhattan_dist())
distance = manhattan_dist(c(1,2), c(3,4));
print(distance);
#> [1] 4
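These results can be cross-checked against base R. The sketch below is an illustration, not the package implementation: it recomputes the Euclidean and Manhattan distances from their formulas, and compares the Mahalanobis result against stats::mahalanobis(), which returns the squared distance (so its square root should match the value above).

```r
# Rebuild the example dataset (same as inputData above)
inputData = data.frame(t(matrix(c(3,2,3.5,12,4.7,4.1,5.2,4.9,7.1,6.1,6.2,5.2,14,5.3),
                                2, 7, dimnames = list(c("r","d")))));

# Euclidean distance between rows 1 and 4, directly from the formula
p1 = as.numeric(inputData[1,]); p4 = as.numeric(inputData[4,]);
sqrt(sum((p4 - p1)^2));                        # 3.640055

# Manhattan distance: sum of absolute coordinate differences
sum(abs(c(1,2) - c(3,4)));                     # 4

# stats::mahalanobis() returns the squared distance; its square root
# matches the package's result for row 3
m = as.matrix(inputData);
sqrt(mahalanobis(m[3,], colMeans(m), cov(m))); # 0.6774662
```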

The statistical functions can be used like this:

• Mean (mean_outliersLearn())
mean = mean_outliersLearn(inputData[,1]);
print(mean);
#> [1] 6.242857
• Standard Deviation (sd_outliersLearn())
sd = sd_outliersLearn(inputData[,1], mean);
print(sd);
#> [1] 3.431308
• Quantile (quantile_outliersLearn())
q = quantile_outliersLearn(c(12,2,3,4,1,13), 0.60);
print(q);
#> [1] 4
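Two of these results are worth cross-checking against base R. The standard deviation 3.431308 matches the population formula (dividing by n), whereas base R's sd() divides by n - 1 and gives a larger value; and the quantile result is reproduced by taking element floor(n * v) + 1 of the sorted data. Both checks below are inferences from the output above, not documented behavior.

```r
x = c(3, 3.5, 4.7, 5.2, 7.1, 6.2, 14);   # first column of inputData

mean(x);                                  # 6.242857, same as mean_outliersLearn()

# Population standard deviation (divide by n) reproduces 3.431308 ...
sqrt(sum((x - mean(x))^2) / length(x));   # 3.431308
# ... whereas base R's sd() divides by n - 1 and gives a larger value
sd(x);                                    # larger: ~3.7062

# Quantile rule inferred from the output: element floor(n * v) + 1 of the sorted data
v = sort(c(12, 2, 3, 4, 1, 13));
v[floor(length(v) * 0.60) + 1];           # 4
```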

Finally, the data-transforming function:

• Transform to vector (transform_to_vector())

numeric_data = c(1, 2, 3)
character_data = c("a", "b", "c")
logical_data = c(TRUE, FALSE, TRUE)
factor_data = factor(c("A", "B", "A"))
integer_data = as.integer(c(1, 2, 3))
complex_data = complex(real = c(1, 2, 3), imaginary = c(4, 5, 6))
list_data = list(1, "apple", TRUE)
data_frame_data = data.frame(x = c(1, 2, 3), y = c("a", "b", "c"))

transformed_numeric = transform_to_vector(numeric_data);
print(transformed_numeric);
#> [1] 1 2 3
transformed_character = transform_to_vector(character_data);
print(transformed_character);
#> [1] "a" "b" "c"
transformed_logical = transform_to_vector(logical_data);
print(transformed_logical);
#> [1] 1 0 1
transformed_factor = transform_to_vector(factor_data);
print(transformed_factor);
#> [1] "A" "B" "A"
transformed_integer = transform_to_vector(integer_data);
print(transformed_integer);
#> [1] 1 2 3
transformed_complex = transform_to_vector(complex_data);
print(transformed_complex);
#> [1] 4.123106 5.385165 6.708204
transformed_list = transform_to_vector(list_data);
print(transformed_list);
#> [1] "1"     "apple" "TRUE"
transformed_data_frame = transform_to_vector(data_frame_data);
print(transformed_data_frame);
#>  x1  x2  x3  y1  y2  y3
#> "1" "2" "3" "a" "b" "c"

Now that the auxiliary functions have been covered, the following section details the main outlier detection algorithms implemented in the package.

## Main outlier detection methods

The main outlier detection methods implemented in the Outliers Learn package are:

• boxandwhiskers()
• DBSCAN_method()
• knn()
• lof()
• mahalanobis_method()
• z_score_method()

This section is dedicated to showing how to use these algorithm implementations.

### Box and Whiskers (boxandwhiskers())

With the tutorial mode deactivated and d=2:

boxandwhiskers(inputData,2,FALSE)
#> [1] "Obtained limits: "
#>   d3   r6
#> -0.1 10.4

#> [1] "The value in position 7 with value 14.000 has been detected as an outlier"
#> [1] "It was detected as an outlier because it's value is higher than the top limit 10.400"
#> [1] "--------------------------------------------------------------------------------------------"
#> [1] "The value in position 9 with value 12.000 has been detected as an outlier"
#> [1] "It was detected as an outlier because it's value is higher than the top limit 10.400"
#> [1] "--------------------------------------------------------------------------------------------"

With the tutorial mode activated and d=2:

boxandwhiskers(inputData,2,TRUE)
#> The tutorial mode has been activated for the box and whiskers algorithm (outlier detection)
#> Before processing the data, we must understand the algorithm and the 'theory' behind it.
#> The algorithm is made up with 4 steps:
#>  Step 1: Determine the degree of outlier or distance at which an event is considered an outlier (arbitrary). We will name it 'd'
#>  Step 2: Sort the data and obtain quartiles
#>  Step 3: Calculate the interval limits for outliers using the equation:
#>      (Q_1 - d * (Q_3 - Q_1), Q_3 + d * (Q_3 - Q_1))
#>  Being Q_1 and Q_3 the 1st and 3rd quartile. Notice that here we use the value 'd' (it affects on the results so it must be carefully chosen)
#>  Step 4: Identify outliers as values that fall outside the interval calculated in step 3
#> Quantiles are elements that allow dividing an ordered set of data into equal-sized parts.
#>  -Quartiles: 4 equal parts
#>  -Deciles: 10 equal parts
#>  -Percentiles: 100 equal parts
#> The function quantile.R that has been developed gives a closer look into how quantiles are calculated:
#> function (data, v)
#> {
#>     data = transform_to_vector(data)
#>     data = sort(data)
#>     nc = length(data) * v
#>     if (is.integer(nc)) {
#>         x = (data[nc] + data[nc + 1])/2
#>     }
#>     else {
#>         x = data[floor(nc) + 1]
#>     }
#>     return(x)
#> }
#> Now we will apply this knowledge to the data given to obtain the outliers
#> Calculating the quantiles with the function quantile() (available on this package)
#> First we calculate the 1st quartile (quantile(data,0.25))
#>  d3
#> 4.1
#> Now we calculate the 3rd quartile (quantile(data, 0.75))
#>  r6
#> 6.2
#> Using the formula given before, we obtain the interval limits:
#>   d3   r6
#> -0.1 10.4
#> Now that we have calculated the limits, we will check if every single value is 'inside' those boundaries obtained.
#> If the value is not included inside the limits, it will be detected as an outlier
#> [1] "Checking value in the position 1. It's value is 3.000"
#> [1] "Not an outlier, it's inside the limits"
#> [1] "--------------------------------------------------------------------------------------------"
#> [1] "Checking value in the position 2. It's value is 3.500"
#> [1] "Not an outlier, it's inside the limits"
#> [1] "--------------------------------------------------------------------------------------------"
#> [1] "Checking value in the position 3. It's value is 4.700"
#> [1] "Not an outlier, it's inside the limits"
#> [1] "--------------------------------------------------------------------------------------------"
#> [1] "Checking value in the position 4. It's value is 5.200"
#> [1] "Not an outlier, it's inside the limits"
#> [1] "--------------------------------------------------------------------------------------------"
#> [1] "Checking value in the position 5. It's value is 7.100"
#> [1] "Not an outlier, it's inside the limits"
#> [1] "--------------------------------------------------------------------------------------------"
#> [1] "Checking value in the position 6. It's value is 6.200"
#> [1] "Not an outlier, it's inside the limits"
#> [1] "--------------------------------------------------------------------------------------------"
#> [1] "Checking value in the position 7. It's value is 14.000"
#> [1] "The value in position 7 with value 14.000 has been detected as an outlier"
#> [1] "It was detected as an outlier because it's value is higher than the top limit 10.400"
#> [1] "--------------------------------------------------------------------------------------------"
#> [1] "Checking value in the position 8. It's value is 2.000"
#> [1] "Not an outlier, it's inside the limits"
#> [1] "--------------------------------------------------------------------------------------------"
#> [1] "Checking value in the position 9. It's value is 12.000"
#> [1] "The value in position 9 with value 12.000 has been detected as an outlier"
#> [1] "It was detected as an outlier because it's value is higher than the top limit 10.400"
#> [1] "--------------------------------------------------------------------------------------------"
#> [1] "Checking value in the position 10. It's value is 4.100"
#> [1] "Not an outlier, it's inside the limits"
#> [1] "--------------------------------------------------------------------------------------------"
#> [1] "Checking value in the position 11. It's value is 4.900"
#> [1] "Not an outlier, it's inside the limits"
#> [1] "--------------------------------------------------------------------------------------------"
#> [1] "Checking value in the position 12. It's value is 6.100"
#> [1] "Not an outlier, it's inside the limits"
#> [1] "--------------------------------------------------------------------------------------------"
#> [1] "Checking value in the position 13. It's value is 5.200"
#> [1] "Not an outlier, it's inside the limits"
#> [1] "--------------------------------------------------------------------------------------------"
#> [1] "Checking value in the position 14. It's value is 5.300"
#> [1] "Not an outlier, it's inside the limits"
#> [1] "--------------------------------------------------------------------------------------------"
#> The algorithm has ended
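The limits printed above can be reproduced in a few lines of base R. This sketch pools both columns of inputData (14 values), applies the quantile rule shown in the printed quantile.R source (element floor(n * v) + 1 of the sorted data), and computes the interval with d = 2; it is an illustration of the steps, not the package implementation.

```r
inputData = data.frame(t(matrix(c(3,2,3.5,12,4.7,4.1,5.2,4.9,7.1,6.1,6.2,5.2,14,5.3),
                                2, 7, dimnames = list(c("r","d")))));
d = 2;
values = sort(unlist(inputData));        # pool and sort all 14 values
q = function(v) values[floor(length(values) * v) + 1];
Q1 = q(0.25); Q3 = q(0.75);              # 4.1 and 6.2, as printed above
limits = c(Q1 - d * (Q3 - Q1), Q3 + d * (Q3 - Q1));
limits;                                   # -0.1 10.4
values[values < limits[1] | values > limits[2]];  # 12 and 14 flagged as outliers
```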

### DBSCAN (DBSCAN_method())

With the tutorial mode deactivated:

eps = 4;
min_pts = 3;
DBSCAN_method(inputData, eps, min_pts, FALSE);
#> The point 2 is an outlier
#> The point 7 is an outlier

With the tutorial mode activated:

eps = 4;
min_pts = 3;
DBSCAN_method(inputData, eps, min_pts, TRUE);
#> The tutorial mode has been activated for the DBSCAN algorithm (outlier detection)
#> Before processing the data, we must understand the algorithm and the 'theory' behind it.
#> The DBSCAN algorithm is based in this steps:
#>  Step 1: Initializing parameters
#>  Max distance threshold: 4.0000
#>  MinPts: 3.0000
#>  Step 2: Executing main loop
#>      If a point has already been visited, it skips to the next point.
#>      It then finds all neighbors of the current point within a distance of max_distance_threshold using the Euclidean distance function.
#>      If the number of neighbors is less than min_pts, the point is marked as noise (-1) and the loop proceeds to the next point.
#>      Otherwise, a new cluster is created, and the current point is assigned to this cluster.
#>      The algorithm then iterates over the neighbors of the current point, marking them as visited and recursively expanding the neighborhood.
#>      If a neighbor already belongs to a cluster, it assigns the same cluster id to the current point.
#>      After processing all points, the algorithm checks for outliers (points marked as -1) in the visited_array.
#>  Step 3: Identifying outliers
#>      If a point is marked as noise (-1), it is identified as an outlier.
#> With this simple steps explained, let's see how this is executed over the dataset given
#> Checking if the point 1 has already been visited
#> [1] "It has not been visited"
#> Calculate the distance between this point and the rest of the points. This is the equivalent to the RangeQuery() functionality
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Smaller, adding to neighbors"
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Bigger, not adding to neighbors"
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Smaller, adding to neighbors"
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Smaller, adding to neighbors"
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Bigger, not adding to neighbors"
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Bigger, not adding to neighbors"
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Bigger, not adding to neighbors"
#> Point 1 neighbors:
#> [1] 1 3 4
#> Is length of neighbors smaller than min_pts?
#> [1] "It's bigger, adding the point 1 to a cluster"
#> Executing the expandCluster() functionality
#> [1] "Adding point 1 to cluster 1"
#> [1] "Checking every single neighbor for the point"
#> [1] "Neighbor 1 belongs to another cluster."
#> [1] "Checking every single neighbor for the point"
#> [1] "Checking every single neighbor for the point"
#> Process finished for this point, skipping to next point
#> ------------------------------------------------------
#> Checking if the point 2 has already been visited
#> [1] "It has not been visited"
#> Calculate the distance between this point and the rest of the points. This is the equivalent to the RangeQuery() functionality
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Bigger, not adding to neighbors"
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Smaller, adding to neighbors"
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Bigger, not adding to neighbors"
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Bigger, not adding to neighbors"
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Bigger, not adding to neighbors"
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Bigger, not adding to neighbors"
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Bigger, not adding to neighbors"
#> Point 2 neighbors:
#> [1] 2
#> Is length of neighbors smaller than min_pts?
#> [1] "It's smaller, classifying the point 2 as an outlier and skipping to next point"
#> ------------------------------------------------------
#> Checking if the point 3 has already been visited
#> [1] "It has not been visited"
#> Calculate the distance between this point and the rest of the points. This is the equivalent to the RangeQuery() functionality
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Smaller, adding to neighbors"
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Bigger, not adding to neighbors"
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Smaller, adding to neighbors"
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Smaller, adding to neighbors"
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Smaller, adding to neighbors"
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Smaller, adding to neighbors"
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Bigger, not adding to neighbors"
#> Point 3 neighbors:
#> [1] 1 3 4 5 6
#> Is length of neighbors smaller than min_pts?
#> [1] "It's bigger, adding the point 3 to a cluster"
#> Executing the expandCluster() functionality
#> [1] "Adding point 3 to cluster 2"
#> [1] "Checking every single neighbor for the point"
#> [1] "Neighbor 1 belongs to another cluster."
#> [1] "Checking every single neighbor for the point"
#> [1] "Neighbor 3 belongs to another cluster."
#> [1] "Checking every single neighbor for the point"
#> [1] "Checking every single neighbor for the point"
#> [1] "Checking every single neighbor for the point"
#> Process finished for this point, skipping to next point
#> ------------------------------------------------------
#> Checking if the point 4 has already been visited
#> [1] "It has not been visited"
#> Calculate the distance between this point and the rest of the points. This is the equivalent to the RangeQuery() functionality
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Smaller, adding to neighbors"
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Bigger, not adding to neighbors"
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Smaller, adding to neighbors"
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Smaller, adding to neighbors"
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Smaller, adding to neighbors"
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Smaller, adding to neighbors"
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Bigger, not adding to neighbors"
#> Point 4 neighbors:
#> [1] 1 3 4 5 6
#> Is length of neighbors smaller than min_pts?
#> [1] "It's bigger, adding the point 4 to a cluster"
#> Executing the expandCluster() functionality
#> [1] "Adding point 4 to cluster 3"
#> [1] "Checking every single neighbor for the point"
#> [1] "Neighbor 1 belongs to another cluster."
#> [1] "Checking every single neighbor for the point"
#> [1] "Neighbor 3 belongs to another cluster."
#> [1] "Checking every single neighbor for the point"
#> [1] "Neighbor 4 belongs to another cluster."
#> [1] "Checking every single neighbor for the point"
#> [1] "Checking every single neighbor for the point"
#> Process finished for this point, skipping to next point
#> ------------------------------------------------------
#> Checking if the point 5 has already been visited
#> [1] "It has not been visited"
#> Calculate the distance between this point and the rest of the points. This is the equivalent to the RangeQuery() functionality
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Bigger, not adding to neighbors"
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Bigger, not adding to neighbors"
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Smaller, adding to neighbors"
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Smaller, adding to neighbors"
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Smaller, adding to neighbors"
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Smaller, adding to neighbors"
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Bigger, not adding to neighbors"
#> Point 5 neighbors:
#> [1] 3 4 5 6
#> Is length of neighbors smaller than min_pts?
#> [1] "It's bigger, adding the point 5 to a cluster"
#> Executing the expandCluster() functionality
#> [1] "Adding point 5 to cluster 4"
#> [1] "Checking every single neighbor for the point"
#> [1] "Neighbor 3 belongs to another cluster."
#> [1] "Checking every single neighbor for the point"
#> [1] "Neighbor 4 belongs to another cluster."
#> [1] "Checking every single neighbor for the point"
#> [1] "Neighbor 5 belongs to another cluster."
#> [1] "Checking every single neighbor for the point"
#> Process finished for this point, skipping to next point
#> ------------------------------------------------------
#> Checking if the point 6 has already been visited
#> [1] "It has not been visited"
#> Calculate the distance between this point and the rest of the points. This is the equivalent to the RangeQuery() functionality
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Bigger, not adding to neighbors"
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Bigger, not adding to neighbors"
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Smaller, adding to neighbors"
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Smaller, adding to neighbors"
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Smaller, adding to neighbors"
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Smaller, adding to neighbors"
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Bigger, not adding to neighbors"
#> Point 6 neighbors:
#> [1] 3 4 5 6
#> Is length of neighbors smaller than min_pts?
#> [1] "It's bigger, adding the point 6 to a cluster"
#> Executing the expandCluster() functionality
#> [1] "Adding point 6 to cluster 5"
#> [1] "Checking every single neighbor for the point"
#> [1] "Neighbor 3 belongs to another cluster."
#> [1] "Checking every single neighbor for the point"
#> [1] "Neighbor 4 belongs to another cluster."
#> [1] "Checking every single neighbor for the point"
#> [1] "Neighbor 5 belongs to another cluster."
#> [1] "Checking every single neighbor for the point"
#> [1] "Neighbor 6 belongs to another cluster."
#> Process finished for this point, skipping to next point
#> ------------------------------------------------------
#> Checking if the point 7 has already been visited
#> [1] "It has not been visited"
#> Calculate the distance between this point and the rest of the points. This is the equivalent to the RangeQuery() functionality
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Bigger, not adding to neighbors"
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Bigger, not adding to neighbors"
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Bigger, not adding to neighbors"
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Bigger, not adding to neighbors"
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Bigger, not adding to neighbors"
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Bigger, not adding to neighbors"
#> Checking if the euclidean distance is less than the max_distance_threshold
#> [1] "Smaller, adding to neighbors"
#> Point 7 neighbors:
#> [1] 7
#> Is length of neighbors smaller than min_pts?
#> [1] "It's smaller, classifying the point 7 as an outlier and skipping to next point"
#> ------------------------------------------------------
#> Checking the visited array looking for the points classified as outliers
#> The point 2 is an outlier
#> The point 7 is an outlier
#> The algorithm has ended
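The outlier-identification part of the trace above reduces to a neighbor count: a point whose eps-neighborhood (including itself) contains fewer than min_pts points is marked as noise. A minimal base-R sketch of just that check, not the full clustering:

```r
inputData = data.frame(t(matrix(c(3,2,3.5,12,4.7,4.1,5.2,4.9,7.1,6.1,6.2,5.2,14,5.3),
                                2, 7, dimnames = list(c("r","d")))));
eps = 4; min_pts = 3;
D = as.matrix(dist(inputData));   # pairwise Euclidean distances
n_neighbors = rowSums(D < eps);   # each point counts itself (distance 0)
which(n_neighbors < min_pts);     # points 2 and 7, as reported above
```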

### KNN (knn())

With the tutorial mode deactivated, K=2 and d=3:

knn(inputData,3,2,FALSE)
#> The point 2 is an outlier
#> The point 7 is an outlier

With the tutorial mode activated, K=2 and d=3:

knn(inputData,3,2,TRUE)
#> The tutorial mode has been activated for the KNN algorithm (outlier detection)
#> Before processing the data, we must understand the algorithm and the 'theory' behind it.
#> The knn algorithm to detect outliers is a method based on proximity. This algorithm has 2 main steps:
#>  Step A: Determine the degree of outlier or distance at which an event is considered an outlier (arbitrary)
#>      Substep a: Arbitrarily determine the degree of outlier or distance at which an event is considered an outlier (we will name it 'd')
#>      Substep b: Arbitrarily determine the order number, or K, of the nearest neighbor for which an event must have a degree of outlier to be considered an outlier
#>  Step B: Identify outliers using the k-Nearest Neighbors (k-NN) algorithm
#>      Substep a: Calculate Euclidean distances between all data points
#>      Substep b: Sort the neighbors of each point until reaching K
#>      Substep c: Identify outliers as events whose Kth neighbor is at a distance greater than the defined degree of outlier
#> We must define euclidean distance between 2 points (point A & point B for example). The formula is:
#>  sqrt((B_x - A_x)^2 + (B_y-A_y)^2)
#> Being A_x and B_x the x components of the A and B points. A_y and B_y are the y components of the A and B points
#> Now that we know how the algorithm works, let's apply it to our data.
#>
#> First we must calculate the euclidean distance between every single point in the data
#> [1] "Euclidean distance between point 1 (3.000,2.000) & point 1 (3.000,2.000): 0.000"
#> [1] "Euclidean distance between point 1 (3.000,2.000) & point 2 (3.500,12.000): 10.012"
#> [1] "Euclidean distance between point 1 (3.000,2.000) & point 3 (4.700,4.100): 2.702"
#> [1] "Euclidean distance between point 1 (3.000,2.000) & point 4 (5.200,4.900): 3.640"
#> [1] "Euclidean distance between point 1 (3.000,2.000) & point 5 (7.100,6.100): 5.798"
#> [1] "Euclidean distance between point 1 (3.000,2.000) & point 6 (6.200,5.200): 4.525"
#> [1] "Euclidean distance between point 1 (3.000,2.000) & point 7 (14.000,5.300): 11.484"
#> [1] "Euclidean distance between point 2 (3.500,12.000) & point 1 (3.000,2.000): 10.012"
#> [1] "Euclidean distance between point 2 (3.500,12.000) & point 2 (3.500,12.000): 0.000"
#> [1] "Euclidean distance between point 2 (3.500,12.000) & point 3 (4.700,4.100): 7.991"
#> [1] "Euclidean distance between point 2 (3.500,12.000) & point 4 (5.200,4.900): 7.301"
#> [1] "Euclidean distance between point 2 (3.500,12.000) & point 5 (7.100,6.100): 6.912"
#> [1] "Euclidean distance between point 2 (3.500,12.000) & point 6 (6.200,5.200): 7.316"
#> [1] "Euclidean distance between point 2 (3.500,12.000) & point 7 (14.000,5.300): 12.456"
#> [1] "Euclidean distance between point 3 (4.700,4.100) & point 1 (3.000,2.000): 2.702"
#> [1] "Euclidean distance between point 3 (4.700,4.100) & point 2 (3.500,12.000): 7.991"
#> [1] "Euclidean distance between point 3 (4.700,4.100) & point 3 (4.700,4.100): 0.000"
#> [1] "Euclidean distance between point 3 (4.700,4.100) & point 4 (5.200,4.900): 0.943"
#> [1] "Euclidean distance between point 3 (4.700,4.100) & point 5 (7.100,6.100): 3.124"
#> [1] "Euclidean distance between point 3 (4.700,4.100) & point 6 (6.200,5.200): 1.860"
#> [1] "Euclidean distance between point 3 (4.700,4.100) & point 7 (14.000,5.300): 9.377"
#> [1] "Euclidean distance between point 4 (5.200,4.900) & point 1 (3.000,2.000): 3.640"
#> [1] "Euclidean distance between point 4 (5.200,4.900) & point 2 (3.500,12.000): 7.301"
#> [1] "Euclidean distance between point 4 (5.200,4.900) & point 3 (4.700,4.100): 0.943"
#> [1] "Euclidean distance between point 4 (5.200,4.900) & point 4 (5.200,4.900): 0.000"
#> [1] "Euclidean distance between point 4 (5.200,4.900) & point 5 (7.100,6.100): 2.247"
#> [1] "Euclidean distance between point 4 (5.200,4.900) & point 6 (6.200,5.200): 1.044"
#> [1] "Euclidean distance between point 4 (5.200,4.900) & point 7 (14.000,5.300): 8.809"
#> [1] "Euclidean distance between point 5 (7.100,6.100) & point 1 (3.000,2.000): 5.798"
#> [1] "Euclidean distance between point 5 (7.100,6.100) & point 2 (3.500,12.000): 6.912"
#> [1] "Euclidean distance between point 5 (7.100,6.100) & point 3 (4.700,4.100): 3.124"
#> [1] "Euclidean distance between point 5 (7.100,6.100) & point 4 (5.200,4.900): 2.247"
#> [1] "Euclidean distance between point 5 (7.100,6.100) & point 5 (7.100,6.100): 0.000"
#> [1] "Euclidean distance between point 5 (7.100,6.100) & point 6 (6.200,5.200): 1.273"
#> [1] "Euclidean distance between point 5 (7.100,6.100) & point 7 (14.000,5.300): 6.946"
#> [1] "Euclidean distance between point 6 (6.200,5.200) & point 1 (3.000,2.000): 4.525"
#> [1] "Euclidean distance between point 6 (6.200,5.200) & point 2 (3.500,12.000): 7.316"
#> [1] "Euclidean distance between point 6 (6.200,5.200) & point 3 (4.700,4.100): 1.860"
#> [1] "Euclidean distance between point 6 (6.200,5.200) & point 4 (5.200,4.900): 1.044"
#> [1] "Euclidean distance between point 6 (6.200,5.200) & point 5 (7.100,6.100): 1.273"
#> [1] "Euclidean distance between point 6 (6.200,5.200) & point 6 (6.200,5.200): 0.000"
#> [1] "Euclidean distance between point 6 (6.200,5.200) & point 7 (14.000,5.300): 7.801"
#> [1] "Euclidean distance between point 7 (14.000,5.300) & point 1 (3.000,2.000): 11.484"
#> [1] "Euclidean distance between point 7 (14.000,5.300) & point 2 (3.500,12.000): 12.456"
#> [1] "Euclidean distance between point 7 (14.000,5.300) & point 3 (4.700,4.100): 9.377"
#> [1] "Euclidean distance between point 7 (14.000,5.300) & point 4 (5.200,4.900): 8.809"
#> [1] "Euclidean distance between point 7 (14.000,5.300) & point 5 (7.100,6.100): 6.946"
#> [1] "Euclidean distance between point 7 (14.000,5.300) & point 6 (6.200,5.200): 7.801"
#> [1] "Euclidean distance between point 7 (14.000,5.300) & point 7 (14.000,5.300): 0.000"
#> The distances matrix obtained is:
#>           [,1]      [,2]      [,3]      [,4]     [,5]     [,6]      [,7]
#> [1,]  0.000000 10.012492 2.7018512 3.6400549 5.798276 4.525483 11.484337
#> [2,] 10.012492  0.000000 7.9906195 7.3006849 6.911584 7.316420 12.455521
#> [3,]  2.701851  7.990620 0.0000000 0.9433981 3.124100 1.860108  9.377100
#> [4,]  3.640055  7.300685 0.9433981 0.0000000 2.247221 1.044031  8.809086
#> [5,]  5.798276  6.911584 3.1240999 2.2472205 0.000000 1.272792  6.946222
#> [6,]  4.525483  7.316420 1.8601075 1.0440307 1.272792 0.000000  7.800641
#> [7,] 11.484337 12.455521 9.3770998 8.8090862 6.946222 7.800641  0.000000
#> We order the distances by columns and show the outliers
#> The distances matrix sorted in step 1 is:
#>           [,1]      [,2]      [,3]      [,4]     [,5]     [,6]      [,7]
#> [1,]  0.000000 10.012492 2.7018512 3.6400549 5.798276 4.525483 11.484337
#> [2,]  2.701851  0.000000 7.9906195 7.3006849 6.911584 7.316420 12.455521
#> [3,]  3.640055  7.990620 0.0000000 0.9433981 3.124100 1.860108  9.377100
#> [4,]  4.525483  7.300685 0.9433981 0.0000000 2.247221 1.044031  8.809086
#> [5,]  5.798276  6.911584 3.1240999 2.2472205 0.000000 1.272792  6.946222
#> [6,] 10.012492  7.316420 1.8601075 1.0440307 1.272792 0.000000  7.800641
#> [7,] 11.484337 12.455521 9.3770998 8.8090862 6.946222 7.800641  0.000000
#> The Kth neighbor for the point 1 has a value of 2.702
#> The distance is smaller than the value stablished in 'd' so it's not an outlier.
#> The point 1 is not an outlier
#> The distances matrix sorted in step 2 is:
#>           [,1]      [,2]      [,3]      [,4]     [,5]     [,6]      [,7]
#> [1,]  0.000000  0.000000 2.7018512 3.6400549 5.798276 4.525483 11.484337
#> [2,]  2.701851  6.911584 7.9906195 7.3006849 6.911584 7.316420 12.455521
#> [3,]  3.640055  7.300685 0.0000000 0.9433981 3.124100 1.860108  9.377100
#> [4,]  4.525483  7.316420 0.9433981 0.0000000 2.247221 1.044031  8.809086
#> [5,]  5.798276  7.990620 3.1240999 2.2472205 0.000000 1.272792  6.946222
#> [6,] 10.012492 10.012492 1.8601075 1.0440307 1.272792 0.000000  7.800641
#> [7,] 11.484337 12.455521 9.3770998 8.8090862 6.946222 7.800641  0.000000
#> The Kth neighbor for the point 2 has a value of 6.912
#> The distance is greater than the value stablished in 'd' so it's an outlier.
#> The point 2 is an outlier
#> The distances matrix sorted in step 3 is:
#>           [,1]      [,2]      [,3]      [,4]     [,5]     [,6]      [,7]
#> [1,]  0.000000  0.000000 0.0000000 3.6400549 5.798276 4.525483 11.484337
#> [2,]  2.701851  6.911584 0.9433981 7.3006849 6.911584 7.316420 12.455521
#> [3,]  3.640055  7.300685 1.8601075 0.9433981 3.124100 1.860108  9.377100
#> [4,]  4.525483  7.316420 2.7018512 0.0000000 2.247221 1.044031  8.809086
#> [5,]  5.798276  7.990620 3.1240999 2.2472205 0.000000 1.272792  6.946222
#> [6,] 10.012492 10.012492 7.9906195 1.0440307 1.272792 0.000000  7.800641
#> [7,] 11.484337 12.455521 9.3770998 8.8090862 6.946222 7.800641  0.000000
#> The Kth neighbor for the point 3 has a value of 0.943
#> The distance is smaller than the value stablished in 'd' so it's not an outlier.
#> The point 3 is not an outlier
#> The distances matrix sorted in step 4 is:
#>           [,1]      [,2]      [,3]      [,4]     [,5]     [,6]      [,7]
#> [1,]  0.000000  0.000000 0.0000000 0.0000000 5.798276 4.525483 11.484337
#> [2,]  2.701851  6.911584 0.9433981 0.9433981 6.911584 7.316420 12.455521
#> [3,]  3.640055  7.300685 1.8601075 1.0440307 3.124100 1.860108  9.377100
#> [4,]  4.525483  7.316420 2.7018512 2.2472205 2.247221 1.044031  8.809086
#> [5,]  5.798276  7.990620 3.1240999 3.6400549 0.000000 1.272792  6.946222
#> [6,] 10.012492 10.012492 7.9906195 7.3006849 1.272792 0.000000  7.800641
#> [7,] 11.484337 12.455521 9.3770998 8.8090862 6.946222 7.800641  0.000000
#> The Kth neighbor for the point 4 has a value of 0.943
#> The distance is smaller than the value stablished in 'd' so it's not an outlier.
#> The point 4 is not an outlier
#> The distances matrix sorted in step 5 is:
#>           [,1]      [,2]      [,3]      [,4]     [,5]     [,6]      [,7]
#> [1,]  0.000000  0.000000 0.0000000 0.0000000 0.000000 4.525483 11.484337
#> [2,]  2.701851  6.911584 0.9433981 0.9433981 1.272792 7.316420 12.455521
#> [3,]  3.640055  7.300685 1.8601075 1.0440307 2.247221 1.860108  9.377100
#> [4,]  4.525483  7.316420 2.7018512 2.2472205 3.124100 1.044031  8.809086
#> [5,]  5.798276  7.990620 3.1240999 3.6400549 5.798276 1.272792  6.946222
#> [6,] 10.012492 10.012492 7.9906195 7.3006849 6.911584 0.000000  7.800641
#> [7,] 11.484337 12.455521 9.3770998 8.8090862 6.946222 7.800641  0.000000
#> The Kth neighbor for the point 5 has a value of 1.273
#> The distance is smaller than the value stablished in 'd' so it's not an outlier.
#> The point 5 is not an outlier
#> The distances matrix sorted in step 6 is:
#>           [,1]      [,2]      [,3]      [,4]     [,5]     [,6]      [,7]
#> [1,]  0.000000  0.000000 0.0000000 0.0000000 0.000000 0.000000 11.484337
#> [2,]  2.701851  6.911584 0.9433981 0.9433981 1.272792 1.044031 12.455521
#> [3,]  3.640055  7.300685 1.8601075 1.0440307 2.247221 1.272792  9.377100
#> [4,]  4.525483  7.316420 2.7018512 2.2472205 3.124100 1.860108  8.809086
#> [5,]  5.798276  7.990620 3.1240999 3.6400549 5.798276 4.525483  6.946222
#> [6,] 10.012492 10.012492 7.9906195 7.3006849 6.911584 7.316420  7.800641
#> [7,] 11.484337 12.455521 9.3770998 8.8090862 6.946222 7.800641  0.000000
#> The Kth neighbor for the point 6 has a value of 1.044
#> The distance is smaller than the value stablished in 'd' so it's not an outlier.
#> The point 6 is not an outlier
#> The distances matrix sorted in step 7 is:
#>           [,1]      [,2]      [,3]      [,4]     [,5]     [,6]      [,7]
#> [1,]  0.000000  0.000000 0.0000000 0.0000000 0.000000 0.000000  0.000000
#> [2,]  2.701851  6.911584 0.9433981 0.9433981 1.272792 1.044031  6.946222
#> [3,]  3.640055  7.300685 1.8601075 1.0440307 2.247221 1.272792  7.800641
#> [4,]  4.525483  7.316420 2.7018512 2.2472205 3.124100 1.860108  8.809086
#> [5,]  5.798276  7.990620 3.1240999 3.6400549 5.798276 4.525483  9.377100
#> [6,] 10.012492 10.012492 7.9906195 7.3006849 6.911584 7.316420 11.484337
#> [7,] 11.484337 12.455521 9.3770998 8.8090862 6.946222 7.800641 12.455521
#> The Kth neighbor for the point 7 has a value of 6.946
#> The distance is greater than the value stablished in 'd' so it's an outlier.
#> The point 7 is an outlier
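The per-point check traced above (sort each column of the Euclidean distance matrix, take the distance to the Kth nearest neighbor, and compare it against d) can be condensed into a few lines of base R. This is a hypothetical sketch, not the package implementation, and the values of K and d below are assumptions, since the original function call is not shown in this excerpt:

```r
inputData = t(matrix(c(3,2,3.5,12,4.7,4.1,5.2,4.9,7.1,6.1,6.2,5.2,14,5.3),2,7,
                     dimnames=list(c("r","d"))));
K = 1; d_threshold = 4;                  # assumed values of K and 'd'
distMatrix = as.matrix(dist(inputData)); # Euclidean distance matrix
# Kth-neighbor distance per point: sort each column and skip the zero self-distance
kthDistances = apply(distMatrix, 2, function(col) sort(col)[K + 1]);
print(unname(which(kthDistances > d_threshold))); # the outliers
#> [1] 2 7
```

With these assumed parameters the sketch flags the same points (2 and 7) as the trace above.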

### LOF simplified (lof())

With the tutorial mode deactivated, K set to 3 and the threshold set to 0.5:

lof(inputData, 3, 0.5, FALSE);
#> [1] "Threshold selected: 0.500000"
#> The point 1 is an outlier because its ard is lower than 0.500000
#> The point 1 has an average relative density of 0.3506
#> The point 2 is an outlier because its ard is lower than 0.500000
#> The point 2 has an average relative density of 0.1743
#> The point 7 is an outlier because its ard is lower than 0.500000
#> The point 7 has an average relative density of 0.2434

With the tutorial mode activated and the same input parameters:

lof(inputData, 3, 0.5, TRUE);
#> The tutorial mode has been activated for the simplified LOF algorithm (outlier detection)
#> Before processing the data, we must understand the algorithm and the 'theory' behind it.
#> This is a simplified version of the LOF algorithm. This version detects outliers going though this steps:
#>  1) Calculate the degree of outlier of each point by obtaining the density of each point. This has 4 substeps:
#>      a. Determine the 'order number' (K) or closest neighbor that will be used to calculate the density of each number (arbitrary)
#>      b. Calculate the distance between each point and the resto of the points, this distance is calculated with the Manhattan distance equation/function:
#>          The equation is this: manhattanDistance(A,B) = |A_x - B_x| + |A_y - B_y|
#>      c.  Calculate the cardinal for each point: N is the set that contains the neighbors which distance xi is the same or less than the K nearest neighbor.
#>      d.  Calculate the density for each point. This is a technique very close to the proximity.
#>          The function to calculate the density is this: density*italic(x[i], K) == (frac(sum(italic(x[j]) %in% N(italic(x[i], K)), distance(italic(x[i]), italic(x[j]))), cardinalN(italic(x[i], K)))^-1
#>  2) Calculate the average relative density for each point using the next equation:
#>      ard*italic(x[i], K) == frac(density*italic(x[i], K), frac(sum(italic(x[j]) %in% N(italic(x[i], K)), density*italic(x[j], K)), cardinalN(italic(x[i], K))))
#>      This calculates the proportion between a point and the average mean of the densities of the set N that defines that point using the order number K. The average distance will tend to 0 on the outliers.
#>  3)  Obtain the outliers: will classify a point as an outlier when the average relative density is significantly smaller than the rest of the elements in the sample
#>       In the current LOF simplified implemented algorithm, it has been chosen to implement this last step with a threshold specified by the user
#>       This threshold value is compared to each ARD calculated for each point. If the value is smaller than the threshold, then the point is classified as an outlier
#>       On the other hand, if the value is greater or equal to the threshold, the point is classified as an  inlier (a normal point)
#> Now that we understand how the algorithm works, it will be executed to the input data with the parameters that have been set
#> Calculate Euclidean distances between all points:
#> Calculating distance between points (manhattan distance):
#> [1] 1
#> [1] 1
#> [1] "Calculated distance: 0.0000"
#> Calculating distance between points (manhattan distance):
#> [1] 1
#> [1] 2
#> [1] "Calculated distance: 10.5000"
#> Calculating distance between points (manhattan distance):
#> [1] 1
#> [1] 3
#> [1] "Calculated distance: 3.8000"
#> Calculating distance between points (manhattan distance):
#> [1] 1
#> [1] 4
#> [1] "Calculated distance: 5.1000"
#> Calculating distance between points (manhattan distance):
#> [1] 1
#> [1] 5
#> [1] "Calculated distance: 8.2000"
#> Calculating distance between points (manhattan distance):
#> [1] 1
#> [1] 6
#> [1] "Calculated distance: 6.4000"
#> Calculating distance between points (manhattan distance):
#> [1] 1
#> [1] 7
#> [1] "Calculated distance: 14.3000"
#> Calculating distance between points (manhattan distance):
#> [1] 2
#> [1] 1
#> [1] "Calculated distance: 10.5000"
#> Calculating distance between points (manhattan distance):
#> [1] 2
#> [1] 2
#> [1] "Calculated distance: 0.0000"
#> Calculating distance between points (manhattan distance):
#> [1] 2
#> [1] 3
#> [1] "Calculated distance: 9.1000"
#> Calculating distance between points (manhattan distance):
#> [1] 2
#> [1] 4
#> [1] "Calculated distance: 8.8000"
#> Calculating distance between points (manhattan distance):
#> [1] 2
#> [1] 5
#> [1] "Calculated distance: 9.5000"
#> Calculating distance between points (manhattan distance):
#> [1] 2
#> [1] 6
#> [1] "Calculated distance: 9.5000"
#> Calculating distance between points (manhattan distance):
#> [1] 2
#> [1] 7
#> [1] "Calculated distance: 17.2000"
#> Calculating distance between points (manhattan distance):
#> [1] 3
#> [1] 1
#> [1] "Calculated distance: 3.8000"
#> Calculating distance between points (manhattan distance):
#> [1] 3
#> [1] 2
#> [1] "Calculated distance: 9.1000"
#> Calculating distance between points (manhattan distance):
#> [1] 3
#> [1] 3
#> [1] "Calculated distance: 0.0000"
#> Calculating distance between points (manhattan distance):
#> [1] 3
#> [1] 4
#> [1] "Calculated distance: 1.3000"
#> Calculating distance between points (manhattan distance):
#> [1] 3
#> [1] 5
#> [1] "Calculated distance: 4.4000"
#> Calculating distance between points (manhattan distance):
#> [1] 3
#> [1] 6
#> [1] "Calculated distance: 2.6000"
#> Calculating distance between points (manhattan distance):
#> [1] 3
#> [1] 7
#> [1] "Calculated distance: 10.5000"
#> Calculating distance between points (manhattan distance):
#> [1] 4
#> [1] 1
#> [1] "Calculated distance: 5.1000"
#> Calculating distance between points (manhattan distance):
#> [1] 4
#> [1] 2
#> [1] "Calculated distance: 8.8000"
#> Calculating distance between points (manhattan distance):
#> [1] 4
#> [1] 3
#> [1] "Calculated distance: 1.3000"
#> Calculating distance between points (manhattan distance):
#> [1] 4
#> [1] 4
#> [1] "Calculated distance: 0.0000"
#> Calculating distance between points (manhattan distance):
#> [1] 4
#> [1] 5
#> [1] "Calculated distance: 3.1000"
#> Calculating distance between points (manhattan distance):
#> [1] 4
#> [1] 6
#> [1] "Calculated distance: 1.3000"
#> Calculating distance between points (manhattan distance):
#> [1] 4
#> [1] 7
#> [1] "Calculated distance: 9.2000"
#> Calculating distance between points (manhattan distance):
#> [1] 5
#> [1] 1
#> [1] "Calculated distance: 8.2000"
#> Calculating distance between points (manhattan distance):
#> [1] 5
#> [1] 2
#> [1] "Calculated distance: 9.5000"
#> Calculating distance between points (manhattan distance):
#> [1] 5
#> [1] 3
#> [1] "Calculated distance: 4.4000"
#> Calculating distance between points (manhattan distance):
#> [1] 5
#> [1] 4
#> [1] "Calculated distance: 3.1000"
#> Calculating distance between points (manhattan distance):
#> [1] 5
#> [1] 5
#> [1] "Calculated distance: 0.0000"
#> Calculating distance between points (manhattan distance):
#> [1] 5
#> [1] 6
#> [1] "Calculated distance: 1.8000"
#> Calculating distance between points (manhattan distance):
#> [1] 5
#> [1] 7
#> [1] "Calculated distance: 7.7000"
#> Calculating distance between points (manhattan distance):
#> [1] 6
#> [1] 1
#> [1] "Calculated distance: 6.4000"
#> Calculating distance between points (manhattan distance):
#> [1] 6
#> [1] 2
#> [1] "Calculated distance: 9.5000"
#> Calculating distance between points (manhattan distance):
#> [1] 6
#> [1] 3
#> [1] "Calculated distance: 2.6000"
#> Calculating distance between points (manhattan distance):
#> [1] 6
#> [1] 4
#> [1] "Calculated distance: 1.3000"
#> Calculating distance between points (manhattan distance):
#> [1] 6
#> [1] 5
#> [1] "Calculated distance: 1.8000"
#> Calculating distance between points (manhattan distance):
#> [1] 6
#> [1] 6
#> [1] "Calculated distance: 0.0000"
#> Calculating distance between points (manhattan distance):
#> [1] 6
#> [1] 7
#> [1] "Calculated distance: 7.9000"
#> Calculating distance between points (manhattan distance):
#> [1] 7
#> [1] 1
#> [1] "Calculated distance: 14.3000"
#> Calculating distance between points (manhattan distance):
#> [1] 7
#> [1] 2
#> [1] "Calculated distance: 17.2000"
#> Calculating distance between points (manhattan distance):
#> [1] 7
#> [1] 3
#> [1] "Calculated distance: 10.5000"
#> Calculating distance between points (manhattan distance):
#> [1] 7
#> [1] 4
#> [1] "Calculated distance: 9.2000"
#> Calculating distance between points (manhattan distance):
#> [1] 7
#> [1] 5
#> [1] "Calculated distance: 7.7000"
#> Calculating distance between points (manhattan distance):
#> [1] 7
#> [1] 6
#> [1] "Calculated distance: 7.9000"
#> Calculating distance between points (manhattan distance):
#> [1] 7
#> [1] 7
#> [1] "Calculated distance: 0.0000"
#> The calculated matrix of distances is:
#>      [,1] [,2] [,3] [,4] [,5] [,6] [,7]
#> [1,]  0.0 10.5  3.8  5.1  8.2  6.4 14.3
#> [2,] 10.5  0.0  9.1  8.8  9.5  9.5 17.2
#> [3,]  3.8  9.1  0.0  1.3  4.4  2.6 10.5
#> [4,]  5.1  8.8  1.3  0.0  3.1  1.3  9.2
#> [5,]  8.2  9.5  4.4  3.1  0.0  1.8  7.7
#> [6,]  6.4  9.5  2.6  1.3  1.8  0.0  7.9
#> [7,] 14.3 17.2 10.5  9.2  7.7  7.9  0.0
#> After calculating the distances between points, we calculate the cardinal for each point
#> To do this, we need to sort the distance matrix (by columns)
#> The distance matrix sorted by columns is as follows:
#>      [,1] [,2] [,3] [,4] [,5] [,6] [,7]
#> [1,]  0.0  0.0  0.0  0.0  0.0  0.0  0.0
#> [2,]  3.8  8.8  1.3  1.3  1.8  1.3  7.7
#> [3,]  5.1  9.1  2.6  1.3  3.1  1.8  7.9
#> [4,]  6.4  9.5  3.8  3.1  4.4  2.6  9.2
#> [5,]  8.2  9.5  4.4  5.1  7.7  6.4 10.5
#> [6,] 10.5 10.5  9.1  8.8  8.2  7.9 14.3
#> [7,] 14.3 17.2 10.5  9.2  9.5  9.5 17.2
#> We obtain a vector of the cardinals
#> In column
#> [1] 1
#> Cardinal calculated:
#> [1] 2
#> In column
#> [1] 2
#> Cardinal calculated:
#> [1] 2
#> In column
#> [1] 3
#> Cardinal calculated:
#> [1] 2
#> In column
#> [1] 4
#> Cardinal calculated:
#> [1] 2
#> In column
#> [1] 5
#> Cardinal calculated:
#> [1] 2
#> In column
#> [1] 6
#> Cardinal calculated:
#> [1] 2
#> In column
#> [1] 7
#> Cardinal calculated:
#> [1] 2
#> The cardinals vector resulting is:
#> [1] 2 2 2 2 2 2 2
#> With the obtained cardinals, we get the densities of each point:
#> For point
#> [1] 1
#> Value of density:
#> [1] 0.2247191
#> For point
#> [1] 2
#> Value of density:
#> [1] 0.1117318
#> For point
#> [1] 3
#> Value of density:
#> [1] 0.5128205
#> For point
#> [1] 4
#> Value of density:
#> [1] 0.7692308
#> For point
#> [1] 5
#> Value of density:
#> [1] 0.4081633
#> For point
#> [1] 6
#> Value of density:
#> [1] 0.6451613
#> For point
#> [1] 7
#> Value of density:
#> [1] 0.1282051
#> All densities calculated:
#> [1] 0.2247191 0.1117318 0.5128205 0.7692308 0.4081633 0.6451613 0.1282051
#> With the calculated densities, we are going to calculate the average relative density (ard) for each point:
#> For point:
#> [1] 1
#> Average Relative Density calculated:
#> [1] 0.3505618
#> For point:
#> [1] 2
#> Average Relative Density calculated:
#> [1] 0.1743017
#> For point:
#> [1] 3
#> Average Relative Density calculated:
#> [1] 0.7251462
#> For point:
#> [1] 4
#> Average Relative Density calculated:
#> [1] 1.328571
#> For point:
#> [1] 5
#> Average Relative Density calculated:
#> [1] 0.5771572
#> For point:
#> [1] 6
#> Average Relative Density calculated:
#> [1] 1.095914
#> For point:
#> [1] 7
#> Average Relative Density calculated:
#> [1] 0.2434295
#> All the ards calculated:
#> [1] 0.3505618 0.1743017 0.7251462 1.3285714 0.5771572 1.0959140 0.2434295
#> The last step is to classify the outliers comparing the ards calculated with the threshold
#> [1] "Threshold selected: 0.500000"
#> The point 1 is an outlier because its ard is lower than 0.500000
#> The point 1 has an average relative density of 0.3506
#> The point 2 is an outlier because its ard is lower than 0.500000
#> The point 2 has an average relative density of 0.1743
#> The point 7 is an outlier because its ard is lower than 0.500000
#> The point 7 has an average relative density of 0.2434
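The whole procedure traced above (Manhattan distances, cardinals, densities, and average relative densities) can be reproduced with a short base R sketch. This is a hypothetical illustration of the steps, not the package's own code:

```r
inputData = t(matrix(c(3,2,3.5,12,4.7,4.1,5.2,4.9,7.1,6.1,6.2,5.2,14,5.3),2,7,
                     dimnames=list(c("r","d"))));
K = 3; threshold = 0.5;
distMatrix = as.matrix(dist(inputData, method="manhattan"));
n = nrow(inputData);
densities = numeric(n); neighbours = vector("list", n);
for (i in 1:n) {
  kthDist = sort(distMatrix[,i])[K];                 # Kth smallest distance (the zero self-distance counts)
  N = setdiff(which(distMatrix[,i] <= kthDist), i);  # the set N(x_i, K); its size is the cardinal
  neighbours[[i]] = N;
  densities[i] = length(N) / sum(distMatrix[N,i]);   # inverse of the mean distance to N
}
ards = sapply(1:n, function(i) densities[i] / mean(densities[neighbours[[i]]]));
print(round(ards, 4));
#> [1] 0.3506 0.1743 0.7251 1.3286 0.5772 1.0959 0.2434
print(which(ards < threshold));  # the outliers
#> [1] 1 2 7
```

The densities and average relative densities computed by this sketch match the values printed in the trace, and the same three points fall below the 0.5 threshold.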

### Mahalanobis Method (mahalanobis_method())

With the tutorial mode deactivated and alpha set to 0.7:

mahalanobis_method(inputData, 0.7, FALSE);
#> Critical Value:
#> [1] 0.7133499
#> The observation 1 is an outlier
#> The values of the observation are:
#> r d
#> 3 2
#> The observation 2 is an outlier
#> The values of the observation are:
#>    r    d
#>  3.5 12.0
#> The observation 7 is an outlier
#> The values of the observation are:
#>    r    d
#> 14.0  5.3

With the tutorial mode activated and the same value of alpha:

mahalanobis_method(inputData, 0.7, TRUE);
#> The tutorial mode has been activated for the Mahalanobis Distance Outlier Detection Method
#> Before processing the data, we must understand the algorithm and the 'theory' behind it.
#> The algorithm is made up with 6 steps:
#>  1)Check if the input value 'alpha' is in the desired range
#>      If this is true (between 0 and 1), then continue to the next step. If the value is greater than 1 or smaller than 0, end the algorithm.
#>      The concept of the input parameter alpha is the proportion of observations used for the estimation of the critical value (distance value calculated with a chi-squared distribution using alpha)
#>  2)Calculate the mean for each column of the dataset.
#>      In other words, calculate the mean value for each 'dimension' of the dataset.
#>      This is done by adding all the values in every single column and then dividing by the number of elements that have been added.
#>      With this step, the algorithm now has available a vector of means (each position is the mean of the column of the vector/array position).
#>  3)Calculate the covariance matrix.
#>      The covariance matrix is a square matrix with diagonal elements that represent the variance and the non-diagonal components that express covariance.
#>      The covariance of a variable can take any real value (a positive covariance suggests that the two variables have a positive relationship. On the other hand, a negative value indicates that they don't have a positive relationship. If they don't vary together, they have a zero value).
#>      The implementation chosen for this algorithm due to the fact that it's not relevant the implementation of this function is with a R native function.
#>      It's important to know what is the covariance matrix but, because of the nature of the Outliers Learn R package, it's not crucial to implement this function from scratch (it's one of the only 2 functions that have not been implemented from scratch in the R package).
#>  4)Obtain the Mahalanobis squared distances vector.
#>      This is one of the most 'crucial' steps of the Mahalanobis distance method for outlier detection.
#>      It's important to highlight that the Mahalanobis distance function has been implemented from scratch due to the importance of it for the algorithm.
#>      Even though there is an implementation to obtain the Mahalanobis squared distances from a dataset in R, this function has been implemented because it's a really important key concept the reader has to be able to 'see' implemented and be able to use it.
#>      The implementation calculates the Mahalanobis distance from a point to the mean using the covariance matrix using this formula:
#>          D = sqrt((X-means)'*inverted_cov_matrix*(X-means))
#>      Going back to what to do in this step: calculate the Mahalanobis distance between each point and the 'center' using the mean vector and the covariance matrix calculated in steps 2) and 3) with the previous formula.
#>      With the distances calculated, elevate them to square so that the distances vector is D^2.
#>  5)Calculate the critical value
#>      With the Mahalanobis squared distances calculated, the next step is to calculate the critical value.
#>      This is done with a chi-squared distribution.
#>      The function used in the implementation is an R native function due to the complexity of it.
#>      The corresponding function returns the critical value such that the probability of a chi-squared random variable with degrees of freedom equal to the dimensions of the input dataset exceeding this value is alpha (explained briefly in the first step).
#>  6)Classify the points using the critical value
#>      With the critical value calculated, the last step is to check every single distance calculated and if the value is greater than the critical value, the point associated with the distance is classified as an outlier.
#>      If not, the point associated with the distance is classified as an inlier (not an outlier).
#> With the theory understood, we will apply this knowledge to the data given to obtain the outliers
#> ----------------------------------------------------------
#> Check if the input value alpha is smaller or equal to 1.
#> If this is true, then continue to the next step. If the value is greater than 1, end the algorithm.
#> Calculate the mean for each column of the dataset.
#> [1] "Calculated mean for column 1: 6.242857"
#> [1] "Calculated mean for column 2: 5.657143"
#> Mean vector calculated:
#> [1] 6.242857 5.657143
#> Calculate the covariance matrix.
#> Covariance Matrix calculated:
#>            r          d
#> r 13.7361905 -0.7861905
#> d -0.7861905  9.5228571
#> Obtain the Mahalanobis squared distances vector.
#> [1] "Mahalanobis distance for point 1: 1.524336"
#> [1] "Mahalanobis distance for point 2: 2.141261"
#> [1] "Mahalanobis distance for point 3: 0.677466"
#> [1] "Mahalanobis distance for point 4: 0.386744"
#> [1] "Mahalanobis distance for point 5: 0.281100"
#> [1] "Mahalanobis distance for point 6: 0.149734"
#> [1] "Mahalanobis distance for point 7: 2.093187"
#> The distances vector (D) is:
#> [1] 1.5243357 2.1412611 0.6774662 0.3867442 0.2811000 0.1497338 2.0931872
#> Square the Mahalanobis distances.
#> The squared_distance vector (D^2) is:
#> [1] 2.32359939 4.58499903 0.45896039 0.14957109 0.07901719 0.02242021 4.38143269
#> Calculate the critical value.
#> [1] "Degrees of freedom: 2"
#> [1] "Alpha value: 0.700000"
#> [1] "1-alpha = 0.300000"
#> Critical Value:
#> [1] 0.7133499
#> Classify points based on the critical value
#> The observation 1 is an outlier (squared distance 2.323599 is greater than the critical value 0.713350
#> The values of the observation are:
#> r d
#> 3 2
#> The observation 2 is an outlier (squared distance 4.584999 is greater than the critical value 0.713350
#> The values of the observation are:
#>    r    d
#>  3.5 12.0
#> The observation 7 is an outlier (squared distance 4.381433 is greater than the critical value 0.713350
#> The values of the observation are:
#>    r    d
#> 14.0  5.3
#> The algorithm has ended
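The six steps above map almost one-to-one onto base R functions. The following is a hypothetical sketch of the method; where the package implements the Mahalanobis distance from scratch, the built-in `mahalanobis()` and `qchisq()` functions are used here for brevity:

```r
inputData = t(matrix(c(3,2,3.5,12,4.7,4.1,5.2,4.9,7.1,6.1,6.2,5.2,14,5.3),2,7,
                     dimnames=list(c("r","d"))));
inputData = data.frame(inputData);
alpha = 0.7;
X = as.matrix(inputData);
sampleMeans = colMeans(X);                                  # step 2: mean of each column
covMatrix = cov(X);                                         # step 3: covariance matrix
squaredDistances = mahalanobis(X, sampleMeans, covMatrix);  # step 4: D^2
criticalValue = qchisq(1 - alpha, df = ncol(X));            # step 5: chi-squared critical value
print(criticalValue);
#> [1] 0.7133499
print(unname(which(squaredDistances > criticalValue)));     # step 6: the outliers
#> [1] 1 2 7
```

This reproduces the critical value and the squared distances printed in the trace, flagging observations 1, 2, and 7.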

### Z-score method (z_score_method())

With the tutorial mode deactivated and d set to 2:

z_score_method(inputData,2,FALSE);
#> [1] "Limits: "
#>         r1         r1
#> -0.3915861 12.2915861

#> [1] "The value in position 7 with value 14.000 has been detected as an outlier"
#> [1] "It was detected as an outlier because it's value is higher than the top limit 12.292"
#> [1] "--------------------------------------------------------------------------------------------"

With the tutorial mode activated and the same value of d:

z_score_method(inputData,2,TRUE);
#> The tutorial mode has been activated for the standard deviation method algorithm (outlier detection)
#> Before processing the data, we must understand the algorithm and the 'theory' behind it.
#> Identification of outliers using Statistics and Standard Deviation involves the following steps:
#>  1. Determination of the degree of outlier (We will call it 'd')
#>  2. Obtain the arithmetic mean with the following formula:
#>      mean = sum(x) / N
#>   We calculate the mean adding all the values from the data and dividing for the length of the data
#>  3. Obtain the standard deviation with the following formula:
#>      sd = sqrt(sum((x - mean)^2) / N)
#>  We calculate the sum of every single element of the data minus the mean elevated to 2. Then we divide it for the data length
#>  4. Calculate the interval limits using the following equation:
#>      (mean - d * sd, mean + d * sd)
#>  5. Identification of outliers as values that fall outside the interval calculated in step 4.
#> Now that we know how to apply this algorithm, we are going to see how it works with the given data:
#>   r1   r2   r3   r4   r5   r6   r7   d1   d2   d3   d4   d5   d6   d7
#>  3.0  3.5  4.7  5.2  7.1  6.2 14.0  2.0 12.0  4.1  4.9  6.1  5.2  5.3
#> The degree of outlier selected ('d') selected is:
#> [1] 2
#> First we calculate the mean using the formula described before:
#>   r1
#> 5.95
#> Now we calculate the standard deviation using the formula described before:
#>       r1
#> 3.170793
#> With those values calculated, we obtain the limits:
#> First we calculate the lower limit
#>  mean-stddev * d
#>         r1
#> -0.3915861
#> Now we calculate the top limit
#>  mean+stddev*d
#>       r1
#> 12.29159
#> This are the obtained limits
#>         r1         r1
#> -0.3915861 12.2915861
#> Now that we have calculated the limits, we will check if every single value is 'inside' those boundaries obtained.
#> If the value is not included inside the limits, it will be detected as an outlier
#> [1] "Checking value in the position 1. It's value is 3.000"
#> [1] "Not an outlier, it's inside the limits"
#> [1] "--------------------------------------------------------------------------------------------"
#> [1] "Checking value in the position 2. It's value is 3.500"
#> [1] "Not an outlier, it's inside the limits"
#> [1] "--------------------------------------------------------------------------------------------"
#> [1] "Checking value in the position 3. It's value is 4.700"
#> [1] "Not an outlier, it's inside the limits"
#> [1] "--------------------------------------------------------------------------------------------"
#> [1] "Checking value in the position 4. It's value is 5.200"
#> [1] "Not an outlier, it's inside the limits"
#> [1] "--------------------------------------------------------------------------------------------"
#> [1] "Checking value in the position 5. It's value is 7.100"
#> [1] "Not an outlier, it's inside the limits"
#> [1] "--------------------------------------------------------------------------------------------"
#> [1] "Checking value in the position 6. It's value is 6.200"
#> [1] "Not an outlier, it's inside the limits"
#> [1] "--------------------------------------------------------------------------------------------"
#> [1] "Checking value in the position 7. It's value is 14.000"
#> [1] "The value in position 7 with value 14.000 has been detected as an outlier"
#> [1] "It was detected as an outlier because it's value is higher than the top limit 12.292"
#> [1] "--------------------------------------------------------------------------------------------"
#> [1] "Checking value in the position 8. It's value is 2.000"
#> [1] "Not an outlier, it's inside the limits"
#> [1] "--------------------------------------------------------------------------------------------"
#> [1] "Checking value in the position 9. It's value is 12.000"
#> [1] "Not an outlier, it's inside the limits"
#> [1] "--------------------------------------------------------------------------------------------"
#> [1] "Checking value in the position 10. It's value is 4.100"
#> [1] "Not an outlier, it's inside the limits"
#> [1] "--------------------------------------------------------------------------------------------"
#> [1] "Checking value in the position 11. It's value is 4.900"
#> [1] "Not an outlier, it's inside the limits"
#> [1] "--------------------------------------------------------------------------------------------"
#> [1] "Checking value in the position 12. It's value is 6.100"
#> [1] "Not an outlier, it's inside the limits"
#> [1] "--------------------------------------------------------------------------------------------"
#> [1] "Checking value in the position 13. It's value is 5.200"
#> [1] "Not an outlier, it's inside the limits"
#> [1] "--------------------------------------------------------------------------------------------"
#> [1] "Checking value in the position 14. It's value is 5.300"
#> [1] "Not an outlier, it's inside the limits"
#> [1] "--------------------------------------------------------------------------------------------"
#> The algorithm has ended
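The five steps above amount to the following base R sketch (hypothetical code, not the package implementation). Note that, as in the trace, the two columns of inputData are flattened into a single vector of 14 values, and the standard deviation uses the population formula (divisor N) from step 3:

```r
inputData = t(matrix(c(3,2,3.5,12,4.7,4.1,5.2,4.9,7.1,6.1,6.2,5.2,14,5.3),2,7,
                     dimnames=list(c("r","d"))));
d = 2;
x = as.vector(inputData);              # flatten both columns into one vector
m = mean(x);                           # step 2: arithmetic mean
s = sqrt(sum((x - m)^2) / length(x));  # step 3: population standard deviation
lowerLimit = m - d * s;                # step 4: interval limits
upperLimit = m + d * s;
print(c(lowerLimit, upperLimit));
#> [1] -0.3915861 12.2915861
print(which(x < lowerLimit | x > upperLimit));  # step 5: positions of the outliers
#> [1] 7
```

Only position 7 (the value 14.0) falls outside the interval, matching the trace above.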