Title: | Calculate and Rectify Moran's I |
---|---|
Description: | Provides a scaling method to obtain a standardized Moran's I measure. Moran's I is a measure of the spatial autocorrelation of a data set: it quantifies the similarity between each datum and its surroundings. In theory this value lies in the range [-1,1], but in practice it often does not. This package scales the Moran's I value, mapping it into the theoretical range of [-1,1]. Once the Moran's I value is rescaled, comparisons between projects become possible. For instance, a researcher can calculate Moran's I for a city in China, with a sample size of n1 and area of interest a1, while another researcher runs a similar experiment in a city in Mexico with a different sample size, n2, and an area of interest a2. Because of the differences between the conditions, the two Moran's I values cannot be compared directly. In this version of the package, the spatial autocorrelation Moran's I is calculated as proposed in Chen (2013) <arXiv:1606.03658>. |
Authors: | Ivan Fuentes, Thomas DeWitt, Thomas Ioerger, Michael Bishop |
Maintainer: | Ivan Fuentes <[email protected]> |
License: | GPL (>= 2) |
Version: | 2.3.0 |
Built: | 2025-02-25 03:30:44 UTC |
Source: | https://github.com/cran/Irescale |
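The functions documented below compose into one rescaling pipeline. The following sketch chains the shipped chen.csv data set through that pipeline; every call is taken from the per-function examples in this manual, assembled here only for orientation:

```r
library(Irescale)

# Load the sample data shipped with the package
fileInput <- system.file("testdata", "chen.csv", package = "Irescale")
input <- loadFile(fileInput)

# Distance matrix and raw Moran's I
distM <- calculateEuclideanDistance(input$data)
I <- calculateMoranI(distM = distM, varOfInterest = input$varOfInterest)

# Null distribution by resampling, then rescale I into [-1, 1]
vI <- resamplingI(distM, input$varOfInterest, n = 1000)
statsVI <- summaryVector(vI)
corrections <- iCorrection(I, vI, scalingUpTo = "Quantile")

# p-value for the rescaled I
pv <- calculatePvalue(corrections$scaledData, corrections$newI,
                      corrections$summaryScaledD$mean)
```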
buildStabilityTable
finds how many iterations are necessary to achieve stability in the resampling method, plotting on a log scale.
buildStabilityTable(data, times = 10, samples = 100, plots = TRUE, scalingUpTo = "Quantile")
data |
data structure obtained after loading the file with loadFile |
times |
the number of times the resampling is repeated |
samples |
size of the resampling method. The default value is 100 |
plots |
whether to draw the significance plot. The default value is TRUE |
scalingUpTo |
the rescaling could be done up to the 0.01% and 99.99% quantile or max and min values. The two possible options are: "MaxMin", or "Quantile". The default value for this parameter is "Quantile" |
A vector with the average I values
fileInput <- system.file("testdata", "chen.csv", package="Irescale")
data <- loadFile(fileInput)
resultsChen<-buildStabilityTable(data=data,times=10,samples=100,plots=TRUE,scalingUpTo="Quantile")
buildStabilityTableForCorrelation
finds how many iterations are necessary to achieve stability in the resampling method, plotting on a log scale.
buildStabilityTableForCorrelation(data, times = 10, samples = 100, plots = TRUE)
data |
data structure obtained after loading the file with loadFile |
times |
the number of times the resampling is repeated |
samples |
size of the resampling method. The default value is 100 |
plots |
whether to draw the significance plot. The default value is TRUE |
A vector with the average I values
fileInput <- system.file("testdata", "chen.csv", package="Irescale")
data <- loadFile(fileInput)
resultsChen<-buildStabilityTableForCorrelation(data=data,times=10,samples=100,plots=TRUE)
calculateDistMatrixFromBoard
calculates the distance matrix when the field is divided in a matrix shape (rows and columns). The board may have a different number of columns in each row.
For example:
1 | 1 | 1 | 1 | 1 | 1 |
2 | 2 | 2 | 2 | 2 | |
3 | 3 | 3 | 3 | 3 | |
4 | 4 | 4 | 4 | 4 | |
The dimension of the obtained square matrix is given by the square of the maximum dimension of the original matrix. In the previous example, the matrix will have a size of (36,36).
calculateDistMatrixFromBoard(data)
data |
is a 2D data structure. |
distM, the distance matrix between each pair of cells.
fileInput <- system.file("testdata", "chessboard.csv", package="Irescale")
data<-loadChessBoard(fileInput)
distM<-calculateDistMatrixFromBoard(data$data)
calculateEuclideanDistance
Computes the Euclidean distance between all pairs of nodes provided in the input vector.
calculateEuclideanDistance(data)
data |
2D data structure for latitude and longitude respectively. |
Computes the Euclidean distance matrix between each pair of points, d(p,q) = sqrt((p_x - q_x)^2 + (p_y - q_y)^2).
Matrix, of size n x n, with the distance between all pairs of points.
fileInput <- system.file("testdata", "chen.csv", package="Irescale")
data<-loadFile(fileInput)
distM<-calculateEuclideanDistance(data$data)
calculateLocalI
calculates the local Moran's I without rescaling
calculateLocalI(z, distM, scaling = TRUE)
z |
vector with the variable of interest |
distM |
distance matrix |
scaling |
whether to scale the variable of interest. The default value is TRUE |
a vector with the local Moran's I
fileInput <- system.file("testdata", "chen.csv", package="Irescale")
input <- loadFile(fileInput)
distM<-calculateEuclideanDistance(input$data)
localI<-calculateLocalI(input$varOfInterest,distM)
calculateManhattanDistance
Calculates the Manhattan distance between each pair of nodes.
calculateManhattanDistance(data)
data |
2D structure with n rows and 2 columns that represent coordinates in a plane. |
Matrix, of size n x n, with the distance between each pair of points.
fileInput <- system.file("testdata", "chessboard.csv", package="Irescale")
data<-loadChessBoard(fileInput)
distM<-calculateManhattanDistance(data$data)
calculateMoranI
Moran's I computing method.
calculateMoranI(distM, varOfInterest, scaling = TRUE)
distM |
the distance matrix. Although the equation asks for a weighted distance matrix, only the distance matrix is required, because this procedure calculates the weighted distance matrix by itself. |
varOfInterest |
the variable of interest to calculate Moran's I. |
scaling |
if the values are previously scaled, set this parameter to FALSE. The default value is TRUE. |
Moran's I
Chen Y (2013). “New approaches for calculating Moran’s index of spatial autocorrelation.” PloS one, 8(7), e68336.
inputFileName<-system.file("testdata", "chen.csv", package="Irescale")
input<-loadFile(inputFileName)
distM<-calculateEuclideanDistance(input$data)
I<-calculateMoranI(distM = distM,varOfInterest = input$varOfInterest)
calculatePvalue
calculates a p-value for the null hypothesis.
calculatePvalue(sample, value, mean)
sample |
the vector that will be used as reference. |
value |
the value of interest. |
mean |
the mean of interest. |
fileInput<-system.file("testdata", "chen.csv", package="Irescale")
input<-loadFile(fileInput)
distM<-calculateEuclideanDistance(input$data)
I<-calculateMoranI(distM = distM,varOfInterest = input$varOfInterest)
vI<-resamplingI(distM, input$varOfInterest, n=1000) # This is the permutation
statsVI<-summaryVector(vI)
corrections<-iCorrection(I,vI)
pv<-calculatePvalue(corrections$scaledData,corrections$newI,corrections$summaryScaledD$mean)
calculateWeightedDistMatrix
The weighted matrix is used as a standardized version of the distance matrix.
calculateWeightedDistMatrix(distM)
distM |
2D matrix with the distance among all pair of coordinates. |
Computes the similarity matrix from the distance matrix by taking the reciprocal of each distance, 1/d. A value of zero is assigned when this value cannot be calculated.
The whole reciprocal matrix is scaled by dividing each value by the sum of all the elements of the matrix.
weighted distance matrix. The sum of this matrix is 1.
fileInput <- system.file("testdata", "chen.csv", package="Irescale")
data<-loadFile(fileInput)
distM<-calculateEuclideanDistance(data$data)
distW<-calculateWeightedDistMatrix(distM)
convexHull
Computes the area and centroid of the convex hull from the (latitude, longitude) vector.
It provides a plot of how the points are dispersed in the field of interest.
convexHull(X, varOfInterest)
X |
dataframe with two columns, latitude and longitude respectively. |
varOfInterest |
variable of interest to plot. This variable is needed to color the points on the convexhull. |
Consideration for this function:
It makes usage of chull from rgeos and Polygon from graphics.
The centroid of the polygon is calculated by averaging the vertices of it.
The shown plot uses the basic plot command.
A vector with two elements: the first element is the area and the second one is the centroid.
The centroid is a list of two elements, latitude and longitude, that represents the centroid.
fileInput <- system.file("testdata", "chen.csv", package="Irescale")
data<-loadFile(fileInput)
area_centroid<-convexHull(data$data,data$varOfInterest)
coor
Transforms an (x, y) position in a Cartesian plane into a position in a 1D array.
coor(i, j, size)
i |
the value of the row. |
j |
the value of the column. |
size |
the maximum between row and columns of the matrix. |
an integer value that represents the position in the array.
pos<-coor(1,1,10)
expectedValueI
Calculates the expected value for local I
expectedValueI(W)
W |
Weighted Distance Matrix. |
Expected Value
W<-matrix(1:100,nrow=10,ncol=10)
evI<-expectedValueI(W)
iCorrection
Consists in centering the I value (I - median) and scaling by the difference between the median and the 1st or 99th quantile.
iCorrection(I, vI, statsVI, scalingUpTo = "Quantile", sd = 1)
I |
Moran's I. It can be computed using the calculateMoranI function. |
vI |
the vector obtained by resamplingI. |
statsVI |
the statistic vector obtained from summaryVector. |
scalingUpTo |
the rescaling could be done up to the 0.01% and 99.9% quantile or max and min values. The two possible options are: "MaxMin", or "Quantile". The default value for this parameter is Quantile. |
sd |
the number of standard deviations up to which I is scaled |
rescaled I
inputFileName<-system.file("testdata", "chen.csv", package="Irescale")
input<-loadFile(inputFileName)
distM<-calculateEuclideanDistance(input$data)
I<-calculateMoranI(distM = distM,varOfInterest = input$varOfInterest)
vI<-resamplingI(distM, input$varOfInterest)
statsVI<-summaryVector(vI)
corrections<-iCorrection(I,vI,scalingUpTo="Quantile")
ItoPearsonCorrelation
Calculates the null distribution of I and determines the percentile of the observed value of I;
it then applies the inverse of the normal distribution (qnorm) to obtain the value of r to which this percentile corresponds.
ItoPearsonCorrelation(vI, n, medianCenter = TRUE)
vI |
the vector obtained by resamplingI. |
n |
sample size |
medianCenter |
whether to center all the values at the median. The default value is TRUE |
a list with r correlation equivalence and the rectified vector
fileInput <- system.file("testdata", "chen.csv", package="Irescale")
data <- loadFile(fileInput)
distM<-calculateEuclideanDistance(data$data)
vI<-resamplingI(distM,data$varOfInterest,n = 1000)
rectifiedI<- ItoPearsonCorrelation(vI, length(data$varOfInterest))
loadChessBoard
is used when the input file has a 2D (board) shape and there is only one variable of interest.
For example:
1 | 1 | 1 | 1 | 1 | 1 |
2 | 2 | 2 | 2 | 2 | |
3 | 3 | 3 | 3 | 3 | |
4 | 4 | 4 | 4 | 4 | |
loadChessBoard(fileName)
fileName |
the path and file's name to load. |
A data frame with two variables: the first is a vector with coordinates x (latitude) and y (longitude); the second contains the values of the variable of interest.
fileInput <- system.file("testdata", "chessboard.csv", package="Irescale")
data<-loadChessBoard(fileInput)
LoadDistanceMatrix
Loads a distance matrix from a file, avoiding computing it from latitude and longitude.
loadDistanceMatrix(fileName, colnames = TRUE, rownames = TRUE)
fileName |
file's name and path to the file |
colnames |
whether the first row of the file contains the column names. The default value is TRUE |
rownames |
whether the first column contains the row names. The default value is TRUE |
The distance matrix
fileInput <- system.file("testdata", "chenDistance.csv", package="Irescale")
distM<-loadDistanceMatrix(fileInput)
loadFile
loads the input file with the following format:
Column 1 represents the sample id. It has to be unique.
Column 2,3 Lat/Long respectively.
Column 4 and beyond the variables of interest.
loadFile(fileName)
fileName |
the file's name and path. |
It returns a data frame with two variables, data and varOfInterest. The variable data is a 2D list with the latitude and longitude respectively, while the variable varOfInterest is a matrix with all the variables to calculate and rescale Moran's I.
fileInput <- system.file("testdata", "chessboard.csv", package="Irescale")
data<-loadFile(fileInput)
loadSatelliteImage
Loads a satellite image in PNG format. Regardless of the number of channels, the image is returned in grayscale (one channel).
loadSatelliteImage(fileName)
fileName |
file's name and path to the file |
A cimg object in grayscale.
fileInput <- system.file("testdata", "imageGray.png", package="Irescale")
img<-loadSatelliteImage(fileInput)
localICorrection
Consists in centering the local I value (I - median) and scaling by the difference between the median and the 1st or 99th quantile.
localICorrection(localI, vI, statsVI, scalingUpTo = "Quantile")
localI |
Local Moran's I. It can be computed using the calculateLocalI function. |
vI |
the vector obtained by resamplingI. |
statsVI |
the statistic vector obtained from summaryLocalIVector. |
scalingUpTo |
the rescaling could be done up to the 0.01% and 99.9% quantile or max and min values. The two possible options are: "MaxMin", or "Quantile". The default value for this parameter is Quantile. |
rescaled local I vector
inputFileName<-system.file("testdata", "chen.csv", package="Irescale")
input<-loadFile(inputFileName)
distM<-calculateEuclideanDistance(input$data)
localI <- calculateLocalI(input$varOfInterest,distM)
vI<-resamplingLocalI(input$varOfInterest,distM)
statsVI<-summaryLocalIVector(vI)
corrections<-localICorrection(localI,vI,scalingUpTo="Quantile")
nullDristribution
Calculates a linear regression between the variable of interest and latitude, longitude and latitude*longitude, and computes the residuals of this fit.
The variable of interest is shuffled numReplicates times and, each time, the linear regression and its residuals are recalculated.
At each iteration the correlation between the original residuals and the shuffled residuals is calculated.
This vector of correlations is returned and plotted as a histogram.
nullDristribution(data, numReplicates)
data |
the data structure obtained from loadFile. |
numReplicates |
the number of times the variable of interest is shuffled. |
Histogram and the vector of correlations between residuals
inputFileName<-system.file("testdata", "chen.csv", package="Irescale")
input<-loadFile(inputFileName)
c<-nullDristribution(input,1000)
plotHistogramOverlayCorrelation
Overlays the histogram and the theoretical normal distribution.
plotHistogramOverlayCorrelation(originalVec, vec, I, n, bins = 50, main = "Histogram")
originalVec |
The original vector of I, it should be sorted. |
vec |
the vector to plot. |
I |
the value of I to plot |
n |
number of observations in the sample. |
bins |
the number of bins for the histogram. The default value is 50. |
main |
the title of the histogram. The default value is "Histogram". |
inputFileName<-system.file("testdata", "chen.csv", package="Irescale")
input<-loadFile(inputFileName)
distM<-calculateEuclideanDistance(input$data)
I<-calculateMoranI(distM = distM,varOfInterest = input$varOfInterest)
originalI<-resamplingI(distM, input$varOfInterest)
correlationI<-ItoPearsonCorrelation(originalI,length(input$varOfInterest))
plotHistogramOverlayCorrelation(originalI,correlationI,I,length(input$varOfInterest))
plotHistogramOverlayNormal
Overlays the histogram and the theoretical normal distribution.
plotHistogramOverlayNormal(vec, stats, bins = 50, main = "Histogram")
vec |
the vector to plot. |
stats |
the stats obtained from summaryVector. |
bins |
the number of bins for the histogram. The default value is 50. |
main |
the title of the histogram. The default value is "Histogram". |
inputFileName<-system.file("testdata", "chen.csv", package="Irescale")
input<-loadFile(inputFileName)
distM<-calculateEuclideanDistance(input$data)
I<-calculateMoranI(distM = distM,varOfInterest = input$varOfInterest)
vI<-resamplingI(distM, input$varOfInterest)
statsVI<-summaryVector(vI)
plotHistogramOverlayNormal(vI,statsVI)
procrustes
Procrustes distance between two surfaces. The Procrustes distance is used to quantify the similarity or dissimilarity of (3-dimensional) shapes, and is extensively used in biological morphometrics.
procrustes(U, V)
U |
Vector of the first surface. |
V |
Vector of the second surface. |
Procrustes distance
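No example ships with this entry; the following is a minimal usage sketch. The two random 10 x 10 surfaces are illustrative assumptions, not a requirement of the function:

```r
library(Irescale)

# Two random 10 x 10 surfaces (illustrative sizes, assumed valid input)
U <- matrix(runif(100), nrow = 10, ncol = 10)
V <- matrix(runif(100), nrow = 10, ncol = 10)

d <- procrustes(U, V)  # Procrustes distance between the two surfaces
```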
rectifyIrho
It executes the whole rectification using the theoretical r distribution for all the measurements in the csv file.
It plots the histogram with the theoretical distribution.
It plots the convexHull for each variable.
It calculates the area and centroid of the convex hull for each variable.
It calculates the I and rescales it for every variable.
It returns an object with the computations.
rectifyIrho(data, samples = 10000)
data |
the data frame obtained from loadFile. |
samples |
number of permutations for the resampling method. |
An object with I, rescaleI and statistic summary for the inputs without scaling, the same statistics after scaling them, the p-value and the convexhull information
fileInput <- system.file("testdata", "chen.csv", package="Irescale")
data <- loadFile(fileInput)
rectifiedI<-rectifyIrho(data,100)
resamplingI
Permutes the values of the variable of interest n-1 times to calculate a null distribution for I. It is done n-1 times because one ordering is the original one, which is included to make sure it is part of the distribution.
resamplingI(distM, varOfInterest, n = 1000, scaling = TRUE)
distM |
the distance matrix. Although the equation requires a weighted distance matrix, only the distance matrix is needed; this procedure calculates the weighted distance matrix by itself. |
varOfInterest |
the name or position of the variable for which the spatial autocorrelation is calculated. |
n |
number of permutations. The default value is 1000 |
scaling |
if the values are previously scaled, set this parameter to FALSE. The default value is TRUE. |
A vector with the n calculated Moran's I.
inputFileName<-system.file("testdata", "chen.csv", package="Irescale")
input<-loadFile(inputFileName)
distM<-calculateEuclideanDistance(input$data)
I<-calculateMoranI(distM = distM,varOfInterest = input$varOfInterest)
vI<-resamplingI(distM, input$varOfInterest)
resamplingLocalI
Permutes the values of the variable of interest n-1 times to calculate a null distribution for I. It is done n-1 times because one ordering is the original one, which is included to make sure it is part of the distribution.
resamplingLocalI(varOfInterest, distM, n = 1000, scaling = TRUE)
varOfInterest |
the name or position of the variable for which the spatial autocorrelation is calculated. |
distM |
the distance matrix. Although the equation requires a weighted distance matrix, only the distance matrix is needed; this procedure calculates the weighted distance matrix by itself. |
n |
number of permutations. The default value is 1000 |
scaling |
if the values are previously scaled, set this parameter to FALSE. The default value is TRUE. |
A vector with the n calculated Local Moran's I.
inputFileName<-system.file("testdata", "chen.csv", package="Irescale")
input<-loadFile(inputFileName)
distM<-calculateEuclideanDistance(input$data)
vI<-resamplingLocalI(input$varOfInterest,distM,n=100)
rescaleI
It executes the whole analysis for all the measurements in the field.
It plots the histogram with the theoretical distribution.
It plots the convexHull for each variable.
It calculates the area and centroid of the convex hull for each variable.
It calculates the I and rescales it for every variable.
It returns an object with the computations.
rescaleI(data, samples = 10000, scalingUpTo = "Quantile", sd = 1)
data |
the data frame obtained from loadFile. |
samples |
number of permutations for the resampling method. |
scalingUpTo |
the rescaling could be done up to the 0.01% and 99.99% quantile or max and min values. The two possible options are: "MaxMin", or "Quantile". The default value for this parameter is "Quantile" |
sd |
the number of standard deviations up to which I is scaled |
An object with I, rescaleI and statistic summary for the inputs without scaling, the same statistics after scaling them, the p-value and the convexhull information
fileInput <- system.file("testdata", "chen.csv", package="Irescale")
data <- loadFile(fileInput)
scaledI<-rescaleI(data,100)
saveFile
Saves a csv report with the following columns: Convex Hull Area, Convex Hull Centroid X, Convex Hull Centroid Y, Sample Size, Ichen, Iscaled, pvalue, Mean, MeanScaled, STD DEV, SDScaled, Q_1%, Q_1%Scaled, Q_99%, Q_99%Scaled, Max, Max_Scaled, Min, Min_Scaled, Skew, Skew_Scaled, Kutorsis, Kutorsis_Scaled.
saveFile(fileName, results)
fileName |
the name of the file with the path where the CSV file will be saved. |
results |
is the vector obtained from running the rescaling process over all the variables of interest. |
fileInput <- system.file("testdata", "chen.csv", package="Irescale")
data <- loadFile(fileInput)
scaledI<-rescaleI(data,1000)
fn = file.path(tempdir(),"output.csv",fsep = .Platform$file.sep)
saveFile(fn,scaledI)
if (file.exists(fn)){ file.remove(fn) }
standardize
Calculates the z-values of the input vector.
standardize(vectorI, W)
vectorI |
vector to be standardized. |
W |
weighted distance matrix |
z values
W<-matrix(runif(100, min=0, max=1),nrow=10,ncol=10)
vectorI<-runif(10, min=0, max=1)
standardize(vectorI,W)
standardizedByColumn
Scales each column of the matrix independently.
standardizedByColumn(M)
M |
Matrix to be scaled by column. |
a matrix scaled by column.
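No example ships with this entry; a minimal usage sketch, where the matrix size is an illustrative assumption:

```r
library(Irescale)

# A 5 x 3 matrix; each column is scaled independently
M <- matrix(rnorm(15), nrow = 5, ncol = 3)
S <- standardizedByColumn(M)
```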
summaryLocalIVector
Calculates basic statistics of the received matrix: mean, standard deviation, maximum, minimum, 0.1% and 99.9% quantiles and median.
summaryLocalIVector(vec)
vec |
the vector to calculate the summary. |
a list with mean, standard deviation, maximum, minimum, 0.1% and 99.9% quantile and median of the received vector.
inputFileName<-system.file("testdata", "chen.csv", package="Irescale")
input<-loadFile(inputFileName)
distM<-calculateEuclideanDistance(input$data)
vI<-resamplingLocalI(input$varOfInterest,distM)
statsVI<-summaryLocalIVector(vI)
summaryVector
Calculates basic statistics of the received vector: mean, standard deviation, maximum, minimum, 0.1% and 99.9% quantiles and median.
summaryVector(vec)
vec |
the vector to calculate the summary. |
a list with mean, standard deviation, maximum, minimum, 0.1% and 99.9% quantile and median of the received vector.
inputFileName<-system.file("testdata", "chen.csv", package="Irescale")
input<-loadFile(inputFileName)
distM<-calculateEuclideanDistance(input$data)
I<-calculateMoranI(distM = distM,varOfInterest = input$varOfInterest)
vI<-resamplingI(distM, input$varOfInterest)
statsVI<-summaryVector(vI)
transformImageToList
transforms the image into a list with two variables, data and varOfInterest, which are the identifiers needed to execute the rectification.
transformImageToList(im)
im |
cimg object. |
fileInput <- system.file("testdata", "imageGray.png", package="Irescale")
img<-loadSatelliteImage(fileInput)
data<-transformImageToList(img)
transformImageToMatrix
transforms the image into a 2D matrix.
transformImageToMatrix(im)
im |
cimg object. |
fileInput <- system.file("testdata", "imageGray.png", package="Irescale")
img<-loadSatelliteImage(fileInput)
data<-transformImageToMatrix(img)