CP Tensor
Operations related to the CP decomposition
Functions
#
SAIGtensor.initCptensor
— Function.
initCptensor(N, I, R, lambda, A)
create a cptensor object
Arguments
N
: Int64, number of dimensions
I
: Array{Int64}, size of each dimension
R
: Int64, rank of tensor
lambda
: weights of the rank-one components
A
: Array{Array{Float64,2}}, basis (factor matrices)
Returns
X
: CP tensor
Example
julia> N = 5; I = [23, 42, 13, 14, 17];
julia> R = 10; lambda = rand(R);
julia> A = Array{Array{Float64,2}}(undef, N);
julia> for m = 1 : N
           A[m] = rand(I[m], R)
       end
julia> X = initCptensor(N, I, R, lambda, A);
#
SAIGtensor.cpnorm
— Function.
l = cpnorm(X)
an efficient way of computing the Frobenius norm of a CP tensor
Arguments
X
: cptensor
Returns
l
: the Frobenius norm of X
Example
julia> I = [21, 32, 43, 24]; R = 3;
julia> N = length(I); lambda = rand(R);
julia> A = Array{Array{Float64,2}}(undef, N);
julia> for m = 1 : N
           A[m] = rand(I[m], R)
       end
julia> X = initCptensor(N, I, R, lambda, A); l = cpnorm(X);
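The efficient computation typically avoids forming the full tensor by using the identity ‖X‖² = lambda' × (A[1]'A[1] .* ... .* A[N]'A[N]) × lambda. A minimal self-contained sketch of that identity (`cpnorm_sketch` is a hypothetical name, not the package function), verified against the explicitly built two-way case:

```julia
using LinearAlgebra

# Sketch of the efficient CP norm: Hadamard product of the factor Gram
# matrices, sandwiched between the weight vector lambda.
function cpnorm_sketch(lambda::Vector{Float64}, A::Vector{Matrix{Float64}})
    R = length(lambda)
    V = ones(R, R)
    for An in A
        V .*= An' * An          # accumulate Gram matrices elementwise
    end
    return sqrt(abs(lambda' * V * lambda))
end

# Two-way case: the CP tensor is just A[1] * Diagonal(lambda) * A[2]'
lambda = [0.5, 2.0]
A = [rand(4, 2), rand(3, 2)]
Xfull = A[1] * Diagonal(lambda) * A[2]'
```

The cost is O(sum_n I[n] R²) instead of O(prod(I)), which is what makes `cpnorm` cheap for large tensors.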
#
SAIGtensor.fitness
— Function.
f = fitness(X, Y)
compute the relative fitness of a low-rank cptensor to a full tensor
Arguments
X
: cptensor
Y
: tensor
Returns
f
: relative fitness
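A common definition of relative fitness is f = 1 − ‖Y − X‖ / ‖Y‖, so a perfect fit gives 1. A hedged one-line sketch under that assumption (`fitness_sketch` is an illustrative name; the package's exact convention may differ):

```julia
using LinearAlgebra

# Relative fitness of an approximation X to the reference tensor Y:
# 1 for a perfect fit, 0 when X carries no information about Y.
fitness_sketch(X::AbstractArray, Y::AbstractArray) = 1 - norm(Y .- X) / norm(Y)

Y = rand(3, 4, 5)
```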
#
SAIGtensor.cptfirst
— Function.
cptfirst(I)
decide which factor should be updated first; the updating order is ns, ns-1:-1:1, ns+1, ns+2:N
Arguments
I
: Array{Int64,1}, dimensions of tensor
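Given the split dimension ns, the stated order can be generated in one line (a sketch; `update_order` is a hypothetical helper, not part of the package):

```julia
# Visit factor ns first, then walk down to 1, then up from ns+1 to N.
update_order(ns::Int, N::Int) = vcat(ns, ns-1:-1:1, ns+1:N)

update_order(3, 5)  # [3, 2, 1, 4, 5]
```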
#
SAIGtensor.cptRight
— Function.
cptRight(X, A, n)
compute X × A[n+1] × A[n+2] × ... × A[N]
Arguments
X
: tensor
A
: Array{Array{Float64,2}}, Array of factor matrices
n
: Int64
Returns
Rn
: Array{Float64,2}
When factors are applied successively, Rn is computed efficiently based on the property X × A[n+1] × A[n+2] × ... × A[N] = (X × A[n+2] × ... × A[N]) × A[n+1]
Rnp1
: Array{Float64,2} with size prod(I[1:n+1]) * R
Anp1
: Array{Array{Float64,2}} with size I[n+1] * R
I
: Array{Int64}
n
: Int64
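For intuition, the right part can also be written as a matricization of X times a Khatri-Rao product of the trailing factors. A self-contained sketch for a 3-way tensor and n = 1 (`khatrirao` and the variable names are mine, not the package's; the package's recursion computes the same quantity more cheaply):

```julia
using LinearAlgebra

# Columnwise Kronecker (Khatri-Rao) product of two factor matrices.
khatrirao(B, C) = reduce(hcat, [kron(B[:, r], C[:, r]) for r in 1:size(B, 2)])

dims = (3, 4, 5); R = 2
X = rand(dims...)
A = [rand(dims[m], R) for m in 1:3]

# Right part for n = 1: contract modes 2 and 3 with A[2] and A[3].
# Column r of Rn is sum_{i2,i3} X[:, i2, i3] * A[2][i2, r] * A[3][i3, r].
Rn = reshape(X, dims[1], :) * khatrirao(A[3], A[2])   # size dims[1] × R
```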
#
SAIGtensor.cptLeft
— Function.
cptLeft(X, A, n)
compute X × A[1] × A[2] × ... × A[n-1]
Arguments
X
: tensorA
: Array{Array{Float64,2}}, Array of factor matrixn
: Int64
Returns
Ln
: Array{Float64,2}
Ln is computed efficiently based on the property X × A[1] × ... × A[n-1] × A[n] = (X × A[1] × ... × A[n-1]) × A[n]
Lnm1
: Array{Float64,2} with size prod(I[n-1:N]) * R
Anm1
: Array{Array{Float64,2}} with size I[n-1] * R
I
: Array{Int64}
n
: Int64
#
SAIGtensor.cp_gradient
— Function.
cp_gradient(Z, A, I, n)
when dir="L", it computes X × A[-n] from X × A[n+1] × ... × A[N]; when dir="R", it computes X × A[-n] from X × A[1] × ... × A[n-1]
Arguments
Z
: Array{Float64,2}
A
: Array{Array{Float64,2}}, Array of factor matrices
I
: Array{Int64,1}, dimensions of tensor
n
: Int64
Returns
G
: Array{Float64,2} with size I[n] × R
#
SAIGtensor.ttf
— Function.
ttf(X, A, n)
efficient way of computing the tensor times all factor matrices except the nth
Arguments
X
: tensor
A
: Array{Array{Float64,2}}, Array of factor matrices
n
: Int64, nth factor
Returns
G
: Array{Float64,2} with size I[n] × R
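This operation is the matricized tensor times Khatri-Rao product (MTTKRP) at the heart of CP-ALS. A hedged self-contained sketch for a 3-way tensor and n = 2 (helper names are illustrative; the package's `ttf` computes the same result via the left/right recursions above):

```julia
using LinearAlgebra

# Columnwise Kronecker (Khatri-Rao) product of two factor matrices.
khatrirao(B, C) = reduce(hcat, [kron(B[:, r], C[:, r]) for r in 1:size(B, 2)])

dims = (3, 4, 5); R = 2
X = rand(dims...)
A = [rand(dims[m], R) for m in 1:3]

n = 2
# Mode-2 unfolding: rows indexed by i2, columns by (i1, i3) in column-major order.
Xn = reshape(permutedims(X, (2, 1, 3)), dims[n], :)
# G[i2, r] = sum_{i1,i3} X[i1, i2, i3] * A[1][i1, r] * A[3][i3, r]
G = Xn * khatrirao(A[3], A[1])                        # size dims[n] × R
```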
#
SAIGtensor.ttf_slow
— Function.
ttf_slow(X, A, n)
tensor times all factor matrices except the nth factor; this method is deprecated
Arguments
X
: tensor
A
: Array{Array{Float64,2}}, Array of factor matrices
n
: Int64, nth factor
Returns
G
: Array{Float64,2} with size I[n] × R
#
SAIGtensor.cpnormalize!
— Function.
cpnormalize!(lambda, A)
normalize the factor matrices for CP decomposition and absorb the column norms into the weights lambda
Arguments
lambda
: Array{Float64,1}, non-sorted weights
A
: Array{Array{Float64,2}}, Array of factor matrices
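The normalization leaves the represented tensor unchanged: each factor column is scaled to unit norm and the scale is folded into lambda. A sketch under that assumption (`cpnormalize_sketch!` is a hypothetical name, not the package function):

```julia
using LinearAlgebra

# Scale every factor column to unit 2-norm, multiplying the corresponding
# weight lambda[r] by the removed norm so the CP tensor is unchanged.
function cpnormalize_sketch!(lambda::Vector{Float64}, A::Vector{Matrix{Float64}})
    for An in A
        for r in eachindex(lambda)
            s = norm(view(An, :, r))
            An[:, r] ./= s
            lambda[r] *= s
        end
    end
    return lambda, A
end
```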
#
SAIGtensor.cp2tensor
— Function.
cp2tensor(X)
convert a 4D cptensor to a full tensor
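The conversion just expands X[i1,...,iN] = sum_r lambda[r] * A[1][i1,r] * ... * A[N][iN,r]. A dimension-generic sketch (`cp2tensor_sketch` is an illustrative name; the package version is specialized to 4D):

```julia
using LinearAlgebra

# Accumulate each rank-one component as a vectorized outer product:
# kron builds the column-major vec of the outer product, mode 1 fastest.
function cp2tensor_sketch(lambda, A)
    dims = Tuple(size(An, 1) for An in A)
    X = zeros(dims)
    for r in eachindex(lambda)
        v = lambda[r] * A[1][:, r]
        for m in 2:length(A)
            v = kron(A[m][:, r], v)
        end
        X .+= reshape(v, dims)
    end
    return X
end
```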
#
SAIGtensor.sidx2sub
— Function.
sidx2sub!(I, s)
convert linear indices to subscript indices
Arguments
I
: Array{Int64,1}, dimensions
s
: Array{Int64,1}, linear indices
Returns
sidx
: Array{Tuple,1} with length(I)
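The same conversion is available in base Julia via `CartesianIndices`, which makes the intended mapping easy to check (the variable names below are illustrative):

```julia
# Column-major linear-to-subscript conversion: mode 1 varies fastest.
dims = (3, 4, 5)
s = [1, 7, 60]
sidx = [Tuple(CartesianIndices(dims)[k]) for k in s]
# sidx[2] == (1, 3, 1) because 7 = 1 + 3*(3-1)
```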
#
SAIGtensor.randSpl!
— Function.
randSpl!(Zs, Xs, A, T, ns, n)
randomly sample certain rows from Z[n]
Arguments
Zs
: Array{Float64,2}, sampled Z[n]
Xst
: Array{Array{Float64,2},1}, transpose of sampled X[n]
A
: Array{Array{Float64,2},1}, factor matrices
ns
: Int64, number of samples
n
: Int64, specifies which dimension
#
SAIGtensor.formXst!
— Function.
randomly sample X[n]^T
#
SAIGtensor.formZs!
— Function.
randomly sample Z[n]
#
SAIGtensor.updateAn!
— Function.
updateAn!(A, AtA, Gn, n)
update the nth entries in A and AtA
Arguments
A
: Array{Array{Float64,2}}, Array of factor matrices
AtA
: Array{Array{Float64,2}}, Array of A[n]'×A[n]
Gn
: Array{Float64,2}, Xn×(⊗A[k]) for all k != n
n
: Int64, nth factor
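This is the standard CP-ALS step: with V the Hadamard product of the Gram matrices AtA[k] for k ≠ n, the new factor solves A[n] × V = Gn, after which AtA[n] is refreshed. A hedged sketch of that update (`updateAn_sketch!` is a hypothetical name; the package implementation may handle rank deficiency differently):

```julia
using LinearAlgebra

# ALS update for the nth factor: solve A[n] from the MTTKRP result Gn and
# the Hadamard product of the Gram matrices of the other factors.
function updateAn_sketch!(A, AtA, Gn, n)
    V = ones(size(AtA[1]))
    for k in eachindex(A)
        k == n && continue
        V .*= AtA[k]              # Hadamard product of A[k]'A[k], k != n
    end
    A[n] = Gn / V                 # least-squares solve of A[n] * V = Gn
    AtA[n] = A[n]' * A[n]         # keep the cached Gram matrix consistent
    return A, AtA
end
```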