CP Tensor

Operations related to the CP decomposition of a tensor

Functions

# SAIGtensor.initCptensor - Function

initCptensor(N, I, R, lambda, A)

create a cptensor object

Arguments

  • N: Int64, number of dimensions
  • I: Array{Int64}, size of each dimension
  • R: Int64, rank of the tensor
  • lambda: Array{Float64,1}, weights of the rank-one components
  • A: Array{Array{Float64,2}}, factor matrices

Returns

  • X: CP tensor

Example

julia> N = 5; I = [23, 42, 13, 14, 17];
julia> R = 10; lambda = rand(R);
julia> A = Array{Array{Float64,2}}(undef, N);
julia> for m = 1 : N
           A[m] = rand(I[m], R)
       end
julia> X = initCptensor(N, I, R, lambda, A);

source

# SAIGtensor.cpnorm - Function

l = cpnorm(X)

efficient way of computing the Frobenius norm of a CP tensor

Arguments

  • X: cptensor

Returns

  • l: the Frobenius norm of X

Example

julia> I = [21, 32, 43, 24]; R = 3;
julia> N = length(I); lambda = rand(R);
julia> A = Array{Array{Float64,2}}(undef, N);
julia> for m = 1 : N
           A[m] = rand(I[m], R)
       end
julia> X = initCptensor(N, I, R, lambda, A); l = cpnorm(X);

source
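
The efficiency comes from never forming the dense tensor: for weights lambda and factor matrices A, the squared Frobenius norm equals lambda' * (A[1]'A[1] .* ... .* A[N]'A[N]) * lambda, the Hadamard product of the small R × R Gram matrices. A minimal sketch of that identity, written against the raw fields rather than the cptensor type (the name cpnorm_sketch is illustrative, not part of SAIGtensor):

using LinearAlgebra

# Sketch only: ||X||^2 = lambda' * (Hadamard product of the R x R Gram matrices) * lambda
function cpnorm_sketch(lambda::Vector{Float64}, A::Vector{Matrix{Float64}})
    R = length(lambda)
    V = ones(R, R)
    for An in A
        V .*= An' * An            # each Gram matrix costs O(I[n] * R^2)
    end
    return sqrt(abs(dot(lambda, V * lambda)))
end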

# SAIGtensor.fitness - Function

f = fitness(X, Y)

compute the relative fitness of a low-rank cptensor to a full tensor

Arguments

  • X: cptensor
  • Y: tensor

Returns

  • f: relative fitness

source
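
One common definition of the relative fit is f = 1 - ||Y - X||_F / ||Y||_F; whether SAIGtensor uses exactly this formula is an assumption. A dense-array sketch (relative_fit is an illustrative name; in practice the difference norm can be expanded so the low-rank X never has to be formed):

using LinearAlgebra

# Sketch only: relative fit between a reconstruction Xfull (e.g. the dense
# tensor behind a cptensor) and the data tensor Y.
relative_fit(Xfull::AbstractArray, Y::AbstractArray) = 1 - norm(Y .- Xfull) / norm(Y)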

# SAIGtensor.cptfirst - Function

cptfirst(I)

decide which factor should be updated first; the updating order is ns, ns-1:-1:1, ns+1, ns+2:N

Arguments

  • I : Array{Int64,1}, dimensions of tensor

source
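
Read literally, the order starts at mode ns, sweeps down to mode 1, and then sweeps up from ns+1 to N. A minimal sketch of that ordering given ns (how cptfirst picks ns from I is not shown here; the helper name is illustrative):

# Sketch only: the update order ns, ns-1, ..., 1, ns+1, ..., N
update_order(N::Int, ns::Int) = vcat(ns, ns-1:-1:1, ns+1:N)

# e.g. update_order(5, 3) == [3, 2, 1, 4, 5]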

# SAIGtensor.cptRight - Function

cptRight(X, A, n)

compute X × A[n+1] × A[n+2] × ... × A[N]

Arguments

  • X: tensor
  • A: Array{Array{Float64,2}}, Array of factor matrix
  • n: Int64

Returns

  • Rn : Array{Float64,2}

source

When the factor matrices are applied one at a time, Rn can be computed efficiently from Rn+1 using the property X × A[n+1] × A[n+2] × ... × A[N] = (X × A[n+2] × ... × A[N]) × A[n+1]

  • Rnp1: Array{Float64,2} with size prod(I[1:n+1]) × R
  • Anp1: Array{Float64,2} with size I[n+1] × R
  • I : Array{Int64}
  • n : Int64

source
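
A minimal sketch of this recursion, assuming the partial product Rn+1 is flattened column-major (mode 1 fastest) as prod(I[1:n+1]) × R; the name and exact signature are illustrative, not the package's cptRight:

# Sketch only: form R_n from R_{n+1} by contracting mode n+1, one component at a time.
function cpt_right_step(Rnp1::Matrix{Float64}, Anp1::Matrix{Float64},
                        I::Vector{Int}, n::Int)
    R = size(Rnp1, 2)
    rows = prod(I[1:n])                              # rows of the new partial product
    Rn = Matrix{Float64}(undef, rows, R)
    for r in 1:R
        M = reshape(view(Rnp1, :, r), rows, I[n+1])  # modes 1..n as rows, mode n+1 as columns
        Rn[:, r] = M * view(Anp1, :, r)              # contract mode n+1 with the r-th factor column
    end
    return Rn
end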

# SAIGtensor.cptLeft - Function

cptLeft(X, A, n)

compute X × A[1] × A[2] × ... × A[n-1]

Arguments

  • X : tensor
  • A: Array{Array{Float64,2}}, Array of factor matrix
  • n : Int64

Returns

  • Ln : Array{Float64,2}

source

efficient way to compute Ln based on the property X × A[1] × ... × A[n-1] × A[n] = (X × A[1] × ... × A[n-1]) × A[n]

  • Lnm1: Array{Float64,2} with size prod(I[n-1:N]) × R
  • Anm1: Array{Float64,2} with size I[n-1] × R
  • I : Array{Int64}
  • n : Int64

source
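
This is the mirror image of the cptRight recursion. A minimal sketch, assuming Ln-1 is flattened over the uncontracted modes n-1..N with mode n-1 fastest (names are illustrative):

# Sketch only: form L_n from L_{n-1} by contracting mode n-1, one component at a time.
function cpt_left_step(Lnm1::Matrix{Float64}, Anm1::Matrix{Float64},
                       I::Vector{Int}, n::Int)
    R = size(Lnm1, 2)
    rows = prod(I[n:end])                              # rows of the new partial product
    Ln = Matrix{Float64}(undef, rows, R)
    for r in 1:R
        M = reshape(view(Lnm1, :, r), I[n-1], rows)    # mode n-1 as rows, modes n..N as columns
        Ln[:, r] = M' * view(Anm1, :, r)               # contract mode n-1 with the r-th factor column
    end
    return Ln
end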

# SAIGtensor.cp_gradient - Function

cp_gradient(Z, A, I, n)

when dir="L", it computes X × A[-n] from X × A[n+1] × ... × A[N]; when dir="R", it computes X × A[-n] from X × A[1] × ... × A[n-1]

Arguments

  • Z: Array{Float64,2}
  • A: Array{Array{Float64,2}}, Array of factor matrix
  • I: Array{Int64,1}, dimension of tensor
  • n: Int64.

Returns

  • G : Array{Float64,2} with size I[n] × R

source
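
A hedged sketch of the dir="L" case: finish the mode-n gradient from the right partial product Z = X × A[n+1] × ... × A[N] (size prod(I[1:n]) × R) by contracting the remaining modes 1..n-1, assuming the same column-major flattening as above (the function name is illustrative):

using LinearAlgebra

# Sketch only: G[:, r] is Z[:, r] contracted with the r-th columns of A[1], ..., A[n-1].
function cp_gradient_sketch(Z::Matrix{Float64}, A::Vector{Matrix{Float64}},
                            I::Vector{Int}, n::Int)
    R = size(Z, 2)
    G = Matrix{Float64}(undef, I[n], R)
    left = prod(I[1:n-1])                        # 1 when n == 1
    for r in 1:R
        M = reshape(view(Z, :, r), left, I[n])   # modes 1..n-1 as rows, mode n as columns
        # Kronecker vector of the leading factor columns (mode 1 varies fastest)
        w = n == 1 ? ones(1) : reduce(kron, (A[k][:, r] for k in n-1:-1:1))
        G[:, r] = M' * w
    end
    return G
end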

# SAIGtensor.ttf - Function

ttf(X, A, n)

efficient way of computing the tensor times all factor matrices except the nth factor

Arguments

  • X: tensor
  • A: Array{Array{Float64,2}}, Array of factor matrix
  • n: Int64, nth factor

Returns

  • G : Array{Float64,2} with size I[n] × R

source
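
A hedged sketch of one way to gain efficiency here: contract one mode at a time with matrix-vector products instead of materializing a Khatri-Rao matrix. The name is illustrative, and the package presumably goes further by sharing the partial products documented for cptRight, cptLeft and cp_gradient across modes.

# Sketch only: per component r, contract the trailing modes N..n+1 and then the
# leading modes 1..n-1 against the r-th factor columns, leaving a length-I[n] vector.
function ttf_sketch(X::Array{Float64}, A::Vector{Matrix{Float64}}, n::Int)
    I = collect(size(X)); N = length(I); R = size(A[1], 2)
    G = Matrix{Float64}(undef, I[n], R)
    for r in 1:R
        v = vec(X)
        for k in N:-1:n+1
            v = reshape(v, :, I[k]) * A[k][:, r]     # contract mode k (slowest remaining index)
        end
        for k in 1:n-1
            v = reshape(v, I[k], :)' * A[k][:, r]    # contract mode k (fastest remaining index)
        end
        G[:, r] = v
    end
    return G
end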

# SAIGtensor.ttf_slow - Function

ttf_slow(X, A, n)

tensor times all factor matrices except the nth factor; this method is deprecated

Arguments

  • X: tensor
  • A: Array{Array{Float64,2}}, Array of factor matrix
  • n: Int64, nth factor

Returns

  • G : Array{Float64,2} with size I[n] × R

source
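
For reference, the quantity returned by ttf and ttf_slow can be written with the mode-n unfolding and a Khatri-Rao product: G = X_(n) (A[N] ⊙ ... ⊙ A[n+1] ⊙ A[n-1] ⊙ ... ⊙ A[1]). A self-contained sketch of that direct formulation, useful for checking the fast routines (khatrirao, unfold and ttf_reference are illustrative names, not SAIGtensor functions):

using LinearAlgebra

# Column-wise Khatri-Rao product of matrices that all have R columns.
function khatrirao(mats::Vector{Matrix{Float64}})
    R = size(mats[1], 2)
    return hcat([reduce(kron, (M[:, r] for M in reverse(mats))) for r in 1:R]...)
end

# Mode-n unfolding: I[n] x prod(I[k], k != n), remaining modes in ascending order,
# lowest mode varying fastest.
function unfold(X::Array{Float64}, n::Int)
    perm = [n; setdiff(1:ndims(X), n)]
    return reshape(permutedims(X, perm), size(X, n), :)
end

# Reference MTTKRP: G = X_(n) * khatrirao of all factors except A[n].
ttf_reference(X, A, n) = unfold(X, n) * khatrirao([A[k] for k in 1:length(A) if k != n])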

# SAIGtensor.cpnormalize! - Function

cpnormalize!(lambda, A)

normalize the factor matrices of a CP decomposition and absorb the scaling into lambda

Arguments

  • lambda: Array{Float64,1}, non-sorted weight
  • A: Array{Array{Float64,2}}, Array of factor matrix

source
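
A minimal sketch of the normalization, written against the raw lambda and factor matrices (the in-place helper name is illustrative): each factor column is scaled to unit norm and the scaling is absorbed into lambda, so the represented tensor does not change.

using LinearAlgebra

# Sketch only: column-normalize every factor matrix and push the norms into lambda.
function cpnormalize_sketch!(lambda::Vector{Float64}, A::Vector{Matrix{Float64}})
    for An in A
        for r in eachindex(lambda)
            s = norm(view(An, :, r))
            if s > 0
                An[:, r] ./= s
                lambda[r] *= s
            end
        end
    end
    return lambda, A
end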

# SAIGtensor.cp2tensor - Function

cp2tensor(X)

convert a 4D cptensor to a full (dense) tensor

source
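
A hedged, generic N-D sketch of the reconstruction (the package routine is specialized to 4D and takes a cptensor; this version works directly on lambda and the factor matrices): the dense tensor is the weighted sum of the rank-one outer products.

using LinearAlgebra

# Sketch only: rebuild the dense tensor as sum_r lambda[r] * a_r^(1) ∘ ... ∘ a_r^(N).
function cp_to_full(lambda::Vector{Float64}, A::Vector{Matrix{Float64}})
    I = [size(An, 1) for An in A]
    T = zeros(I...)
    for r in eachindex(lambda)
        # outer product of the r-th columns, flattened column-major (mode 1 fastest)
        outer = reduce((u, v) -> kron(v, u), (A[k][:, r] for k in eachindex(A)))
        T .+= lambda[r] .* reshape(outer, I...)
    end
    return T
end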

# SAIGtensor.sidx2sub - Function

sidx2sub!(I, s)

convert linear indices to subscript indices

Arguments

  • I: Array{Int64,1}, dimensions
  • s: Array{Int64,1}, linear index

Returns

  • sidx: Array{Tuple,1} with length(I)

source
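
A minimal sketch of the conversion using Julia's column-major indexing (the helper name is illustrative and, unlike sidx2sub!, it is not in-place):

# Sketch only: map linear indices into tuples of subscripts for dimensions I.
function sidx2sub_sketch(I::Vector{Int}, s::Vector{Int})
    ci = CartesianIndices(Tuple(I))
    return [Tuple(ci[k]) for k in s]
end

# e.g. sidx2sub_sketch([3, 4], [1, 5, 12]) == [(1, 1), (2, 2), (3, 4)]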

# SAIGtensor.randSpl! - Function

randSpl!(Zs, Xs, A, T, ns, n)

randomly sample certain rows from Z[n]

Arguments

  • Zs: Array{Float64,2}, sampled Z[n]
  • Xst: Array{Array{Float64,2},1}, transpose of sampled X[n]
  • A: Array{Array{Float64,2},1}, factor matrix
  • ns: Int64, number of samples
  • n: Int64, specify which dimension

source
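
The idea behind the sampling: a row of Z[n] (the Khatri-Rao product of all factors except A[n]) indexed by (i_1, ..., i_{n-1}, i_{n+1}, ..., i_N) is the elementwise product of the corresponding factor rows, so ns sampled rows can be formed without building the full matrix. A hedged sketch of that step only (names are illustrative; the matching sampled fibers of X, i.e. the Xst argument, are handled separately by the package):

# Sketch only: draw ns random multi-indices and form the corresponding rows of Z[n].
function sample_Zs_sketch(A::Vector{Matrix{Float64}}, I::Vector{Int}, ns::Int, n::Int)
    R = size(A[1], 2)
    Zs = ones(ns, R)
    idx = [rand(1:I[k], ns) for k in eachindex(I)]   # one random subscript per mode and sample
    for k in eachindex(I)
        k == n && continue
        Zs .*= A[k][idx[k], :]                       # elementwise product of the sampled rows
    end
    return Zs, idx
end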

# SAIGtensor.formXst! - Function

randomly sample X[n]^T

source

# SAIGtensor.formZs! - Function

randomly sample Z[n]

source

# SAIGtensor.updateAn! - Function

updateAn!(A, AtA, Gn, n)

update the nth entries in A and AtA

Arguments

  • A : Array{Array{Float64,2}}, Array of factor matrix
  • AtA: Array{Array{Float64,2}}, Array of A[n]'×A[n]
  • Gn : Array{Float64,2}, Xn×(⊗A[k]) for all k!=n
  • n : Int64, nth factor

source
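
A hedged sketch of a standard ALS update consistent with the documented arguments (the helper name is illustrative): solve for A[n] from the MTTKRP result Gn and the Hadamard product of the other Gram matrices, then refresh AtA[n].

using LinearAlgebra

# Sketch only: A[n] solves A[n] * V ≈ Gn with V the Hadamard product of AtA[k], k != n.
function update_An_sketch!(A::Vector{Matrix{Float64}}, AtA::Vector{Matrix{Float64}},
                           Gn::Matrix{Float64}, n::Int)
    R = size(Gn, 2)
    V = ones(R, R)
    for k in eachindex(A)
        k == n && continue
        V .*= AtA[k]
    end
    A[n] = Gn * pinv(V)        # pinv guards against a rank-deficient V
    AtA[n] = A[n]' * A[n]
    return A, AtA
end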