t3toolbox.tucker_tensor_train.TuckerTensorTrain#
- class t3toolbox.tucker_tensor_train.TuckerTensorTrain#
Tucker tensor train with variable ranks.
Tensor network diagram for a dth order Tucker tensor train:
      r0        r1        r2              r(d-1)        rd
  1 ------ G0 ------ G1 ------ ... ------ G(d-1) ------ 1
           |         |                       |
           | n0      | n1                    | n(d-1)
           |         |                       |
           B0        B1                    B(d-1)
           |         |                       |
           | N0      | N1                    | N(d-1)
           |         |                       |
Attributes:#
- tucker_cores: Tuple[NDArray]
Tucker cores: (B0, …, B(d-1)), len=d, elm_shape=VS+(ni, Ni).
- tt_cores: Tuple[NDArray]
Tensor train cores: (G0, …, G(d-1)), len=d, elm_shape=VS+(ri, ni, r(i+1)).
- d: int
Number of indices of the tensor
- stack_shape: typ.Tuple[int, …]
The stack shape, VS. Non-empty if this object stores many different Tucker tensor trains with the same structure. Shape of the leading parts of tucker_cores[ii].shape and tt_cores[ii].shape.
- shape: typ.Tuple[int,…]
Tensor shape: (N0, N1, …, N(d-1))
- tucker_ranks: typ.Tuple[int,…]
Tucker ranks: (n0, n1, …, n(d-1))
- tt_ranks: typ.Tuple[int, …]
TT ranks: (r0, r1, …, rd)
- structure: typ.Tuple[typ.Tuple[int,…], typ.Tuple[int,…], typ.Tuple[int,…]]
Structure of the Tucker tensor train: (shape, tucker_ranks, tt_ranks)
- data: typ.Tuple[Tuple[NDArray], Tuple[NDArray]]
The cores defining the Tucker tensor train
- minimal_ranks: typ.Tuple[typ.Tuple[int,…], typ.Tuple[int, …]]
Tucker and tensor train ranks of the smallest possible Tucker tensor train that represents the same tensor. Tucker tensor trains may be made to have minimal ranks using T3-SVD.
- has_minimal_ranks: bool
True if this Tucker tensor train’s ranks equal the minimal ranks, False otherwise.
Notes:#
The structure of a Tucker tensor train is defined by:
Tensor shape: (N0, N1, …, N(d-1))
Tucker ranks: (n0, n1, …, n(d-1))
TT ranks: (r0, r1, …, rd)
Typically, the first and last TT-ranks satisfy r0=rd=1, and “1” in the diagram is the number 1. However, it is allowed for these ranks to not be 1, in which case the “1”s in the diagram are vectors of ones.
Many stacked Tucker tensor trains with the same structure may be stored in this object for vectorization. In this case,
tucker_cores[ii].shape = stack_shape + (ni,Ni)
tt_cores[ii].shape = stack_shape + (ri, ni, r(i+1))
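As a plain-NumPy sketch (independent of t3toolbox), the network in the diagram above can be contracted to a dense tensor with a single einsum. The d=3 structure below is hypothetical, chosen only for illustration:

```python
import numpy as np

# Hypothetical d=3 structure: shape (14,15,16), Tucker ranks (4,5,6), TT ranks (1,3,2,1).
rng = np.random.default_rng(0)
B0, B1, B2 = rng.standard_normal((4, 14)), rng.standard_normal((5, 15)), rng.standard_normal((6, 16))
G0, G1, G2 = rng.standard_normal((1, 4, 3)), rng.standard_normal((3, 5, 2)), rng.standard_normal((2, 6, 1))

# Chain the TT cores along the rank bonds, hang each Tucker core below its TT core,
# then drop the size-1 boundary rank axes (the "1"s in the diagram).
dense = np.einsum('axb,byc,czd,xi,yj,zk->aijkd', G0, G1, G2, B0, B1, B2)[0, ..., 0]
print(dense.shape)  # (14, 15, 16)
```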
Examples
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> tucker_cores = (np.ones((4,14)), np.ones((5,15)), np.ones((6,16)))
>>> tt_cores = (np.ones((1,4,3)), np.ones((3,5,2)), np.ones((2,6,1)))
>>> x = t3.TuckerTensorTrain(tucker_cores, tt_cores)  # TuckerTensorTrain, cores filled with ones
>>> print(x.d)
3
>>> print(x.shape)
(14, 15, 16)
>>> print(x.tucker_ranks)
(4, 5, 6)
>>> print(x.tt_ranks)
(1, 3, 2, 1)
>>> print(x.uniform_structure)
((14, 15, 16), (4, 5, 6), (1, 3, 2, 1), ())
Example with stacking:
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> tucker_cores = [np.ones((6,7, 4,14)), np.ones((6,7, 5,15)), np.ones((6,7, 6,16))]
>>> tt_cores = [np.ones((6,7, 1,4,3)), np.ones((6,7, 3,5,2)), np.ones((6,7, 2,6,1))]
>>> x = t3.TuckerTensorTrain(tucker_cores, tt_cores)  # TuckerTensorTrain, cores filled with ones
>>> print(x.uniform_structure)
((14, 15, 16), (4, 5, 6), (1, 3, 2, 1), (6, 7))
Minimal ranks
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> x = t3.t3_corewise_randn((13,14,15,16), (4,5,6,7), (1,4,9,7,1))
>>> print(x.has_minimal_ranks)
True
Using T3-SVD to make equivalent T3 with minimal ranks:
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> import t3toolbox.t3svd as t3svd
>>> x = t3.t3_corewise_randn((13,14,15,16), (4,5,6,7), (1,99,9,7,1))
>>> print(x.has_minimal_ranks)
False
>>> x2 = t3svd.t3svd(x)[0]
>>> print(x2.has_minimal_ranks)
True
- tucker_cores: t3toolbox.backend.common.typ.Tuple[t3toolbox.backend.common.NDArray, Ellipsis]#
- tt_cores: t3toolbox.backend.common.typ.Tuple[t3toolbox.backend.common.NDArray, Ellipsis]#
- data() t3toolbox.backend.common.typ.Tuple[t3toolbox.backend.common.typ.Tuple[t3toolbox.backend.common.NDArray, Ellipsis], t3toolbox.backend.common.typ.Tuple[t3toolbox.backend.common.NDArray, Ellipsis]]#
- d() int#
- is_empty() bool#
- stack_shape() t3toolbox.backend.common.typ.Tuple[int, Ellipsis]#
If this object contains multiple stacked T3s with the same structure, this is the shape of the stack.
- shape() t3toolbox.backend.common.typ.Tuple[int, Ellipsis]#
- tucker_ranks() t3toolbox.backend.common.typ.Tuple[int, Ellipsis]#
- tt_ranks() t3toolbox.backend.common.typ.Tuple[int, Ellipsis]#
- structure() t3toolbox.backend.common.typ.Tuple[t3toolbox.backend.common.typ.Tuple[int, Ellipsis], t3toolbox.backend.common.typ.Tuple[int, Ellipsis], t3toolbox.backend.common.typ.Tuple[int, Ellipsis], t3toolbox.backend.common.typ.Tuple[int, Ellipsis]]#
- core_shapes() t3toolbox.backend.common.typ.Tuple[t3toolbox.backend.common.typ.Tuple[t3toolbox.backend.common.typ.Tuple[int, Ellipsis], Ellipsis], t3toolbox.backend.common.typ.Tuple[t3toolbox.backend.common.typ.Tuple[int, Ellipsis], Ellipsis]]#
- size() int#
- minimal_ranks() t3toolbox.backend.common.typ.Tuple[t3toolbox.backend.common.typ.Tuple[int, Ellipsis], t3toolbox.backend.common.typ.Tuple[int, Ellipsis]]#
- has_minimal_ranks() bool#
- validate()#
Check internal consistency of the Tucker tensor train.
- __post_init__()#
- to_dense(squash_tails: bool = True, use_jax: bool = False) t3toolbox.backend.common.NDArray#
Contract a Tucker tensor train to a dense tensor.
- Parameters:
x (TuckerTensorTrain) – Tucker tensor train which will be contracted to a dense tensor.
squash_tails (bool, defaults to True) – Whether to contract the leading and trailing 1s with the first and last TT indices.
use_jax (bool, defaults to False) – Whether to use Jax for linear algebra. Default: False (use numpy).
- Returns:
dense_x – Dense tensor represented by x, which has shape (N0, …, N(d-1)) if squash_tails=True, or (r0,N0,…,N(d-1),rd) if squash_tails=False.
- Return type:
NDArray
Examples
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> randn = np.random.randn
>>> tucker_cores = (randn(4,14), randn(5,15), randn(6,16))
>>> tt_cores = (randn(2,4,3), randn(3,5,2), randn(2,6,5))
>>> x = t3.TuckerTensorTrain(tucker_cores, tt_cores)
>>> x_dense = x.to_dense()  # Convert TuckerTensorTrain to dense tensor
>>> ((B0,B1,B2), (G0,G1,G2)) = tucker_cores, tt_cores
>>> x_dense2 = np.einsum('xi,yj,zk,axb,byc,czd->ijk', B0, B1, B2, G0, G1, G2)
>>> print(np.linalg.norm(x_dense - x_dense2) / np.linalg.norm(x_dense))
7.48952547844518e-16
Example where leading and trailing ones are not contracted
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> randn = np.random.randn
>>> tucker_cores = (randn(4,14), randn(5,15), randn(6,16))
>>> tt_cores = (randn(2,4,3), randn(3,5,2), randn(2,6,2))
>>> x = t3.TuckerTensorTrain(tucker_cores, tt_cores)
>>> x_dense = x.to_dense(squash_tails=False)  # Convert TuckerTensorTrain to dense tensor
>>> print(x_dense.shape)
(2, 14, 15, 16, 2)
>>> ((B0,B1,B2), (G0,G1,G2)) = tucker_cores, tt_cores
>>> x_dense2 = np.einsum('xi,yj,zk,axb,byc,czd->aijkd', B0, B1, B2, G0, G1, G2)
>>> print(np.linalg.norm(x_dense - x_dense2) / np.linalg.norm(x_dense))
1.1217675019342066e-15
Example with stacking
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> randn = np.random.randn
>>> tucker_cores = (randn(2,3, 4,10), randn(2,3, 5,11), randn(2,3, 6,12))
>>> tt_cores = (randn(2,3, 2,4,3), randn(2,3, 3,5,2), randn(2,3, 2,6,5))
>>> x = t3.TuckerTensorTrain(tucker_cores, tt_cores)
>>> x_dense = x.to_dense()  # Convert TuckerTensorTrain to dense tensor
>>> ((B0,B1,B2), (G0,G1,G2)) = tucker_cores, tt_cores
>>> x_dense2 = np.einsum('uvxi,uvyj,uvzk,uvaxb,uvbyc,uvczd->uvijk', B0, B1, B2, G0, G1, G2)
>>> print(np.linalg.norm(x_dense - x_dense2) / np.linalg.norm(x_dense))
1.3614138244072514e-15
- squash_tails(use_jax: bool = False) TuckerTensorTrain#
Make the leading and trailing TT ranks equal to 1 (r0=rd=1) without changing the tensor being represented.
- Parameters:
x (TuckerTensorTrain) – Tucker tensor train with tt_ranks=(r0,r1,…,r(d-1),rd).
use_jax (bool, defaults to False) – Whether to use Jax for linear algebra. Default: False (use numpy).
- Returns:
squashed_x (TuckerTensorTrain) – Tucker tensor train with tt_ranks=(1,r1,…,r(d-1),1).
See also
TuckerTensorTrain
T3Structure
Examples
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> randn = np.random.randn
>>> tucker_cores = (randn(2,3, 4,10), randn(2,3, 5,11), randn(2,3, 6,12))
>>> tt_cores = (randn(2,3, 2,4,3), randn(2,3, 3,5,2), randn(2,3, 2,6,5))
>>> x = t3.TuckerTensorTrain(tucker_cores, tt_cores)
>>> print(x.tt_ranks)
(2, 3, 2, 5)
>>> x2 = x.squash_tails()
>>> print(x2.tt_ranks)
(1, 3, 2, 1)
>>> print(np.linalg.norm(x.to_dense() - x2.to_dense()))
5.805155892491438e-12
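The squashing idea can be sketched in plain NumPy (a simplified stand-in for the library's implementation, on hypothetical cores): contracting the boundary ranks with vectors of ones leaves the represented tensor unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical TT cores with boundary ranks r0=2 and r3=5 (Tucker cores are untouched by squashing).
G0, G1, G2 = rng.standard_normal((2, 4, 3)), rng.standard_normal((3, 5, 2)), rng.standard_normal((2, 6, 5))

# Sum over the boundary ranks (i.e. contract with vectors of ones) and restore size-1 rank axes.
G0_sq = G0.sum(axis=0, keepdims=True)   # shape (1, 4, 3)
G2_sq = G2.sum(axis=2, keepdims=True)   # shape (2, 6, 1)

before = np.einsum('a,axb,byc,czd,d->xyz', np.ones(2), G0, G1, G2, np.ones(5))
after = np.einsum('axb,byc,czd->xyz', G0_sq, G1, G2_sq)
print(np.linalg.norm(before - after))   # ~0: same tensor, tt_ranks now (1, 3, 2, 1)
```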
- reverse() TuckerTensorTrain#
Reverse Tucker tensor train.
- Parameters:
x (TuckerTensorTrain) –
Tucker tensor train with:
shape=(N0, …, N(d-1)),
tucker_ranks=(n0,…,n(d-1)),
tt_ranks=(1,r1,…,r(d-1),1).
- Returns:
reversed_x –
Tucker tensor train with index order reversed.
shape=(N(d-1), …, N0),
tucker_ranks=(n(d-1),…,n0),
tt_ranks=(1,r(d-1),…,r1,1).
- Return type:
TuckerTensorTrain
Examples
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> randn = np.random.randn
>>> tucker_cores = (randn(2,3, 4,10), randn(2,3, 5,11), randn(2,3, 6,12))
>>> tt_cores = (randn(2,3, 1,4,2), randn(2,3, 2,5,3), randn(2,3, 3,6,4))
>>> x = t3.TuckerTensorTrain(tucker_cores, tt_cores)
>>> print(x.uniform_structure)
((10, 11, 12), (4, 5, 6), (1, 2, 3, 4), (2, 3))
>>> reversed_x = x.reverse()
>>> print(reversed_x.uniform_structure)
((12, 11, 10), (6, 5, 4), (4, 3, 2, 1), (2, 3))
>>> x_dense = x.to_dense()
>>> reversed_x_dense = reversed_x.to_dense()
>>> x_dense2 = reversed_x_dense.transpose([0,1, 4,3,2])
>>> print(np.linalg.norm(x_dense - x_dense2))
1.859018050214056e-13
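The reversal can be sketched in plain NumPy: reverse the order of the cores and swap the two rank axes of each TT core. The d=3 cores below are hypothetical stand-ins, not the library's internals:

```python
import numpy as np

rng = np.random.default_rng(0)
tucker = [rng.standard_normal(s) for s in [(4, 10), (5, 11), (6, 12)]]
tt = [rng.standard_normal(s) for s in [(1, 4, 2), (2, 5, 3), (3, 6, 1)]]

# Reverse the chain: flip core order and transpose each TT core's rank axes.
tucker_r = tucker[::-1]
tt_r = [G.transpose(2, 1, 0) for G in tt[::-1]]

x = np.einsum('axb,byc,czd,xi,yj,zk->ijk', *tt, *tucker)
x_r = np.einsum('axb,byc,czd,xi,yj,zk->ijk', *tt_r, *tucker_r)
print(np.linalg.norm(x - x_r.transpose(2, 1, 0)))  # ~0: same tensor, index order reversed
```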
- change_structure(new_shape: t3toolbox.backend.common.typ.Sequence[int], new_tucker_ranks: t3toolbox.backend.common.typ.Sequence[int], new_tt_ranks: t3toolbox.backend.common.typ.Sequence[int], use_jax: bool = False) TuckerTensorTrain#
Increase Tucker tensor train ranks and/or shape via zero padding.
Examples
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> x = t3.t3_corewise_randn((14,15,16), (4,6,5), (1,3,2,1))
>>> padded_x = x.change_structure((17,18,17), (8,8,8), (1,5,6,1))
>>> print(padded_x.uniform_structure)
((17, 18, 17), (8, 8, 8), (1, 5, 6, 1), ())
Example where first and last ranks are nonzero:
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> x = t3.t3_corewise_randn((14,15,16), (4,6,5), (3,3,2,4))
>>> padded_x = x.change_structure((17,18,17), (8,8,8), (5,5,6,7))
>>> print(padded_x.uniform_structure)
((17, 18, 17), (8, 8, 8), (5, 5, 6, 7), ())
- sum_stack(use_jax: bool = False) TuckerTensorTrain#
If this object contains multiple stacked T3s, this sums them.
Examples
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> import t3toolbox.corewise as cw
>>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1), stack_shape=(2,3))
>>> x_sum = x.sum_stack()
>>> tucker_sum = tuple([np.sum(B, axis=(0,1)) for B in x.tucker_cores])
>>> tt_sum = tuple([np.sum(G, axis=(0,1)) for G in x.tt_cores])
>>> x_sum2 = t3.TuckerTensorTrain(tucker_sum, tt_sum)
>>> print(cw.corewise_norm(cw.corewise_sub(x_sum.data, x_sum2.data)))
0.0
- unstack()#
If this object contains multiple stacked T3s, this unstacks them into an array-like structure of nested tuples with the same “shape” as self.stack_shape.
Examples
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1), stack_shape=(3,5))
>>> unstacked_x = x.unstack()
>>> print([len(s) for s in unstacked_x])
[5, 5, 5]
>>> tucker13 = tuple([B[1,3] for B in x.tucker_cores])
>>> tt13 = tuple([G[1,3] for G in x.tt_cores])
>>> x13 = t3.TuckerTensorTrain(tucker13, tt13)
>>> print((x13 - unstacked_x[1][3]).norm())
0.0
- __add__(other: TuckerTensorTrain, squash: bool = True, use_jax: bool = False) TuckerTensorTrain#
Add Tucker tensor trains x and y, yielding a Tucker tensor train x+y with summed ranks.
Dunder version of TuckerTensorTrain.add().
Examples
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1))
>>> y = t3.t3_corewise_randn((14,15,16), (3,7,2), (1,5,6,1))
>>> z = x + y
>>> print(z.uniform_structure)
((14, 15, 16), (7, 12, 8), (1, 8, 8, 1), ())
>>> print(np.linalg.norm(x.to_dense() + y.to_dense() - z.to_dense()))
6.524094086845177e-13
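The rank-summing construction can be sketched in plain NumPy (a simplified stand-in for the library's implementation, on hypothetical cores): Tucker cores are stacked along the rank axis and TT cores are placed block-diagonally, so the ranks of the sum are the sums of the ranks.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(tucker, tt):
    # Contract a d=3 Tucker tensor train (with r0=rd=1) to a dense tensor.
    return np.einsum('axb,byc,czd,xi,yj,zk->ijk', *tt, *tucker)

def block(Gx, Gy):
    # Place Gx and Gy on the diagonal of a zero tensor (block structure over all three axes).
    G = np.zeros(tuple(a + b for a, b in zip(Gx.shape, Gy.shape)))
    G[:Gx.shape[0], :Gx.shape[1], :Gx.shape[2]] = Gx
    G[Gx.shape[0]:, Gx.shape[1]:, Gx.shape[2]:] = Gy
    return G

Bx = [rng.standard_normal(s) for s in [(4, 14), (5, 15), (6, 16)]]
Gx = [rng.standard_normal(s) for s in [(1, 4, 3), (3, 5, 2), (2, 6, 1)]]
By = [rng.standard_normal(s) for s in [(3, 14), (7, 15), (2, 16)]]
Gy = [rng.standard_normal(s) for s in [(1, 3, 5), (5, 7, 6), (6, 2, 1)]]

Bz = [np.concatenate([bx, by], axis=0) for bx, by in zip(Bx, By)]  # Tucker ranks add
Gz = [block(gx, gy) for gx, gy in zip(Gx, Gy)]                     # TT ranks add
# Squash the boundary blocks so the result again has r0 = rd = 1.
Gz[0] = Gz[0].sum(axis=0, keepdims=True)
Gz[-1] = Gz[-1].sum(axis=2, keepdims=True)

print(np.linalg.norm(dense(Bx, Gx) + dense(By, Gy) - dense(Bz, Gz)))  # ~0
```

The middle core of the sum here has shape (8, 12, 8), matching the (7, 12, 8) Tucker ranks and (1, 8, 8, 1) TT ranks printed in the example above.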
T3 + dense
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1))
>>> y = np.random.randn(14,15,16)
>>> z = x + y
>>> print(type(z))
<class 'numpy.ndarray'>
>>> print(np.linalg.norm(x.to_dense() + y - z))
0.0
- __mul__(s) TuckerTensorTrain#
Multiply a Tucker tensor train by a scaling factor.
Scaling is defined with respect to the dense N0 x … x N(d-1) tensor that is represented by the Tucker tensor train, even though this dense tensor is not formed during computations.
For corewise scaling, see t3toolbox.corewise.corewise_scale().
- Parameters:
x (TuckerTensorTrain) – Tucker tensor train
s (scalar) – scaling factor
- Returns:
Scaled TuckerTensorTrain s*x, with the same structure as x.
- Return type:
TuckerTensorTrain
- Raises:
ValueError –
Error raised if the TuckerTensorTrain is internally inconsistent
See also
TuckerTensorTrain, t3_add, t3_neg, t3_sub, corewise_scale()
Examples
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1))
>>> s = 3.2
>>> sx = x * s
>>> print(np.linalg.norm(s*x.to_dense() - sx.to_dense()))
1.6268482531988893e-13
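A short NumPy sketch of why scaling preserves the structure (hypothetical cores): multiplying any single core by s scales the represented dense tensor by s, because the dense tensor is multilinear in the cores.

```python
import numpy as np

rng = np.random.default_rng(0)
B = [rng.standard_normal(s) for s in [(4, 14), (5, 15), (6, 16)]]
G = [rng.standard_normal(s) for s in [(1, 4, 3), (3, 5, 2), (2, 6, 1)]]
s = 3.2

# Scaling the represented tensor only requires scaling one core; all shapes are unchanged.
G_scaled = [s * G[0]] + G[1:]

dense = np.einsum('axb,byc,czd,xi,yj,zk->ijk', *G, *B)
dense_scaled = np.einsum('axb,byc,czd,xi,yj,zk->ijk', *G_scaled, *B)
print(np.linalg.norm(s * dense - dense_scaled))  # ~0
```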
- __neg__() TuckerTensorTrain#
Scale a Tucker tensor train by -1.
Examples
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1))
>>> neg_x = -x
>>> print(np.linalg.norm(x.to_dense() + neg_x.to_dense()))
0.0
- __sub__(other: TuckerTensorTrain, squash: bool = True, use_jax: bool = False) TuckerTensorTrain#
Subtract Tucker tensor trains, x - y, yielding a Tucker tensor train with summed ranks.
Subtraction is defined with respect to the dense N0 x … x N(d-1) tensors that are represented by the Tucker tensor trains, even though these dense tensors are not formed during computations.
For corewise subtraction, see t3toolbox.corewise.corewise_sub().
- Parameters:
x (TuckerTensorTrain) – First operand. structure=((N0,…,N(d-1)), (n0,…,n(d-1)), (r0, r1,…,rd))
y (TuckerTensorTrain) – Second operand, subtracted from x. structure=((N0,…,N(d-1)), (m0,…,m(d-1)), (q0, q1,…,qd))
squash (bool) – Squash the first and last TT cores so that r0=rd=1 in the result. Default: True.
use_jax (bool, defaults to False) – Whether to use Jax for linear algebra. Default: False (use numpy).
- Returns:
- Difference of Tucker tensor trains, x-y, with
shape=(N0,…,N(d-1)),
tucker_ranks=(n0+m0,…,n(d-1)+m(d-1)),
tt_ranks=(1, r1+q1,…,r(d-1)+q(d-1), 1) if squash=True,
or (r0+q0, r1+q1,…,r(d-1)+q(d-1), rd+qd) if squash=False.
- Return type:
TuckerTensorTrain
- Raises:
ValueError –
Error raised if either of the TuckerTensorTrains are internally inconsistent
Error raised if the TuckerTensorTrains have different shapes.
See also
TuckerTensorTrain, t3_shape, t3_add, t3_scale, t3_neg, corewise_neg()
Examples
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1))
>>> y = t3.t3_corewise_randn((14,15,16), (3,7,2), (1,5,6,1))
>>> x_minus_y = x - y
>>> print(x_minus_y.uniform_structure)
((14, 15, 16), (7, 12, 8), (2, 8, 8, 2), ())
>>> print(np.linalg.norm(x.to_dense() - y.to_dense() - x_minus_y.to_dense()))
3.5875705233607603e-13
- norm(use_orthogonalization: bool = True, use_jax: bool = False)#
Compute Hilbert-Schmidt (Frobenius) norm of a Tucker tensor train.
The Hilbert-Schmidt norm is defined with respect to the dense N0 x … x N(d-1) tensor that is represented by the Tucker tensor trains, even though this dense tensor is not formed during computations.
For corewise norm, see t3toolbox.corewise.corewise_norm().
- Parameters:
x (TuckerTensorTrain) – The Tucker tensor train. shape=(N0,…,N(d-1))
use_jax (bool, defaults to False) – Whether to use Jax for linear algebra. Default: False (use numpy).
- Returns:
Hilbert-Schmidt (Frobenius) norm of Tucker tensor trains, ||x||_HS
- Return type:
scalar
- Raises:
ValueError –
Error raised if the TuckerTensorTrain is internally inconsistent
See also
TuckerTensorTrain, t3_dot_t3, t3toolbox.corewise.corewise_norm()
Examples
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (2,3,2,2))
>>> print(x.norm() - np.linalg.norm(x.to_dense()))
9.094947017729282e-13
Stacked:
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (2,3,2,2), stack_shape=(2,3))
>>> norms_x = x.norm(use_orthogonalization=True)
>>> x_dense = x.to_dense()
>>> norms_x_dense = np.sqrt(np.sum(x_dense**2, axis=(-3,-2,-1)))
>>> print(norms_x - norms_x_dense)
[[-1.36424205e-12 -2.50111043e-12  1.36424205e-12]
 [ 1.59161573e-12  4.09272616e-12  2.72848411e-12]]
- up_svd_ith_tucker_core(ii: int, min_rank: int = None, max_rank: int = None, rtol: float = None, atol: float = None, use_jax: bool = False) t3toolbox.backend.common.typ.Tuple[TuckerTensorTrain, t3toolbox.backend.common.NDArray]#
Compute the SVD of the ith Tucker core and contract the non-orthogonal factor up into the TT core above.
Stacking not supported: the truncated ranks vary based on this T3’s numerical properties.
- Parameters:
ii (int) – Index of the Tucker core to SVD.
x (TuckerTensorTrain) – The Tucker tensor train. structure=((N0,…,N(d-1)), (n0,…,n(d-1)), (r0,r1,…,r(d-1),rd))
min_rank (int) – Minimum rank for truncation.
max_rank (int) – Maximum rank for truncation.
rtol (float) – Relative tolerance for truncation.
atol (float) – Absolute tolerance for truncation.
use_jax (bool, defaults to False) – Whether to use Jax for linear algebra. Default: False (use numpy).
- Returns:
new_x (TuckerTensorTrain) – New TuckerTensorTrain representing the same tensor, but with the ith Tucker core orthogonal. new_tt_cores[ii].shape = (ri, new_ni, r(i+1)) new_tucker_cores[ii].shape = (new_ni, Ni) new_tucker_cores[ii] @ new_tucker_cores[ii].T = identity matrix
ss_x (NDArray) – Singular values of the prior ith Tucker core. shape=(new_ni,).
See also
truncated_svd, left_svd_ith_tt_core, right_svd_ith_tt_core, up_svd_ith_tt_core, down_svd_ith_tt_core, t3_svd
Examples
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1))
>>> ind = 1
>>> x2, ss = x.up_svd_ith_tucker_core(ind)
>>> print(np.linalg.norm(x.to_dense() - x2.to_dense()))  # Tensor unchanged
5.772851635866132e-13
>>> tucker_cores2, tt_cores2 = x2.data
>>> rank = len(ss)
>>> B = tucker_cores2[ind]
>>> print(np.linalg.norm(B @ B.T - np.eye(rank)))  # Tucker core is orthogonal
8.456498415401757e-16
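The core operation can be sketched in plain NumPy (hypothetical shapes, no truncation; not the library's implementation): SVD the Tucker core, keep the orthonormal factor Vt as the new Tucker core, and absorb U @ diag(s) into the TT core above.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 15))      # Tucker core, shape (n_i, N_i)
G = rng.standard_normal((3, 5, 2))    # TT core above, shape (r_i, n_i, r_{i+1})

# SVD the Tucker core and push the non-orthogonal factor U @ diag(s) up into the TT core.
U, s, Vt = np.linalg.svd(B, full_matrices=False)
B_new = Vt                                   # rows are orthonormal: B_new @ B_new.T = I
G_new = np.einsum('axb,xk->akb', G, U * s)   # absorb U @ diag(s) over the n_i axis

before = np.einsum('axb,xi->aib', G, B)
after = np.einsum('axb,xi->aib', G_new, B_new)
print(np.linalg.norm(before - after))                                  # ~0: tensor unchanged
print(np.linalg.norm(B_new @ B_new.T - np.eye(B_new.shape[0])))        # ~0: new core orthogonal
```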
- left_svd_ith_tt_core(ii: int, min_rank: int = None, max_rank: int = None, rtol: float = None, atol: float = None, use_jax: bool = False) t3toolbox.backend.common.typ.Tuple[TuckerTensorTrain, t3toolbox.backend.common.NDArray]#
Compute the SVD of the ith TT core's left unfolding and contract the non-orthogonal factor into the TT core to the right.
Stacking not supported: the truncated ranks vary based on this T3’s numerical properties.
- Parameters:
ii (int) – Index of the TT core to SVD.
x (TuckerTensorTrain) – The Tucker tensor train. structure=((N0,…,N(d-1)), (n0,…,n(d-1)), (1,r1,…,r(d-1),1))
min_rank (int) – Minimum rank for truncation.
max_rank (int) – Maximum rank for truncation.
rtol (float) – Relative tolerance for truncation.
atol (float) – Absolute tolerance for truncation.
use_jax (bool, defaults to False) – Whether to use Jax for linear algebra. Default: False (use numpy).
- Returns:
new_x (TuckerTensorTrain) – New TuckerTensorTrain representing the same tensor, but with the ith TT core left orthogonal. new_tt_cores[ii].shape = (ri, ni, new_r(i+1)) new_tt_cores[ii+1].shape = (new_r(i+1), n(i+1), r(i+2)) einsum('iaj,iak->jk', new_tt_cores[ii], new_tt_cores[ii]) = identity matrix
ss_x (NDArray) – Singular values of the prior ith TT core's left unfolding. shape=(new_r(i+1),).
See also
truncated_svd, left_svd_3tensor, up_svd_ith_tucker_core, right_svd_ith_tt_core, up_svd_ith_tt_core, down_svd_ith_tt_core, t3_svd
Examples
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> import t3toolbox.orthogonalization as orth
>>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1))
>>> ind = 1
>>> x2, ss = x.left_svd_ith_tt_core(ind)
>>> print(np.linalg.norm(x.to_dense() - x2.to_dense()))  # Tensor unchanged
5.186463661974644e-13
>>> tucker_cores2, tt_cores2 = x2.data
>>> G = tt_cores2[ind]
>>> print(np.linalg.norm(np.einsum('iaj,iak->jk', G, G) - np.eye(G.shape[2])))  # TT core is left orthogonal
4.453244025338311e-16
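The left-unfolding SVD can be sketched in plain NumPy (hypothetical shapes, no truncation; a stand-in for the library's implementation): reshape the core to (r_i*n_i, r_{i+1}), SVD, keep U as the left-orthogonal core, and push diag(s) @ Vt into the core to the right.

```python
import numpy as np

rng = np.random.default_rng(0)
G1 = rng.standard_normal((3, 5, 2))   # TT core i, shape (r_i, n_i, r_{i+1})
G2 = rng.standard_normal((2, 6, 1))   # TT core i+1, shape (r_{i+1}, n_{i+1}, r_{i+2})

# SVD the left unfolding (r_i*n_i, r_{i+1}); push diag(s) @ Vt into the core to the right.
r, n, rp = G1.shape
U, s, Vt = np.linalg.svd(G1.reshape(r * n, rp), full_matrices=False)
G1_new = U.reshape(r, n, -1)                          # left-orthogonal core
G2_new = np.einsum('ab,byc->ayc', s[:, None] * Vt, G2)

before = np.einsum('axb,byc->axyc', G1, G2)
after = np.einsum('axb,byc->axyc', G1_new, G2_new)
print(np.linalg.norm(before - after))  # ~0: chain unchanged
print(np.linalg.norm(np.einsum('iaj,iak->jk', G1_new, G1_new) - np.eye(G1_new.shape[2])))  # ~0
```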
- right_svd_ith_tt_core(ii: int, min_rank: int = None, max_rank: int = None, rtol: float = None, atol: float = None, use_jax: bool = False) t3toolbox.backend.common.typ.Tuple[TuckerTensorTrain, t3toolbox.backend.common.NDArray]#
Compute the SVD of the ith TT core's right unfolding and contract the non-orthogonal factor into the TT core to the left.
Stacking not supported: the truncated ranks vary based on this T3’s numerical properties.
- Parameters:
ii (int) – Index of the TT core to SVD.
x (TuckerTensorTrain) – The Tucker tensor train. structure=((N0,…,N(d-1)), (n0,…,n(d-1)), (1,r1,…,r(d-1),1))
min_rank (int) – Minimum rank for truncation.
max_rank (int) – Maximum rank for truncation.
rtol (float) – Relative tolerance for truncation.
atol (float) – Absolute tolerance for truncation.
use_jax (bool, defaults to False) – Whether to use Jax for linear algebra. Default: False (use numpy).
- Returns:
new_x (TuckerTensorTrain) – New TuckerTensorTrain representing the same tensor, but with the ith TT core right orthogonal. new_tt_cores[ii].shape = (new_ri, ni, r(i+1)) new_tt_cores[ii-1].shape = (r(i-1), n(i-1), new_ri) einsum('iaj,kaj->ik', new_tt_cores[ii], new_tt_cores[ii]) = identity matrix
ss_x (NDArray) – Singular values of the prior ith TT core's right unfolding. shape=(new_ri,).
See also
truncated_svd, left_svd_3tensor, up_svd_ith_tucker_core, left_svd_ith_tt_core, up_svd_ith_tt_core, down_svd_ith_tt_core, t3_svd
Examples
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> import t3toolbox.orthogonalization as orth
>>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1))
>>> ind = 1
>>> x2, ss = x.right_svd_ith_tt_core(ind)
>>> print(np.linalg.norm(x.to_dense() - x2.to_dense()))  # Tensor unchanged
5.304678679078675e-13
>>> tucker_cores2, tt_cores2 = x2.data
>>> G = tt_cores2[ind]
>>> print(np.linalg.norm(np.einsum('iaj,kaj->ik', G, G) - np.eye(G.shape[0])))  # TT core is right orthogonal
4.207841813173725e-16
- up_svd_ith_tt_core(ii: int, min_rank: int = None, max_rank: int = None, rtol: float = None, atol: float = None, use_jax: bool = False) t3toolbox.backend.common.typ.Tuple[TuckerTensorTrain, t3toolbox.backend.common.NDArray]#
Compute the SVD of the ith TT core's down unfolding and keep the non-orthogonal factor with this core.
Stacking not supported: the truncated ranks vary based on this T3’s numerical properties.
- Parameters:
ii (int) – Index of the TT core to SVD.
x (TuckerTensorTrain) – The Tucker tensor train. structure=((N0,…,N(d-1)), (n0,…,n(d-1)), (1,r1,…,r(d-1),1))
min_rank (int) – Minimum rank for truncation.
max_rank (int) – Maximum rank for truncation.
rtol (float) – Relative tolerance for truncation.
atol (float) – Absolute tolerance for truncation.
use_jax (bool, defaults to False) – Whether to use Jax for linear algebra. Default: False (use numpy).
- Returns:
new_x (TuckerTensorTrain) – New TuckerTensorTrain representing the same tensor. new_tt_cores[ii].shape = (ri, new_ni, r(i+1)) new_tucker_cores[ii].shape = (new_ni, Ni)
ss_x (NDArray) – Singular values of the prior ith TT core's down unfolding. shape=(new_ni,).
See also
truncated_svd, outer_svd_3tensor, up_svd_ith_tucker_core, left_svd_ith_tt_core, right_svd_ith_tt_core, down_svd_ith_tt_core, t3_svd
Examples
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> import t3toolbox.orthogonalization as orth
>>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1))
>>> x2, ss = x.up_svd_ith_tt_core(1)
>>> print(np.linalg.norm(x.to_dense() - x2.to_dense()))  # Tensor unchanged
1.002901486286745e-12
- down_svd_ith_tt_core(ii: int, min_rank: int = None, max_rank: int = None, rtol: float = None, atol: float = None, use_jax: bool = False) t3toolbox.backend.common.typ.Tuple[TuckerTensorTrain, t3toolbox.backend.common.NDArray]#
Compute the SVD of the ith TT core's down unfolding and contract the non-orthogonal factor down into the Tucker core below.
Stacking not supported: the truncated ranks vary based on this T3’s numerical properties.
- Parameters:
ii (int) – Index of the TT core to SVD.
x (TuckerTensorTrain) – The Tucker tensor train. structure=((N0,…,N(d-1)), (n0,…,n(d-1)), (1,r1,…,r(d-1),1))
min_rank (int) – Minimum rank for truncation.
max_rank (int) – Maximum rank for truncation.
rtol (float) – Relative tolerance for truncation.
atol (float) – Absolute tolerance for truncation.
use_jax (bool, defaults to False) – Whether to use Jax for linear algebra. Default: False (use numpy).
- Returns:
new_x (TuckerTensorTrain) – New TuckerTensorTrain representing the same tensor, but with the ith TT core down orthogonal. new_tt_cores[ii].shape = (ri, new_ni, r(i+1)) new_tucker_cores[ii].shape = (new_ni, Ni) einsum('iaj,ibj->ab', new_tt_cores[ii], new_tt_cores[ii]) = identity matrix
ss_x (NDArray) – Singular values of the prior ith TT core's down unfolding. shape=(new_ni,).
See also
truncated_svd, outer_svd_3tensor, up_svd_ith_tucker_core, left_svd_ith_tt_core, right_svd_ith_tt_core, up_svd_ith_tt_core, t3_svd
Examples
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> import t3toolbox.orthogonalization as orth
>>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1))
>>> ind = 1
>>> x2, ss = x.down_svd_ith_tt_core(ind)
>>> print(np.linalg.norm(x.to_dense() - x2.to_dense()))  # Tensor unchanged
4.367311712704942e-12
>>> tucker_cores2, tt_cores2 = x2.data
>>> G = tt_cores2[ind]
>>> print(np.linalg.norm(np.einsum('iaj,ibj->ab', G, G) - np.eye(G.shape[1])))  # TT core is down orthogonal
1.0643458053135608e-15
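The down-unfolding SVD can be sketched in plain NumPy (hypothetical shapes, no truncation; a stand-in for the library's implementation): unfold the core with the n_i axis as rows, SVD, keep the orthonormal Vt as the new TT core, and push U @ diag(s) down into the Tucker core.

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((3, 5, 2))    # TT core, shape (r_i, n_i, r_{i+1})
B = rng.standard_normal((5, 15))      # Tucker core below, shape (n_i, N_i)

# SVD the down unfolding (n_i, r_i*r_{i+1}); keep Vt up top, push U @ diag(s) into the Tucker core.
r, n, rp = G.shape
U, s, Vt = np.linalg.svd(G.transpose(1, 0, 2).reshape(n, r * rp), full_matrices=False)
G_new = Vt.reshape(-1, r, rp).transpose(1, 0, 2)   # down-orthogonal TT core
B_new = (U * s).T @ B                              # absorb diag(s) @ U.T

before = np.einsum('axb,xi->aib', G, B)
after = np.einsum('axb,xi->aib', G_new, B_new)
print(np.linalg.norm(before - after))  # ~0: tensor unchanged
print(np.linalg.norm(np.einsum('iaj,ibj->ab', G_new, G_new) - np.eye(G_new.shape[1])))  # ~0
```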
- orthogonalize_relative_to_ith_tucker_core(ii: int, min_rank: int = None, max_rank: int = None, rtol: float = None, atol: float = None, use_jax: bool = False) TuckerTensorTrain#
Orthogonalize all cores in the TuckerTensorTrain except for the ith Tucker core.
Stacking not supported: the truncated ranks vary based on this T3’s numerical properties.
- Orthogonalization is done relative to the ith Tucker core:
The ith Tucker core is not orthogonalized.
All other Tucker cores are orthogonalized.
TT cores to the left are left orthogonalized.
The TT core directly above is outer orthogonalized.
TT cores to the right are right orthogonalized.
- Parameters:
ii (int) – Index of the Tucker core that is not orthogonalized.
x (TuckerTensorTrain) – The Tucker tensor train. structure=((N0,…,N(d-1)), (n0,…,n(d-1)), (1,r1,…,r(d-1),1))
min_rank (int) – Minimum rank for truncation.
max_rank (int) – Maximum rank for truncation.
rtol (float) – Relative tolerance for truncation.
atol (float) – Absolute tolerance for truncation.
use_jax (bool, defaults to False) – Whether to use Jax for linear algebra. Default: False (use numpy).
- Returns:
new_x – New TuckerTensorTrain representing the same tensor, but orthogonalized relative to the ith Tucker core.
- Return type:
TuckerTensorTrain
See also
up_svd_ith_tucker_core, left_svd_ith_tt_core, right_svd_ith_tt_core, up_svd_ith_tt_core, down_svd_ith_tt_core
Examples
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> import t3toolbox.orthogonalization as orth
>>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1))
>>> x2 = x.orthogonalize_relative_to_ith_tucker_core(1)
>>> print(np.linalg.norm(x.to_dense() - x2.to_dense()))  # Tensor unchanged
8.800032152216517e-13
>>> ((B0, B1, B2), (G0, G1, G2)) = x2.data
>>> X = np.einsum('xi,axb,byc,czd,zk->iyk', B0, G0, G1, G2, B2)  # Contraction of everything except B1
>>> print(np.linalg.norm(np.einsum('iyk,iwk->yw', X, X) - np.eye(B1.shape[0])))  # Complement of B1 is orthogonal
1.7116160385376214e-15
Example where first and last TT-ranks are not 1:
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> import t3toolbox.orthogonalization as orth
>>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (2,3,2,2))
>>> x2 = x.orthogonalize_relative_to_ith_tucker_core(0)
>>> print(np.linalg.norm(x.to_dense() - x2.to_dense()))  # Tensor unchanged
5.152424496985265e-12
>>> ((B0, B1, B2), (G0, G1, G2)) = x2.data
>>> X = np.einsum('yj,zk,axb,byc,czd->axjkd', B1, B2, G0, G1, G2)  # Contraction of everything except B0
>>> print(np.linalg.norm(np.einsum('axjkd,ayjkd->xy', X, X) - np.eye(B0.shape[0])))  # Complement of B0 is orthogonal
2.3594586449868743e-15
- orthogonalize_relative_to_ith_tt_core(ii: int, min_rank: int = None, max_rank: int = None, rtol: float = None, atol: float = None, use_jax: bool = False) TuckerTensorTrain#
Orthogonalize all cores in the TuckerTensorTrain except for the ith TT core.
Stacking not supported: the truncated ranks vary based on this T3’s numerical properties.
- Orthogonalization is done relative to the ith TT core:
All Tucker cores are orthogonalized.
TT cores to the left are left orthogonalized.
The ith TT core is not orthogonalized.
TT cores to the right are right orthogonalized.
- Parameters:
ii (int) – Index of the TT core that is not orthogonalized.
x (TuckerTensorTrain) – The Tucker tensor train. structure=((N0,…,N(d-1)), (n0,…,n(d-1)), (1,r1,…,r(d-1),1))
min_rank (int) – Minimum rank for truncation.
max_rank (int) – Maximum rank for truncation.
rtol (float) – Relative tolerance for truncation.
atol (float) – Absolute tolerance for truncation.
use_jax (bool, defaults to False) – Whether to use Jax for linear algebra. Default: False (use numpy).
See also
up_svd_ith_tucker_core, left_svd_ith_tt_core, right_svd_ith_tt_core, up_svd_ith_tt_core, down_svd_ith_tt_core
- Returns:
new_x – New TuckerTensorTrain representing the same tensor, but orthogonalized relative to the ith TT core.
- Return type:
TuckerTensorTrain
Examples
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> import t3toolbox.orthogonalization as orth
>>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1))
>>> x2 = x.orthogonalize_relative_to_ith_tt_core(1)
>>> print(np.linalg.norm(x.to_dense() - x2.to_dense()))  # Tensor unchanged
8.800032152216517e-13
>>> ((B0, B1, B2), (G0, G1, G2)) = x2.data
>>> XL = np.einsum('axb,xi -> aib', G0, B0)  # Everything to the left of G1
>>> print(np.linalg.norm(np.einsum('aib,aic->bc', XL, XL) - np.eye(G1.shape[0])))  # Left subtree is left orthogonal
9.820411604510197e-16
>>> print(np.linalg.norm(np.einsum('xi,yi->xy', B1, B1) - np.eye(G1.shape[1])))  # Core below G1 is up orthogonal
2.1875310121178e-15
>>> XR = np.einsum('axb,xi->aib', G2, B2)  # Everything to the right of G1
>>> print(np.linalg.norm(np.einsum('aib,cib->ac', XR, XR) - np.eye(G1.shape[2])))  # Right subtree is right orthogonal
1.180550381921849e-15
Example where first and last TT-ranks are not 1:
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> import t3toolbox.orthogonalization as orth
>>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (2,3,2,2))
>>> x2 = x.orthogonalize_relative_to_ith_tt_core(0)
>>> print(np.linalg.norm(x.to_dense() - x2.to_dense()))  # Tensor unchanged
5.4708999671349535e-12
>>> ((B0, B1, B2), (G0, G1, G2)) = x2.data
>>> XR = np.einsum('yi,zj,byc,czd->bijd', B1, B2, G1, G2)  # Everything to the right of G0
>>> print(np.linalg.norm(np.einsum('bijd,cijd->bc', XR, XR) - np.eye(G0.shape[2])))  # Right subtree is right orthogonal
8.816596607002667e-16
- up_orthogonalize_tucker_cores(use_jax: bool = False) TuckerTensorTrain#
Orthogonalize Tucker cores upwards, pushing remainders onto TT cores above.
Examples
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1))
>>> x_orth = x.up_orthogonalize_tucker_cores()
>>> print((x - x_orth).norm())
4.420285752780219e-12
>>> ind = 1
>>> B = x_orth.data[0][ind]
>>> print(np.linalg.norm(B @ B.T - np.eye(B.shape[0])))
1.2059032102772812e-15
Stacked:
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> import t3toolbox.orthogonalization as orth
>>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1), stack_shape=(2,3))
>>> x_orth = x.up_orthogonalize_tucker_cores()
>>> print((x - x_orth).norm())
[[2.27267321e-12 1.92787570e-12 1.60830015e-12]
 [9.54262022e-13 1.45211899e-12 3.27867574e-12]]
>>> ind = 1
>>> B = x_orth.data[0][ind]
>>> BtB = np.einsum('abio,abjo->abij', B, B)
>>> errs = [[np.linalg.norm(BtB[ii,jj] - np.eye(BtB.shape[-1])) for jj in range(3)] for ii in range(2)]
>>> print(np.linalg.norm(errs))
4.118375471407983e-15
- down_orthogonalize_tt_cores(use_jax: bool = False)#
Outer orthogonalize TT cores, pushing remainders downward onto Tucker cores below.
Examples
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1))
>>> x_orth = x.down_orthogonalize_tt_cores()
>>> print((x - x_orth).norm())
1.927414448489825e-12
>>> ind = 1
>>> G = x_orth.data[1][ind]
>>> print(np.linalg.norm(np.einsum('iaj,ibj->ab', G, G) - np.eye(G.shape[1])))
1.9491561709929213e-15
Stacked:
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1), stack_shape=(2,3))
>>> x_orth = x.down_orthogonalize_tt_cores()
>>> print((x - x_orth).norm())
[[1.65714673e-12 1.52503536e-12 2.94647811e-12]
 [1.56839190e-12 2.61963262e-12 8.78269349e-12]]
>>> ind = 1
>>> G = x_orth.data[1][ind]
>>> GdG = np.einsum('xyaib,xyajb->xyij', G, G)
>>> errs = [[np.linalg.norm(GdG[ii,jj] - np.eye(GdG.shape[-1])) for jj in range(3)] for ii in range(2)]
>>> print(np.linalg.norm(errs))
4.0492695830155885e-15
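A NumPy sketch of down-orthogonalizing a single TT core, independent of t3toolbox (shapes are illustrative): flatten the two rank indices of G into rows, QR-factor, keep Q as the new core, and push the triangular remainder down onto the Tucker core below.

```python
import numpy as np

rng = np.random.default_rng(5)
ri, ni, rj, Ni = 2, 4, 3, 10            # illustrative ranks and mode size
G = rng.standard_normal((ri, ni, rj))   # TT core
B = rng.standard_normal((ni, Ni))       # Tucker core below

# Group (r_i, r_{i+1}) into rows and the physical index into columns, then QR.
M = np.transpose(G, (0, 2, 1)).reshape(ri * rj, ni)
Q, R = np.linalg.qr(M)
G_new = np.transpose(Q.reshape(ri, rj, ni), (0, 2, 1))
B_new = R @ B                            # remainder absorbed below

# The represented slice is unchanged, and G_new satisfies the outer
# orthogonality condition checked in the doctest above.
before = np.einsum('axb,xN->aNb', G, B)
after = np.einsum('axb,xN->aNb', G_new, B_new)
print(np.linalg.norm(before - after))
print(np.linalg.norm(np.einsum('iaj,ibj->ab', G_new, G_new) - np.eye(ni)))
```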
- left_orthogonalize_tt_cores(return_variation_cores: bool = False, use_jax: bool = False)#
Left orthogonalize the TT cores, possibly returning variation cores as well.
Examples
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1))
>>> x_orth = x.left_orthogonalize_tt_cores()
>>> print((x - x_orth).norm())
2.9839379127106095e-12
>>> ind = 1
>>> G = x_orth.data[1][ind]
>>> print(np.linalg.norm(np.einsum('iaj,iak->jk', G, G) - np.eye(G.shape[2])))
1.3526950544911367e-16
Stacked:
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1), stack_shape=(2,3))
>>> x_orth = x.left_orthogonalize_tt_cores()
>>> print((x - x_orth).norm())
[[1.46128743e-12 1.25202737e-12 5.60494449e-13]
 [9.77331695e-13 2.50200307e-12 3.07559340e-12]]
>>> ind = 1
>>> G = x_orth.data[1][ind]
>>> print(np.linalg.norm(np.einsum('xyiaj,xyiak->xyjk', G, G) - np.eye(G.shape[-1])))
9.02970295614302e-16
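The left-orthogonality condition tested above can be produced for a single TT core with a plain NumPy QR factorization (a sketch with made-up shapes, independent of t3toolbox): reshape the core to a (r_i * n_i, r_{i+1}) matrix and keep the orthonormal factor.

```python
import numpy as np

rng = np.random.default_rng(1)
ri, ni, rj = 3, 5, 4                     # illustrative TT ranks and mode size
G = rng.standard_normal((ri, ni, rj))

# Left-orthogonalize: QR of the (r_i * n_i, r_{i+1}) unfolding; the R factor
# would be pushed into the next TT core to the right.
Q, R = np.linalg.qr(G.reshape(ri * ni, rj))
G_left = Q.reshape(ri, ni, rj)

# Same left-orthogonality check as in the doctests above.
print(np.linalg.norm(np.einsum('iaj,iak->jk', G_left, G_left) - np.eye(rj)))
```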
- right_orthogonalize_tt_cores(return_variation_cores: bool = False, use_jax: bool = False)#
Right orthogonalize the TT cores, possibly returning variation cores as well.
Examples
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1))
>>> x_orth = x.right_orthogonalize_tt_cores()
>>> print((x - x_orth).norm())
2.9839379127106095e-12
>>> ind = 1
>>> G = x_orth.data[1][ind]
>>> print(np.linalg.norm(np.einsum('iaj,kaj->ik', G, G) - np.eye(G.shape[0])))
1.3526950544911367e-16
Stacked:
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1), stack_shape=(2,3))
>>> x_orth = x.right_orthogonalize_tt_cores()
>>> print((x - x_orth).norm())
[[1.33512640e-12 1.84518324e-12 6.79235325e-13]
 [1.34334400e-12 3.38154895e-12 2.93760867e-12]]
>>> ind = 1
>>> G = x_orth.data[1][ind]
>>> print(np.linalg.norm(np.einsum('xyiaj,xykaj->xyik', G, G) - np.eye(G.shape[2])))
1.3585381944466237e-15
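Right-orthogonalization of a single TT core is the mirror image, sketched here in plain NumPy with made-up shapes (independent of t3toolbox): QR-factor the transpose of the (r_i, n_i * r_{i+1}) unfolding and keep the orthonormal factor as the new core.

```python
import numpy as np

rng = np.random.default_rng(2)
ri, ni, rj = 4, 5, 3                     # illustrative TT ranks and mode size
G = rng.standard_normal((ri, ni, rj))

# Right-orthogonalize: QR of the transposed (r_i, n_i * r_{i+1}) unfolding;
# R^T would be pushed into the TT core to the left.
Q, R = np.linalg.qr(G.reshape(ri, ni * rj).T)
G_right = Q.T.reshape(ri, ni, rj)

# The original unfolding factors as R^T times the new one ...
print(np.linalg.norm(G.reshape(ri, -1) - R.T @ G_right.reshape(ri, -1)))
# ... and G_right satisfies the right-orthogonality check used above.
print(np.linalg.norm(np.einsum('iaj,kaj->ik', G_right, G_right) - np.eye(ri)))
```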
- get_entries(index: t3toolbox.backend.common.NDArray, use_jax: bool = False) t3toolbox.backend.common.NDArray#
Compute an entry (or multiple entries) of a Tucker tensor train.
See Also:#
t3_apply
Examples
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1))
>>> index = [[9,8], [4,10], [7,13]]  # get entries (9,4,7) and (8,10,13)
>>> entries = x.get_entries(index)
>>> x_dense = x.to_dense()
>>> entries2 = np.array([x_dense[9, 4, 7], x_dense[8, 10, 13]])
>>> print(np.linalg.norm(entries - entries2))
1.7763568394002505e-15
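Why entry evaluation is cheap can be seen on a plain tensor train in NumPy (a sketch with made-up shapes and a hypothetical `tt_entry` helper, not the t3toolbox implementation): fixing each physical index reduces every core to a small matrix, and one entry is a chain of matrix products.

```python
import numpy as np

rng = np.random.default_rng(3)
# A plain 3-core tensor train with shape (4, 5, 6) and TT ranks (1, 2, 3, 1).
cores = [rng.standard_normal(s) for s in [(1, 4, 2), (2, 5, 3), (3, 6, 1)]]

def tt_entry(cores, idx):
    """Evaluate one entry by chaining the matrix slices G_k[:, i_k, :]."""
    v = cores[0][:, idx[0], :]
    for G, i in zip(cores[1:], idx[1:]):
        v = v @ G[:, i, :]
    return v[0, 0]

# Reference: contract everything to a dense tensor and index it directly.
dense = np.einsum('aib,bjc,ckd->ijk', *cores)
print(abs(tt_entry(cores, (1, 2, 3)) - dense[1, 2, 3]))
```

In a Tucker tensor train the same idea applies after first contracting each Tucker core with the chosen index of its mode.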
- t3_apply(vecs: t3toolbox.backend.common.typ.Sequence[t3toolbox.backend.common.NDArray], use_jax: bool = False) t3toolbox.backend.common.NDArray#
Contract a Tucker tensor train with vectors in all indices.
See also
t3_get_entries
Examples
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1))
>>> vecs = [np.random.randn(3,14), np.random.randn(3,15), np.random.randn(3,16)]
>>> result = x.t3_apply(vecs)
>>> result2 = np.einsum('ijk,ni,nj,nk->n', x.to_dense(), vecs[0], vecs[1], vecs[2])
>>> print(np.linalg.norm(result - result2))
3.1271953680324864e-12
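The corewise contraction behind this kind of apply can be sketched on a plain tensor train in NumPy (made-up shapes, not the t3toolbox implementation): contract each core with its vector batch first, then chain the resulting small matrices, so the dense tensor is never formed.

```python
import numpy as np

rng = np.random.default_rng(4)
# Plain 3-core TT with shape (4, 5, 6), TT ranks (1, 2, 3, 1).
cores = [rng.standard_normal(s) for s in [(1, 4, 2), (2, 5, 3), (3, 6, 1)]]
vecs = [rng.standard_normal((7, n)) for n in (4, 5, 6)]  # batch of 7 probes

# Corewise: each core becomes a batch of (r_i, r_{i+1}) matrices, then chain.
mats = [np.einsum('aib,ni->nab', G, w) for G, w in zip(cores, vecs)]
out = mats[0]
for M in mats[1:]:
    out = np.einsum('nab,nbc->nac', out, M)
out = out[:, 0, 0]

# Reference: contract against the dense tensor, as in the doctest above.
dense = np.einsum('aib,bjc,ckd->ijk', *cores)
ref = np.einsum('ijk,ni,nj,nk->n', dense, *vecs)
print(np.linalg.norm(out - ref))
```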
- probe(ww: t3toolbox.backend.common.typ.Sequence[t3toolbox.backend.common.NDArray], use_jax: bool = False) t3toolbox.backend.common.typ.Sequence[t3toolbox.backend.common.NDArray]#
Probe a TuckerTensorTrain.
Examples
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> import t3toolbox.backend.probing as probing
>>> x = t3.t3_corewise_randn((10,11,12), (5,6,4), (2,3,4,2))
>>> ww = (np.random.randn(10), np.random.randn(11), np.random.randn(12))
>>> zz = x.probe(ww)
>>> x_dense = x.to_dense()
>>> zz2 = probing.probe_dense(ww, x_dense)
>>> print([np.linalg.norm(z - z2) for z, z2 in zip(zz, zz2)])
[1.0259410400851746e-12, 1.0909087370186656e-12, 3.620283224238675e-13]