t3toolbox.basis_coordinates_format.t3_orthogonal_representations#

t3toolbox.basis_coordinates_format.t3_orthogonal_representations(x: t3toolbox.tucker_tensor_train.TuckerTensorTrain, already_left_orthogonal: bool = False, squash: bool = True, use_jax: bool = False) t3toolbox.backend.common.typ.Tuple[T3Basis, T3Coordinates]#

Construct base-variation representations of a TuckerTensorTrain with an orthogonal base.

Input TuckerTensorTrain:

          1 -- G0 -- G1 -- G2 -- G3 -- 1
X    =         |     |     |     |
               B0    B1    B2    B3
               |     |     |     |

Base-variation representation with non-orthogonal TT variation core H1:

          1 -- L0 -- H1 -- R2 -- R3 -- 1
X    =         |     |     |     |
               U0    U1    U2    U3
               |     |     |     |

Base-variation representation with non-orthogonal Tucker variation core V2:

          1 -- L0 -- L1 -- O2 -- R3 -- 1
X    =         |     |     |     |
               U0    U1    V2    U3
               |     |     |     |

The input tensor train x is defined by:
  • x_tucker_cores = (B0, B1, B2, B3)

  • x_tt_cores = (G0, G1, G2, G3)
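
Contracting these cores yields the dense tensor X. The contraction can be sketched in plain NumPy, assuming (hypothetically, for illustration only) TT cores G_k of shape (s_k, r_k, s_{k+1}) and Tucker cores B_k of shape (n_k, r_k); these names and shapes are illustrative and need not match the library's internal layout:

```python
import numpy as np

# Hypothetical shapes for a 3-mode Tucker tensor train:
# TT cores G_k: (s_k, r_k, s_{k+1}), Tucker cores B_k: (n_k, r_k).
rng = np.random.default_rng(0)
n, r, s = (4, 5, 6), (2, 3, 2), (1, 3, 2, 1)
G = [rng.standard_normal((s[k], r[k], s[k + 1])) for k in range(3)]
B = [rng.standard_normal((n[k], r[k])) for k in range(3)]

# Contract each Tucker core into its TT core, then chain the TT cores.
cores = [np.einsum('iaj,na->inj', G[k], B[k]) for k in range(3)]
X = cores[0]
for C in cores[1:]:
    X = np.einsum('...j,jnk->...nk', X, C)
X = X.reshape(n)  # boundary TT ranks are 1
print(X.shape)  # (4, 5, 6)
```

This is the diagram read left to right: each vertical leg is a Tucker core contracted into its TT core, and the horizontal legs are the TT-rank contractions.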

The “base cores” are:
  • tucker_cores = (U0, U1, U2, U3), up orthogonal

  • left_tt_cores = (L0, L1, L2), left orthogonal

  • right_tt_cores = (R1, R2, R3), right orthogonal

  • outer_tt_cores = (O0, O1, O2, O3), down orthogonal

The “variation cores” are:
  • tucker_variations = (V0, V1, V2, V3)

  • tt_variations = (H0, H1, H2, H3)
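
The left-orthogonal cores are what a standard TT left-orthogonalization sweep produces: reshape each core to a matrix, take a thin QR, keep Q as the new core, and push the triangular factor into the next core. A minimal NumPy sketch of that sweep, independent of t3toolbox and with illustrative shapes (this is not the library's implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
s = (1, 3, 2, 1)          # TT ranks, boundary ranks 1
r = (2, 3, 2)             # middle (Tucker-rank) dimension of each core
cores = [rng.standard_normal((s[k], r[k], s[k + 1])) for k in range(3)]

# Left-orthogonalization sweep.
left = []
carry = np.eye(1)
for k in range(3):
    C = np.einsum('ij,jak->iak', carry, cores[k])  # absorb previous R
    i, a, j = C.shape
    Q, R = np.linalg.qr(C.reshape(i * a, j))       # thin QR
    left.append(Q.reshape(i, a, Q.shape[1]))       # orthogonal part stays
    carry = R                                      # push R to the right

# Each new core satisfies the left-orthogonality condition
# sum_{i,a} L[i,a,j] L[i,a,k] = eye:
L1 = left[1]
err = np.linalg.norm(np.einsum('iaj,iak', L1, L1) - np.eye(L1.shape[-1]))
print(err < 1e-12)  # True
```

The right-orthogonal cores follow from the mirrored sweep (LQ from the right), and the check matches the `R: right orthogonal` einsum in the Examples below.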

Parameters:
  • x (TuckerTensorTrain) – Input TuckerTensorTrain x = (x_tucker_cores, x_tt_cores), where x_tucker_cores = (B0, …, B(d-1)) and x_tt_cores = (G0, …, G(d-1)).

  • already_left_orthogonal (bool) – If True, the TT cores of x are assumed to be left orthogonal already. Default: False

  • squash (bool) – Default: True

  • use_jax (bool) – If True, use the JAX linear-algebra backend instead of NumPy. Default: False

Returns:

  • T3Basis – Orthogonal base for base-variation representations of x.

  • T3Coordinates – Variation cores for base-variation representations of x.

Examples

>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> import t3toolbox.basis_coordinates_format as bcf
>>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (3,3,2,1), stack_shape=(2,3))
>>> base, coords = bcf.t3_orthogonal_representations(x) # Compute orthogonal representations
>>> up_tucker_cores, left_tt_cores, right_tt_cores, outer_tt_cores = base.data
>>> tucker_coords, tt_coords = coords.data
>>> (U0,U1,U2) = up_tucker_cores
>>> (L0,L1,L2) = left_tt_cores
>>> (R0,R1,R2) = right_tt_cores
>>> (O0,O1,O2) = outer_tt_cores
>>> (V0,V1,V2) = tucker_coords
>>> (H0,H1,H2) = tt_coords
>>> x2 = t3.TuckerTensorTrain((U0,U1,U2), (L0,H1,R2)) # representation with TT variation in index 1
>>> print(np.linalg.norm(x.to_dense() - x2.to_dense())) # Still represents original tensor
4.978421562425667e-12
>>> x3 = t3.TuckerTensorTrain((U0,V1,U2), (L0,O1,R2)) # representation with Tucker variation in index 1
>>> print(np.linalg.norm(x.to_dense() - x3.to_dense())) # Still represents original tensor
5.4355175448533146e-12
>>> print(np.linalg.norm(np.einsum('...io,...jo', U1, U1) - np.eye(U1.shape[-2]))) # U: orthogonal
1.1915111872574236e-15
>>> print(np.linalg.norm(np.einsum('...iaj,...iak', L1, L1) - np.eye(L1.shape[-1]))) # L: left orthogonal
9.733823879665448e-16
>>> print(np.linalg.norm(np.einsum('...iaj,...kaj', R1, R1) - np.eye(R1.shape[-3]))) # R: right orthogonal
8.027553546330097e-16
>>> print(np.linalg.norm(np.einsum('...iaj,...ibj', O1, O1) - np.eye(O1.shape[-2]))) # O: outer orthogonal
1.3870474292323159e-15