t3toolbox.tucker_tensor_train.TuckerTensorTrain
===============================================

.. py:class:: t3toolbox.tucker_tensor_train.TuckerTensorTrain

   Tucker tensor train with variable ranks.

   Tensor network diagram for a dth order Tucker tensor train::

           r0        r1        r2             r(d-1)        rd
        1 ------ G0 ------ G1 ------ ... ------ G(d-1) ------ 1
                 |         |                     |
              n0 |      n1 |              n(d-1) |
                 |         |                     |
                 B0        B1                  B(d-1)
                 |         |                     |
              N0 |      N1 |              N(d-1) |
                 |         |                     |

   Attributes
   ----------
   tucker_cores : Tuple[NDArray]
       Tucker cores: (B0, ..., B(d-1)), len=d, elm_shape=VS+(ni, Ni).
   tt_cores : Tuple[NDArray]
       Tensor train cores: (G0, ..., G(d-1)), len=d, elm_shape=VS+(ri, ni, r(i+1)).
   d : int
       Number of indices of the tensor.
   stack_shape : typ.Tuple[int, ...]
       The stack shape, VS. Non-empty if this object stores many different
       Tucker tensor trains with the same structure. Shape of the leading parts
       of tucker_cores[ii].shape and tt_cores[ii].shape.
   shape : typ.Tuple[int, ...]
       Tensor shape: (N0, N1, ..., N(d-1)).
   tucker_ranks : typ.Tuple[int, ...]
       Tucker ranks: (n0, n1, ..., n(d-1)).
   tt_ranks : typ.Tuple[int, ...]
       TT ranks: (r0, r1, ..., rd).
   structure : typ.Tuple[typ.Tuple[int, ...], typ.Tuple[int, ...], typ.Tuple[int, ...]]
       Structure of the Tucker tensor train: (shape, tucker_ranks, tt_ranks).
   data : typ.Tuple[Tuple[NDArray], Tuple[NDArray]]
       The cores defining the Tucker tensor train.
   minimal_ranks : typ.Tuple[typ.Tuple[int, ...], typ.Tuple[int, ...]]
       Tucker and tensor train ranks of the smallest possible Tucker tensor
       train that represents the same tensor. Tucker tensor trains may be made
       to have minimal ranks using T3-SVD.
   has_minimal_ranks : bool
       True if this Tucker tensor train's ranks equal the minimal ranks, False
       otherwise.

   Notes
   -----
   The structure of a Tucker tensor train is defined by:

   - Tensor shape: (N0, N1, ..., N(d-1))
   - Tucker ranks: (n0, n1, ..., n(d-1))
   - TT ranks: (r0, r1, ..., rd)

   Typically, the first and last TT-ranks satisfy r0=rd=1, and "1" in the
   diagram is the number 1.
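The contraction that the diagram encodes can be sketched in plain NumPy, independent of t3toolbox (all shapes below are made-up illustration values): each Tucker core maps an internal index ni to the external index Ni, while the TT cores are chained along the ranks ri.

```python
import numpy as np

# Hypothetical 3rd-order example with r0 = r3 = 1 (shapes chosen for illustration):
# tucker_ranks (n0, n1, n2) = (2, 3, 4), tt_ranks = (1, 2, 3, 1), shape = (5, 6, 7).
B0, B1, B2 = np.ones((2, 5)), np.ones((3, 6)), np.ones((4, 7))           # Tucker cores (ni, Ni)
G0, G1, G2 = np.ones((1, 2, 2)), np.ones((2, 3, 3)), np.ones((3, 4, 1))  # TT cores (ri, ni, r(i+1))

# Contract the whole network; the size-1 tail ranks a and d are summed out.
dense = np.einsum('axb,byc,czd,xi,yj,zk->ijk', G0, G1, G2, B0, B1, B2)
print(dense.shape)  # (5, 6, 7)
```

This is the same contraction pattern that appears in the `to_dense` examples further down.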
However, it is allowed for these ranks to not be 1, in which case the "1"s in the diagram are vectors of ones. Many stacked Tucker tensor trains with the same structure may be stored in this object for vectorization. In this case, - tucker_cores[ii].shape = stack_shape + (ni,Ni) - tt_cores[ii].shape = stack_shape + (ri, ni, r(i+1)) .. rubric:: Examples >>> import numpy as np >>> import t3toolbox.tucker_tensor_train as t3 >>> tucker_cores = (np.ones((4,14)),np.ones((5,15)),np.ones((6,16))) >>> tt_cores = (np.ones((1,4,3)), np.ones((3,5,2)), np.ones((2,6,1))) >>> x = t3.TuckerTensorTrain(tucker_cores, tt_cores) # TuckerTensorTrain, cores filled with ones >>> print(x.d) 3 >>> print(x.shape) (14, 15, 16) >>> print(x.tucker_ranks) (4, 5, 6) >>> print(x.tt_ranks) (1, 3, 2, 1) >>> print(x.uniform_structure) ((14, 15, 16), (4, 5, 6), (1, 3, 2, 1), ()) Example with stacking: >>> import numpy as np >>> import t3toolbox.tucker_tensor_train as t3 >>> tucker_cores = [np.ones((6,7, 4,14)),np.ones((6,7, 5,15)),np.ones((6,7, 6,16))] >>> tt_cores = [np.ones((6,7, 1,4,3)), np.ones((6,7, 3,5,2)), np.ones((6,7, 2,6,1))] >>> x = t3.TuckerTensorTrain(tucker_cores, tt_cores) # TuckerTensorTrain, cores filled with ones >>> print(x.uniform_structure) ((14, 15, 16), (4, 5, 6), (1, 3, 2, 1), (6, 7)) Minimal ranks: >>> import numpy as np >>> import t3toolbox.tucker_tensor_train as t3 >>> x = t3.t3_corewise_randn((13,14,15,16), (4,5,6,7), (1,4,9,7,1)) >>> print(x.has_minimal_ranks) True Using T3-SVD to make an equivalent T3 with minimal ranks: >>> import numpy as np >>> import t3toolbox.tucker_tensor_train as t3 >>> import t3toolbox.t3svd as t3svd >>> x = t3.t3_corewise_randn((13,14,15,16), (4,5,6,7), (1,99,9,7,1)) >>> print(x.has_minimal_ranks) False >>> x2 = t3svd.t3svd(x)[0] >>> print(x2.has_minimal_ranks) True .. py:attribute:: tucker_cores :type: t3toolbox.backend.common.typ.Tuple[t3toolbox.backend.common.NDArray, Ellipsis] ..
py:attribute:: tt_cores :type: t3toolbox.backend.common.typ.Tuple[t3toolbox.backend.common.NDArray, Ellipsis] .. py:method:: data() -> t3toolbox.backend.common.typ.Tuple[t3toolbox.backend.common.typ.Tuple[t3toolbox.backend.common.NDArray, Ellipsis], t3toolbox.backend.common.typ.Tuple[t3toolbox.backend.common.NDArray, Ellipsis]] .. py:method:: d() -> int .. py:method:: is_empty() -> bool .. py:method:: stack_shape() -> t3toolbox.backend.common.typ.Tuple[int, Ellipsis] If this object contains multiple stacked T3s with the same structure, this is the shape of the stack. .. py:method:: shape() -> t3toolbox.backend.common.typ.Tuple[int, Ellipsis] .. py:method:: tucker_ranks() -> t3toolbox.backend.common.typ.Tuple[int, Ellipsis] .. py:method:: tt_ranks() -> t3toolbox.backend.common.typ.Tuple[int, Ellipsis] .. py:method:: structure() -> t3toolbox.backend.common.typ.Tuple[t3toolbox.backend.common.typ.Tuple[int, Ellipsis], t3toolbox.backend.common.typ.Tuple[int, Ellipsis], t3toolbox.backend.common.typ.Tuple[int, Ellipsis], t3toolbox.backend.common.typ.Tuple[int, Ellipsis]] .. py:method:: core_shapes() -> t3toolbox.backend.common.typ.Tuple[t3toolbox.backend.common.typ.Tuple[t3toolbox.backend.common.typ.Tuple[int, Ellipsis], Ellipsis], t3toolbox.backend.common.typ.Tuple[t3toolbox.backend.common.typ.Tuple[int, Ellipsis], Ellipsis]] .. py:method:: size() -> int .. py:method:: minimal_ranks() -> t3toolbox.backend.common.typ.Tuple[t3toolbox.backend.common.typ.Tuple[int, Ellipsis], t3toolbox.backend.common.typ.Tuple[int, Ellipsis]] .. py:method:: has_minimal_ranks() -> bool .. py:method:: validate() Check internal consistency of the Tucker tensor train. .. py:method:: __post_init__() .. py:method:: to_dense(squash_tails: bool = True, use_jax: bool = False) -> t3toolbox.backend.common.NDArray Contract a Tucker tensor train to a dense tensor. :param x: Tucker tensor train which will be contracted to a dense tensor. 
:type x: TuckerTensorTrain :param squash_tails: Whether to contract the leading and trailing 1s with the first and last TT indices. :type squash_tails: bool, defaults to True :param use_jax: Whether to use Jax for linear algebra. Default: False (use numpy). :type use_jax: bool, defaults to False :returns: **dense_x** -- Dense tensor represented by x, which has shape (N0, ..., N(d-1)) if squash_tails=True, or (r0,N0,...,N(d-1),rd) if squash_tails=False. :rtype: NDArray .. rubric:: Examples >>> import numpy as np >>> import t3toolbox.tucker_tensor_train as t3 >>> randn = np.random.randn >>> tucker_cores = (randn(4,14),randn(5,15),randn(6,16)) >>> tt_cores = (randn(2,4,3), randn(3,5,2), randn(2,6,5)) >>> x = t3.TuckerTensorTrain(tucker_cores, tt_cores) >>> x_dense = x.to_dense() # Convert TuckerTensorTrain to dense tensor >>> ((B0,B1,B2), (G0,G1,G2)) = tucker_cores, tt_cores >>> x_dense2 = np.einsum('xi,yj,zk,axb,byc,czd->ijk', B0, B1, B2, G0, G1, G2) >>> print(np.linalg.norm(x_dense - x_dense2) / np.linalg.norm(x_dense)) 7.48952547844518e-16 Example where leading and trailing ones are not contracted >>> import numpy as np >>> import t3toolbox.tucker_tensor_train as t3 >>> randn = np.random.randn >>> tucker_cores = (randn(4,14),randn(5,15),randn(6,16)) >>> tt_cores = (randn(2,4,3), randn(3,5,2), randn(2,6,2)) >>> x = t3.TuckerTensorTrain(tucker_cores, tt_cores) >>> x_dense = x.to_dense(squash_tails=False) # Convert TuckerTensorTrain to dense tensor >>> print(x_dense.shape) (2, 14, 15, 16, 2) >>> ((B0,B1,B2), (G0,G1,G2)) = tucker_cores, tt_cores >>> x_dense2 = np.einsum('xi,yj,zk,axb,byc,czd->aijkd', B0, B1, B2, G0, G1, G2) >>> print(np.linalg.norm(x_dense - x_dense2) / np.linalg.norm(x_dense)) 1.1217675019342066e-15 Example with stacking >>> import numpy as np >>> import t3toolbox.tucker_tensor_train as t3 >>> randn = np.random.randn >>> tucker_cores = (randn(2,3, 4,10), randn(2,3, 5,11), randn(2,3, 6,12)) >>> tt_cores = (randn(2,3, 2,4,3), randn(2,3, 3,5,2), 
randn(2,3, 2,6,5)) >>> x = t3.TuckerTensorTrain(tucker_cores, tt_cores) >>> x_dense = x.to_dense() # Convert TuckerTensorTrain to dense tensor >>> ((B0,B1,B2), (G0,G1,G2)) = tucker_cores, tt_cores >>> x_dense2 = np.einsum('uvxi,uvyj,uvzk,uvaxb,uvbyc,uvczd->uvijk', B0, B1, B2, G0, G1, G2) >>> print(np.linalg.norm(x_dense - x_dense2) / np.linalg.norm(x_dense)) 1.3614138244072514e-15 .. py:method:: squash_tails(use_jax: bool = False) -> TuckerTensorTrain Make the leading and trailing TT ranks equal to 1 (r0=rd=1), without changing the tensor being represented. :param x: Tucker tensor train with tt_ranks=(r0,r1,...,r(d-1),rd). :type x: TuckerTensorTrain :param use_jax: Whether to use Jax for linear algebra. Default: False (use numpy). :type use_jax: bool, defaults to False :returns: **squashed_x** -- Tucker tensor train with tt_ranks=(1,r1,...,r(d-1),1). :rtype: TuckerTensorTrain .. seealso:: :py:obj:`TuckerTensorTrain`, :py:obj:`T3Structure` .. rubric:: Examples >>> import numpy as np >>> import t3toolbox.tucker_tensor_train as t3 >>> randn = np.random.randn >>> tucker_cores = (randn(2,3, 4,10), randn(2,3, 5,11), randn(2,3, 6,12)) >>> tt_cores = (randn(2,3, 2,4,3), randn(2,3, 3,5,2), randn(2,3, 2,6,5)) >>> x = t3.TuckerTensorTrain(tucker_cores, tt_cores) >>> print(x.tt_ranks) (2, 3, 2, 5) >>> x2 = x.squash_tails() >>> print(x2.tt_ranks) (1, 3, 2, 1) >>> print(np.linalg.norm(x.to_dense() - x2.to_dense())) 5.805155892491438e-12 .. py:method:: reverse() -> TuckerTensorTrain Reverse the index order of a Tucker tensor train. :param x: Tucker tensor train with: shape=(N0, ..., N(d-1)), tucker_ranks=(n0,...,n(d-1)), tt_ranks=(1,r1,...,r(d-1),1). :type x: TuckerTensorTrain :returns: **reversed_x** -- Tucker tensor train with index order reversed. shape=(N(d-1), ..., N0), tucker_ranks=(n(d-1),...,n0), tt_ranks=(1,r(d-1),...,r1,1). :rtype: TuckerTensorTrain ..
rubric:: Examples >>> import numpy as np >>> import t3toolbox.tucker_tensor_train as t3 >>> randn = np.random.randn >>> tucker_cores = (randn(2,3, 4,10), randn(2,3, 5,11), randn(2,3, 6,12)) >>> tt_cores = (randn(2,3, 1,4,2), randn(2,3, 2,5,3), randn(2,3, 3,6,4)) >>> x = t3.TuckerTensorTrain(tucker_cores, tt_cores) >>> print(x.uniform_structure) ((10, 11, 12), (4, 5, 6), (1, 2, 3, 4), (2, 3)) >>> reversed_x = x.reverse() >>> print(reversed_x.uniform_structure) ((12, 11, 10), (6, 5, 4), (4, 3, 2, 1), (2, 3)) >>> x_dense = x.to_dense() >>> reversed_x_dense = reversed_x.to_dense() >>> x_dense2 = reversed_x_dense.transpose([0,1, 4,3,2]) >>> print(np.linalg.norm(x_dense - x_dense2)) 1.859018050214056e-13 .. py:method:: change_structure(new_shape: t3toolbox.backend.common.typ.Sequence[int], new_tucker_ranks: t3toolbox.backend.common.typ.Sequence[int], new_tt_ranks: t3toolbox.backend.common.typ.Sequence[int], use_jax: bool = False) -> TuckerTensorTrain Increase Tucker tensor train ranks and/or shape via zero padding. .. rubric:: Examples >>> import numpy as np >>> import t3toolbox.tucker_tensor_train as t3 >>> x = t3.t3_corewise_randn((14,15,16), (4,6,5), (1,3,2,1)) >>> padded_x = x.change_structure((17,18,17), (8,8,8), (1,5,6,1)) >>> print(padded_x.uniform_structure) ((17, 18, 17), (8, 8, 8), (1, 5, 6, 1), ()) Example where the first and last TT-ranks are not equal to 1: >>> import numpy as np >>> import t3toolbox.tucker_tensor_train as t3 >>> x = t3.t3_corewise_randn((14,15,16), (4,6,5), (3,3,2,4)) >>> padded_x = x.change_structure((17,18,17), (8,8,8), (5,5,6,7)) >>> print(padded_x.uniform_structure) ((17, 18, 17), (8, 8, 8), (5, 5, 6, 7), ()) .. py:method:: sum_stack(use_jax: bool = False) -> TuckerTensorTrain If this object contains multiple stacked T3s, this sums them. ..
rubric:: Examples >>> import numpy as np >>> import t3toolbox.tucker_tensor_train as t3 >>> import t3toolbox.corewise as cw >>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1), stack_shape=(2,3)) >>> x_sum = x.sum_stack() >>> tucker_sum = tuple([np.sum(B, axis=(0,1)) for B in x.tucker_cores]) >>> tt_sum = tuple([np.sum(G, axis=(0,1)) for G in x.tt_cores]) >>> x_sum2 = t3.TuckerTensorTrain(tucker_sum, tt_sum) >>> print(cw.corewise_norm(cw.corewise_sub(x_sum.data, x_sum2.data))) 0.0 .. py:method:: unstack() If this object contains multiple stacked T3s, this unstacks them into an array-like structure of nested tuples with the same "shape" as self.stack_shape. .. rubric:: Examples >>> import numpy as np >>> import t3toolbox.tucker_tensor_train as t3 >>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1), stack_shape=(3,5)) >>> unstacked_x = x.unstack() >>> print([len(s) for s in unstacked_x]) [5, 5, 5] >>> tucker13 = tuple([B[1,3] for B in x.tucker_cores]) >>> tt13 = tuple([G[1,3] for G in x.tt_cores]) >>> x13 = t3.TuckerTensorTrain(tucker13, tt13) >>> print((x13 - unstacked_x[1][3]).norm()) 0.0 .. py:method:: __add__(other: TuckerTensorTrain, squash: bool = True, use_jax: bool = False) -> TuckerTensorTrain Add Tucker tensor trains x and y, yielding a Tucker tensor train x+y with summed ranks. dunder version of :py:meth:`TuckerTensorTrain.add`. .. 
rubric:: Examples >>> import numpy as np >>> import t3toolbox.tucker_tensor_train as t3 >>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1)) >>> y = t3.t3_corewise_randn((14,15,16), (3,7,2), (1,5,6,1)) >>> z = x + y >>> print(z.uniform_structure) ((14, 15, 16), (7, 12, 8), (1, 8, 8, 1), ()) >>> print(np.linalg.norm(x.to_dense() + y.to_dense() - z.to_dense())) 6.524094086845177e-13 T3 + dense: >>> import numpy as np >>> import t3toolbox.tucker_tensor_train as t3 >>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1)) >>> y = np.random.randn(14,15,16) >>> z = x + y >>> print(type(z)) >>> print(np.linalg.norm(x.to_dense() + y - z)) 0.0 .. py:method:: __mul__(s) -> TuckerTensorTrain Multiply a Tucker tensor train by a scaling factor. Scaling is defined with respect to the dense N0 x ... x N(d-1) tensor that is *represented* by the Tucker tensor train, even though this dense tensor is not formed during computations. For corewise scaling, see :func:`t3toolbox.corewise.corewise_scale` :param x: Tucker tensor train :type x: TuckerTensorTrain :param s: scaling factor :type s: scalar :returns: Scaled TuckerTensorTrain s*x, with the same structure as x. :rtype: TuckerTensorTrain :raises ValueError: - Error raised if the TuckerTensorTrain is internally inconsistent .. seealso:: :py:obj:`TuckerTensorTrain`, :py:obj:`t3_add`, :py:obj:`t3_neg`, :py:obj:`t3_sub`, :func:`~t3toolbox.corewise.corewise_scale` .. rubric:: Examples >>> import numpy as np >>> import t3toolbox.tucker_tensor_train as t3 >>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1)) >>> s = 3.2 >>> sx = x * s >>> print(np.linalg.norm(s*x.to_dense() - sx.to_dense())) 1.6268482531988893e-13 .. py:method:: __neg__() -> TuckerTensorTrain Scale a Tucker tensor train by -1. ..
rubric:: Examples >>> import numpy as np >>> import t3toolbox.tucker_tensor_train as t3 >>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1)) >>> neg_x = -x >>> print(np.linalg.norm(x.to_dense() + neg_x.to_dense())) 0.0 .. py:method:: __sub__(other: TuckerTensorTrain, squash: bool = True, use_jax: bool = False) -> TuckerTensorTrain Subtract Tucker tensor trains, x - y, yielding a Tucker tensor train with summed ranks. Subtraction is defined with respect to the dense N0 x ... x N(d-1) tensors that are *represented* by the Tucker tensor trains, even though these dense tensors are not formed during computations. For corewise subtraction, see :func:`t3toolbox.corewise.corewise_sub` :param x: Minuend. structure=((N0,...,N(d-1)), (n0,...,n(d-1)), (r0, r1,...,rd)) :type x: TuckerTensorTrain :param y: Subtrahend. structure=((N0,...,N(d-1)), (m0,...,m(d-1)), (q0, q1,...,qd)) :type y: TuckerTensorTrain :param squash: Squash the first and last TT cores so that r0=rd=1 in the result. Default: True. :type squash: bool :param xnp: Linear algebra backend. Default: np (numpy) :returns: Difference of Tucker tensor trains, x-y. - shape=(N0,...,N(d-1)) - tucker_ranks=(n0+m0,...,n(d-1)+m(d-1)) - tt_ranks=(1, r1+q1,...,r(d-1)+q(d-1), 1) if squash=True, or (r0+q0, r1+q1,...,r(d-1)+q(d-1), rd+qd) if squash=False. :rtype: TuckerTensorTrain :raises ValueError: - Error raised if either of the TuckerTensorTrains are internally inconsistent - Error raised if the TuckerTensorTrains have different shapes. .. seealso:: :py:obj:`TuckerTensorTrain`, :py:obj:`t3_shape`, :py:obj:`t3_add`, :py:obj:`t3_scale`, :py:obj:`t3_neg`, :func:`~t3toolbox.corewise.corewise_neg` ..
rubric:: Examples >>> import numpy as np >>> import t3toolbox.tucker_tensor_train as t3 >>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1)) >>> y = t3.t3_corewise_randn((14,15,16), (3,7,2), (1,5,6,1)) >>> x_minus_y = x - y >>> print(x_minus_y.uniform_structure) ((14, 15, 16), (7, 12, 8), (2, 8, 8, 2), ()) >>> print(np.linalg.norm(x.to_dense() - y.to_dense() - x_minus_y.to_dense())) 3.5875705233607603e-13 .. py:method:: norm(use_orthogonalization: bool = True, use_jax: bool = False) Compute the Hilbert-Schmidt (Frobenius) norm of a Tucker tensor train. The Hilbert-Schmidt norm is defined with respect to the dense N0 x ... x N(d-1) tensor that is *represented* by the Tucker tensor train, even though this dense tensor is not formed during computations. For the corewise norm, see :func:`t3toolbox.corewise.corewise_norm` :param x: The Tucker tensor train. shape=(N0,...,N(d-1)) :type x: TuckerTensorTrain :param xnp: Linear algebra backend. Default: np (numpy) :returns: Hilbert-Schmidt (Frobenius) norm of the Tucker tensor train, ||x||_HS :rtype: scalar :raises ValueError: - Error raised if the TuckerTensorTrain is internally inconsistent .. seealso:: :py:obj:`TuckerTensorTrain`, :py:obj:`t3_dot_t3`, :func:`t3toolbox.corewise.corewise_norm` .. rubric:: Examples >>> import numpy as np >>> import t3toolbox.tucker_tensor_train as t3 >>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (2,3,2,2)) >>> print(x.norm() - np.linalg.norm(x.to_dense())) 9.094947017729282e-13 Stacked: >>> import numpy as np >>> import t3toolbox.tucker_tensor_train as t3 >>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (2,3,2,2), stack_shape=(2,3)) >>> norms_x = x.norm(use_orthogonalization=True) >>> x_dense = x.to_dense() >>> norms_x_dense = np.sqrt(np.sum(x_dense**2, axis=(-3,-2,-1))) >>> print(norms_x - norms_x_dense) [[-1.36424205e-12 -2.50111043e-12 1.36424205e-12] [ 1.59161573e-12 4.09272616e-12 2.72848411e-12]] ..
py:method:: up_svd_ith_tucker_core(ii: int, min_rank: int = None, max_rank: int = None, rtol: float = None, atol: float = None, use_jax: bool = False) -> t3toolbox.backend.common.typ.Tuple[TuckerTensorTrain, t3toolbox.backend.common.NDArray] Compute the SVD of the ith Tucker core and contract the non-orthogonal factor up into the TT-core above. Stacking not supported: the truncated ranks vary based on this T3's numerical properties. :param ii: index of the Tucker core to SVD :type ii: int :param x: The Tucker tensor train. structure=((N1,...,Nd), (n1,...,nd), (r0,r1,...r(d-1),rd)) :type x: TuckerTensorTrain :param min_rank: Minimum rank for truncation. :type min_rank: int :param max_rank: Maximum rank for truncation. :type max_rank: int :param rtol: Relative tolerance for truncation. :type rtol: float :param atol: Absolute tolerance for truncation. :type atol: float :param xnp: Linear algebra backend. Default: np (numpy) :returns: * **new_x** (*TuckerTensorTrain*) -- New TuckerTensorTrain representing the same tensor, but with the ith Tucker core orthogonal. new_tt_cores[ii].shape = (ri, new_ni, r(i+1)) new_tucker_cores[ii].shape = (new_ni, Ni) new_tucker_cores[ii] @ new_tucker_cores[ii].T = identity matrix * **ss_x** (*NDArray*) -- Singular values of the prior ith Tucker core. shape=(new_ni,). .. seealso:: :py:obj:`truncated_svd`, :py:obj:`left_svd_ith_tt_core`, :py:obj:`right_svd_ith_tt_core`, :py:obj:`up_svd_ith_tt_core`, :py:obj:`down_svd_ith_tt_core`, :py:obj:`t3_svd` .. rubric:: Examples >>> import numpy as np >>> import t3toolbox.tucker_tensor_train as t3 >>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1)) >>> ind = 1 >>> x2, ss = x.up_svd_ith_tucker_core(ind) >>> print(np.linalg.norm(x.to_dense() - x2.to_dense())) # Tensor unchanged 5.772851635866132e-13 >>> tucker_cores2, tt_cores2 = x2.data >>> rank = len(ss) >>> B = tucker_cores2[ind] >>> print(np.linalg.norm(B @ B.T - np.eye(rank))) # Tucker core is orthogonal 8.456498415401757e-16 ..
py:method:: left_svd_ith_tt_core(ii: int, min_rank: int = None, max_rank: int = None, rtol: float = None, atol: float = None, use_jax: bool = False) -> t3toolbox.backend.common.typ.Tuple[TuckerTensorTrain, t3toolbox.backend.common.NDArray] Compute the SVD of the ith TT-core's left unfolding and contract the non-orthogonal factor into the TT-core to the right. Stacking not supported: the truncated ranks vary based on this T3's numerical properties. :param ii: index of the TT-core to SVD :type ii: int :param x: The Tucker tensor train. structure=((N1,...,Nd), (n1,...,nd), (1,r1,...r(d-1),1)) :type x: TuckerTensorTrain :param min_rank: Minimum rank for truncation. :type min_rank: int :param max_rank: Maximum rank for truncation. :type max_rank: int :param rtol: Relative tolerance for truncation. :type rtol: float :param atol: Absolute tolerance for truncation. :type atol: float :param xnp: Linear algebra backend. Default: np (numpy) :returns: * **new_x** (*TuckerTensorTrain*) -- New TuckerTensorTrain representing the same tensor, but with the ith TT-core left-orthogonal. new_tt_cores[ii].shape = (ri, ni, new_r(i+1)) new_tt_cores[ii+1].shape = (new_r(i+1), n(i+1), r(i+2)) einsum('iaj,iak->jk', new_tt_cores[ii], new_tt_cores[ii]) = identity matrix * **ss_x** (*NDArray*) -- Singular values of the prior ith TT-core left unfolding. shape=(new_r(i+1),). .. seealso:: :py:obj:`truncated_svd`, :py:obj:`left_svd_3tensor`, :py:obj:`up_svd_ith_tucker_core`, :py:obj:`right_svd_ith_tt_core`, :py:obj:`up_svd_ith_tt_core`, :py:obj:`down_svd_ith_tt_core`, :py:obj:`t3_svd` ..
rubric:: Examples >>> import numpy as np >>> import t3toolbox.tucker_tensor_train as t3 >>> import t3toolbox.orthogonalization as orth >>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1)) >>> ind = 1 >>> x2, ss = x.left_svd_ith_tt_core(ind) >>> print(np.linalg.norm(x.to_dense() - x2.to_dense())) # Tensor unchanged 5.186463661974644e-13 >>> tucker_cores2, tt_cores2 = x2.data >>> G = tt_cores2[ind] >>> print(np.linalg.norm(np.einsum('iaj,iak->jk', G, G) - np.eye(G.shape[2]))) # TT-core is left-orthogonal 4.453244025338311e-16 .. py:method:: right_svd_ith_tt_core(ii: int, min_rank: int = None, max_rank: int = None, rtol: float = None, atol: float = None, use_jax: bool = False) -> t3toolbox.backend.common.typ.Tuple[TuckerTensorTrain, t3toolbox.backend.common.NDArray] Compute the SVD of the ith TT-core's right unfolding and contract the non-orthogonal factor into the TT-core to the left. Stacking not supported: the truncated ranks vary based on this T3's numerical properties. :param ii: index of the TT-core to SVD :type ii: int :param x: The Tucker tensor train. structure=((N1,...,Nd), (n1,...,nd), (1,r1,...r(d-1),1)) :type x: TuckerTensorTrain :param min_rank: Minimum rank for truncation. :type min_rank: int :param max_rank: Maximum rank for truncation. :type max_rank: int :param rtol: Relative tolerance for truncation. :type rtol: float :param atol: Absolute tolerance for truncation. :type atol: float :param xnp: Linear algebra backend. Default: np (numpy) :returns: * **new_x** (*TuckerTensorTrain*) -- New TuckerTensorTrain representing the same tensor, but with the ith TT-core right-orthogonal. new_tt_cores[ii].shape = (new_ri, ni, r(i+1)) new_tt_cores[ii-1].shape = (r(i-1), n(i-1), new_ri) einsum('iaj,kaj->ik', new_tt_cores[ii], new_tt_cores[ii]) = identity matrix * **ss_x** (*NDArray*) -- Singular values of the prior ith TT-core right unfolding. shape=(new_ri,). ..
seealso:: :py:obj:`truncated_svd`, :py:obj:`left_svd_3tensor`, :py:obj:`up_svd_ith_tucker_core`, :py:obj:`left_svd_ith_tt_core`, :py:obj:`up_svd_ith_tt_core`, :py:obj:`down_svd_ith_tt_core`, :py:obj:`t3_svd` .. rubric:: Examples >>> import numpy as np >>> import t3toolbox.tucker_tensor_train as t3 >>> import t3toolbox.orthogonalization as orth >>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1)) >>> ind = 1 >>> x2, ss = x.right_svd_ith_tt_core(ind) >>> print(np.linalg.norm(x.to_dense() - x2.to_dense())) # Tensor unchanged 5.304678679078675e-13 >>> tucker_cores2, tt_cores2 = x2.data >>> G = tt_cores2[ind] >>> print(np.linalg.norm(np.einsum('iaj,kaj->ik', G, G) - np.eye(G.shape[0]))) # TT-core is right orthogonal 4.207841813173725e-16 .. py:method:: up_svd_ith_tt_core(ii: int, min_rank: int = None, max_rank: int = None, rtol: float = None, atol: float = None, use_jax: bool = False) -> t3toolbox.backend.common.typ.Tuple[TuckerTensorTrain, t3toolbox.backend.common.NDArray] Compute the SVD of the ith TT-core's down unfolding and keep the non-orthogonal factor with this core. Stacking not supported: the truncated ranks vary based on this T3's numerical properties. :param ii: index of the TT-core to SVD :type ii: int :param x: The Tucker tensor train. structure=((N1,...,Nd), (n1,...,nd), (1,r1,...r(d-1),1)) :type x: TuckerTensorTrain :param min_rank: Minimum rank for truncation. :type min_rank: int :param max_rank: Maximum rank for truncation. :type max_rank: int :param rtol: Relative tolerance for truncation. :type rtol: float :param atol: Absolute tolerance for truncation. :type atol: float :param xnp: Linear algebra backend. Default: np (numpy) :returns: * **new_x** (*TuckerTensorTrain*) -- New TuckerTensorTrain representing the same tensor. new_tt_cores[ii].shape = (ri, new_ni, r(i+1)) new_tucker_cores[ii].shape = (new_ni, Ni) * **ss_x** (*NDArray*) -- Singular values of the prior ith TT-core down unfolding. shape=(new_ni,). ..
seealso:: :py:obj:`truncated_svd`, :py:obj:`outer_svd_3tensor`, :py:obj:`up_svd_ith_tucker_core`, :py:obj:`left_svd_ith_tt_core`, :py:obj:`right_svd_ith_tt_core`, :py:obj:`down_svd_ith_tt_core`, :py:obj:`t3_svd` .. rubric:: Examples >>> import numpy as np >>> import t3toolbox.tucker_tensor_train as t3 >>> import t3toolbox.orthogonalization as orth >>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1)) >>> x2, ss = x.up_svd_ith_tt_core(1) >>> print(np.linalg.norm(x.to_dense() - x2.to_dense())) # Tensor unchanged 1.002901486286745e-12 .. py:method:: down_svd_ith_tt_core(ii: int, min_rank: int = None, max_rank: int = None, rtol: float = None, atol: float = None, use_jax: bool = False) -> t3toolbox.backend.common.typ.Tuple[TuckerTensorTrain, t3toolbox.backend.common.NDArray] Compute the SVD of the ith TT-core's down unfolding and contract the non-orthogonal factor down into the Tucker core below. Stacking not supported: the truncated ranks vary based on this T3's numerical properties. :param ii: index of the TT-core to SVD :type ii: int :param x: The Tucker tensor train. structure=((N1,...,Nd), (n1,...,nd), (1,r1,...r(d-1),1)) :type x: TuckerTensorTrain :param min_rank: Minimum rank for truncation. :type min_rank: int :param max_rank: Maximum rank for truncation. :type max_rank: int :param rtol: Relative tolerance for truncation. :type rtol: float :param atol: Absolute tolerance for truncation. :type atol: float :param xnp: Linear algebra backend. Default: np (numpy) :returns: * **new_x** (*TuckerTensorTrain*) -- New TuckerTensorTrain representing the same tensor, but with the ith TT-core down orthogonal. new_tt_cores[ii].shape = (ri, new_ni, r(i+1)) new_tucker_cores[ii].shape = (new_ni, Ni) einsum('iaj,ibj->ab', new_tt_cores[ii], new_tt_cores[ii]) = identity matrix * **ss_x** (*NDArray*) -- Singular values of the prior ith TT-core down unfolding. shape=(new_ni,). ..
seealso:: :py:obj:`truncated_svd`, :py:obj:`outer_svd_3tensor`, :py:obj:`up_svd_ith_tucker_core`, :py:obj:`left_svd_ith_tt_core`, :py:obj:`right_svd_ith_tt_core`, :py:obj:`up_svd_ith_tt_core`, :py:obj:`t3_svd` .. rubric:: Examples >>> import numpy as np >>> import t3toolbox.tucker_tensor_train as t3 >>> import t3toolbox.orthogonalization as orth >>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1)) >>> ind = 1 >>> x2, ss = x.down_svd_ith_tt_core(ind) >>> print(np.linalg.norm(x.to_dense() - x2.to_dense())) # Tensor unchanged 4.367311712704942e-12 >>> tucker_cores2, tt_cores2 = x2.data >>> G = tt_cores2[ind] >>> print(np.linalg.norm(np.einsum('iaj,ibj->ab', G, G) - np.eye(G.shape[1]))) # TT-core is down orthogonal 1.0643458053135608e-15 .. py:method:: orthogonalize_relative_to_ith_tucker_core(ii: int, min_rank: int = None, max_rank: int = None, rtol: float = None, atol: float = None, use_jax: bool = False) -> TuckerTensorTrain Orthogonalize all cores in the TuckerTensorTrain except for the ith Tucker core. Stacking not supported: the truncated ranks vary based on this T3's numerical properties. Orthogonalization is done relative to the ith Tucker core: - The ith Tucker core is not orthogonalized. - All other Tucker cores are orthogonalized. - TT-cores to the left are left orthogonalized. - The TT-core directly above is outer orthogonalized. - TT-cores to the right are right orthogonalized. :param ii: index of the Tucker core that is not orthogonalized :type ii: int :param x: The Tucker tensor train. structure=((N1,...,Nd), (n1,...,nd), (1,r1,...r(d-1),1)) :type x: TuckerTensorTrain :param min_rank: Minimum rank for truncation. :type min_rank: int :param max_rank: Maximum rank for truncation. :type max_rank: int :param rtol: Relative tolerance for truncation. :type rtol: float :param atol: Absolute tolerance for truncation. :type atol: float :param xnp: Linear algebra backend.
Default: np (numpy) :returns: **new_x** -- New TuckerTensorTrain representing the same tensor, but orthogonalized relative to the ith Tucker core. :rtype: TuckerTensorTrain .. seealso:: :py:obj:`up_svd_ith_tucker_core`, :py:obj:`left_svd_ith_tt_core`, :py:obj:`right_svd_ith_tt_core`, :py:obj:`up_svd_ith_tt_core`, :py:obj:`down_svd_ith_tt_core` .. rubric:: Examples >>> import numpy as np >>> import t3toolbox.tucker_tensor_train as t3 >>> import t3toolbox.orthogonalization as orth >>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1)) >>> x2 = x.orthogonalize_relative_to_ith_tucker_core(1) >>> print(np.linalg.norm(x.to_dense() - x2.to_dense())) # Tensor unchanged 8.800032152216517e-13 >>> ((B0, B1, B2), (G0, G1, G2)) = x2.data >>> X = np.einsum('xi,axb,byc,czd,zk->iyk', B0, G0, G1, G2, B2) # Contraction of everything except B1 >>> print(np.linalg.norm(np.einsum('iyk,iwk->yw', X, X) - np.eye(B1.shape[0]))) # Complement of B1 is orthogonal 1.7116160385376214e-15 Example where first and last TT-ranks are not 1: >>> import numpy as np >>> import t3toolbox.tucker_tensor_train as t3 >>> import t3toolbox.orthogonalization as orth >>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (2,3,2,2)) >>> x2 = x.orthogonalize_relative_to_ith_tucker_core(0) >>> print(np.linalg.norm(x.to_dense() - x2.to_dense())) # Tensor unchanged 5.152424496985265e-12 >>> ((B0, B1, B2), (G0, G1, G2)) = x2.data >>> X = np.einsum('yj,zk,axb,byc,czd->axjkd', B1, B2, G0, G1, G2) # Contraction of everything except B0 >>> print(np.linalg.norm(np.einsum('axjkd,ayjkd->xy', X, X) - np.eye(B0.shape[0]))) # Complement of B0 is orthogonal 2.3594586449868743e-15 .. py:method:: orthogonalize_relative_to_ith_tt_core(ii: int, min_rank: int = None, max_rank: int = None, rtol: float = None, atol: float = None, use_jax: bool = False) -> TuckerTensorTrain Orthogonalize all cores in the TuckerTensorTrain except for the ith TT-core.
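The orthogonality conditions these methods enforce can be sketched in standalone NumPy (illustrative only, not the t3toolbox implementation; this sketch uses QR where the methods above use a truncated SVD): a TT-core is "left orthogonalized" when its left unfolding has orthonormal columns.

```python
import numpy as np

rng = np.random.default_rng(0)
r0, n, r1 = 2, 5, 3
G = rng.standard_normal((r0, n, r1))   # a TT-core with shape (ri, ni, r(i+1))

# Left unfolding: merge (ri, ni) into rows, keep r(i+1) as columns, then QR.
Q, R = np.linalg.qr(G.reshape(r0 * n, r1))
G_left = Q.reshape(r0, n, r1)          # left-orthogonal replacement core
# In a real sweep, R would be contracted into the core to the right,
# leaving the represented tensor unchanged.

# The identity that left-orthogonality guarantees (cf. left_svd_ith_tt_core):
I = np.einsum('iaj,iak->jk', G_left, G_left)
print(np.allclose(I, np.eye(r1)))  # True
```

The right- and down-orthogonal cases are analogous, with the unfolding taken over the other index groupings.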
Stacking not supported: the truncated ranks vary based on this T3's numerical properties. Orthogonalization is done relative to the ith TT-core: - All Tucker cores are orthogonalized. - TT-cores to the left are left orthogonalized. - The ith TT-core is not orthogonalized. - TT-cores to the right are right orthogonalized. :param ii: index of the TT-core that is not orthogonalized :type ii: int :param x: The Tucker tensor train. structure=((N1,...,Nd), (n1,...,nd), (1,r1,...r(d-1),1)) :type x: TuckerTensorTrain :param min_rank: Minimum rank for truncation. :type min_rank: int :param max_rank: Maximum rank for truncation. :type max_rank: int :param rtol: Relative tolerance for truncation. :type rtol: float :param atol: Absolute tolerance for truncation. :type atol: float :param xnp: Linear algebra backend. Default: np (numpy) .. seealso:: :py:obj:`up_svd_ith_tucker_core`, :py:obj:`left_svd_ith_tt_core`, :py:obj:`right_svd_ith_tt_core`, :py:obj:`up_svd_ith_tt_core`, :py:obj:`down_svd_ith_tt_core` :returns: **new_x** -- New TuckerTensorTrain representing the same tensor, but orthogonalized relative to the ith TT-core. :rtype: TuckerTensorTrain ..
rubric:: Examples >>> import numpy as np >>> import t3toolbox.tucker_tensor_train as t3 >>> import t3toolbox.orthogonalization as orth >>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1)) >>> x2 = x.orthogonalize_relative_to_ith_tt_core(1) >>> print(np.linalg.norm(x.to_dense() - x2.to_dense())) # Tensor unchanged 8.800032152216517e-13 >>> ((B0, B1, B2), (G0, G1, G2)) = x2.data >>> XL = np.einsum('axb,xi -> aib', G0, B0) # Everything to the left of G1 >>> print(np.linalg.norm(np.einsum('aib,aic->bc', XL, XL) - np.eye(G1.shape[0]))) # Left subtree is left orthogonal 9.820411604510197e-16 >>> print(np.linalg.norm(np.einsum('xi,yi->xy', B1, B1) - np.eye(G1.shape[1]))) # Core below G1 is up orthogonal 2.1875310121178e-15 >>> XR = np.einsum('axb,xi->aib', G2, B2) # Everything to the right of G1 >>> print(np.linalg.norm(np.einsum('aib,cib->ac', XR, XR) - np.eye(G1.shape[2]))) # Right subtree is right orthogonal 1.180550381921849e-15 Example where first and last TT-ranks are not 1: >>> import numpy as np >>> import t3toolbox.tucker_tensor_train as t3 >>> import t3toolbox.orthogonalization as orth >>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (2,3,2,2)) >>> x2 = x.orthogonalize_relative_to_ith_tt_core(0) >>> print(np.linalg.norm(x.to_dense() - x2.to_dense())) # Tensor unchanged 5.4708999671349535e-12 >>> ((B0, B1, B2), (G0, G1, G2)) = x2.data >>> XR = np.einsum('yi,zj,byc,czd->bijd', B1, B2, G1, G2) # Everything to the right of G0 >>> print(np.linalg.norm(np.einsum('bijd,cijd->bc', XR, XR) - np.eye(G0.shape[2]))) # Right subtree is right orthogonal 8.816596607002667e-16 .. py:method:: up_orthogonalize_tucker_cores(use_jax: bool = False) -> TuckerTensorTrain Orthogonalize Tucker cores upwards, pushing remainders onto TT cores above. .. 
rubric:: Examples >>> import numpy as np >>> import t3toolbox.tucker_tensor_train as t3 >>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1)) >>> x_orth = x.up_orthogonalize_tucker_cores() >>> print((x - x_orth).norm()) 4.420285752780219e-12 >>> ind = 1 >>> B = x_orth.data[0][ind] >>> print(np.linalg.norm(B @ B.T - np.eye(B.shape[0]))) 1.2059032102772812e-15 Stacked: >>> import numpy as np >>> import t3toolbox.tucker_tensor_train as t3 >>> import t3toolbox.orthogonalization as orth >>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1), stack_shape=(2,3)) >>> x_orth = x.up_orthogonalize_tucker_cores() >>> print((x - x_orth).norm()) [[2.27267321e-12 1.92787570e-12 1.60830015e-12] [9.54262022e-13 1.45211899e-12 3.27867574e-12]] >>> ind = 1 >>> B = x_orth.data[0][ind] >>> BtB = np.einsum('abio,abjo->abij',B,B) >>> errs = [[np.linalg.norm(BtB[ii,jj] - np.eye(BtB.shape[-1])) for jj in range(3)] for ii in range(2)] >>> print(np.linalg.norm(errs)) 4.118375471407983e-15 .. py:method:: down_orthogonalize_tt_cores(use_jax: bool = False) Down orthogonalize the TT cores, pushing remainders downward onto the Tucker cores below. ..
rubric:: Examples >>> import numpy as np >>> import t3toolbox.tucker_tensor_train as t3 >>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1)) >>> x_orth = x.down_orthogonalize_tt_cores() >>> print((x - x_orth).norm()) 1.927414448489825e-12 >>> ind = 1 >>> G = x_orth.data[1][ind] >>> print(np.linalg.norm(np.einsum('iaj,ibj->ab',G,G)-np.eye(G.shape[1]))) 1.9491561709929213e-15 Stacked: >>> import numpy as np >>> import t3toolbox.tucker_tensor_train as t3 >>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1), stack_shape=(2,3)) >>> x_orth = x.down_orthogonalize_tt_cores() >>> print((x - x_orth).norm()) [[1.65714673e-12 1.52503536e-12 2.94647811e-12] [1.56839190e-12 2.61963262e-12 8.78269349e-12]] >>> ind = 1 >>> G = x_orth.data[1][ind] >>> GdG = np.einsum('xyaib,xyajb->xyij',G,G) >>> errs = [[np.linalg.norm(GdG[ii,jj] - np.eye(GdG.shape[-1])) for jj in range(3)] for ii in range(2)] >>> print(np.linalg.norm(errs)) 4.0492695830155885e-15 .. py:method:: left_orthogonalize_tt_cores(return_variation_cores: bool = False, use_jax: bool = False) Left orthogonalize the TT cores, possibly returning variation cores as well. .. rubric:: Examples >>> import numpy as np >>> import t3toolbox.tucker_tensor_train as t3 >>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1)) >>> x_orth = x.left_orthogonalize_tt_cores() >>> print((x - x_orth).norm()) 2.9839379127106095e-12 >>> ind = 1 >>> G = x_orth.data[1][ind] >>> print(np.linalg.norm(np.einsum('iaj,iak->jk',G,G)-np.eye(G.shape[2]))) 1.3526950544911367e-16 Stacked: >>> import numpy as np >>> import t3toolbox.tucker_tensor_train as t3 >>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1), stack_shape=(2,3)) >>> x_orth = x.left_orthogonalize_tt_cores() >>> print((x - x_orth).norm()) [[1.46128743e-12 1.25202737e-12 5.60494449e-13] [9.77331695e-13 2.50200307e-12 3.07559340e-12]] >>> ind = 1 >>> G = x_orth.data[1][ind] >>> print(np.linalg.norm(np.einsum('xyiaj,xyiak->xyjk',G,G)-np.eye(G.shape[-1]))) 9.02970295614302e-16 ..
py:method:: right_orthogonalize_tt_cores(return_variation_cores: bool = False, use_jax: bool = False) Right orthogonalize the TT cores, possibly returning variation cores as well. .. rubric:: Examples >>> import numpy as np >>> import t3toolbox.tucker_tensor_train as t3 >>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1)) >>> x_orth = x.right_orthogonalize_tt_cores() >>> print((x - x_orth).norm()) 2.9839379127106095e-12 >>> ind = 1 >>> G = x_orth.data[1][ind] >>> print(np.linalg.norm(np.einsum('iaj,kaj->ik',G,G)-np.eye(G.shape[0]))) 1.3526950544911367e-16 Stacked: >>> import numpy as np >>> import t3toolbox.tucker_tensor_train as t3 >>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1), stack_shape=(2,3)) >>> x_orth = x.right_orthogonalize_tt_cores() >>> print((x - x_orth).norm()) [[1.33512640e-12 1.84518324e-12 6.79235325e-13] [1.34334400e-12 3.38154895e-12 2.93760867e-12]] >>> ind = 1 >>> G = x_orth.data[1][ind] >>> print(np.linalg.norm(np.einsum('xyiaj,xykaj->xyik',G,G)-np.eye(G.shape[-3]))) 1.3585381944466237e-15 .. py:method:: get_entries(index: t3toolbox.backend.common.NDArray, use_jax: bool = False) -> t3toolbox.backend.common.NDArray Compute an entry (or multiple entries) of a Tucker tensor train. .. seealso:: :py:obj:`t3_apply` .. rubric:: Examples >>> import numpy as np >>> import t3toolbox.tucker_tensor_train as t3 >>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1)) >>> index = [[9,8], [4,10], [7,13]] # get entries (9,4,7) and (8,10,13) >>> entries = x.get_entries(index) >>> x_dense = x.to_dense() >>> entries2 = np.array([x_dense[9, 4, 7], x_dense[8, 10, 13]]) >>> print(np.linalg.norm(entries - entries2)) 1.7763568394002505e-15 .. py:method:: t3_apply(vecs: t3toolbox.backend.common.typ.Sequence[t3toolbox.backend.common.NDArray], use_jax: bool = False) -> t3toolbox.backend.common.NDArray Contract a Tucker tensor train with vectors in all indices. .. seealso:: :py:obj:`t3_get_entries` ..
rubric:: Examples >>> import numpy as np >>> import t3toolbox.tucker_tensor_train as t3 >>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1)) >>> vecs = [np.random.randn(3,14), np.random.randn(3,15), np.random.randn(3,16)] >>> result = x.t3_apply(vecs) >>> result2 = np.einsum('ijk,ni,nj,nk->n', x.to_dense(), vecs[0], vecs[1], vecs[2]) >>> print(np.linalg.norm(result - result2)) 3.1271953680324864e-12 .. py:method:: probe(ww: t3toolbox.backend.common.typ.Sequence[t3toolbox.backend.common.NDArray], use_jax: bool = False) -> t3toolbox.backend.common.typ.Sequence[t3toolbox.backend.common.NDArray] Probe a TuckerTensorTrain. .. rubric:: Examples >>> import numpy as np >>> import t3toolbox.tucker_tensor_train as t3 >>> import t3toolbox.backend.probing as probing >>> x = t3.t3_corewise_randn((10,11,12),(5,6,4),(2,3,4,2)) >>> ww = (np.random.randn(10), np.random.randn(11), np.random.randn(12)) >>> zz = x.probe(ww) >>> x_dense = x.to_dense() >>> zz2 = probing.probe_dense(ww, x_dense) >>> print([np.linalg.norm(z - z2) for z, z2 in zip(zz, zz2)]) [1.0259410400851746e-12, 1.0909087370186656e-12, 3.620283224238675e-13]
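The left-orthogonality invariant verified in the orthogonalization examples above can be reproduced for plain TT cores with a few lines of NumPy. The following is an illustrative sketch of a QR orthogonalization sweep, not the t3toolbox implementation; the helper names ``left_orthogonalize`` and ``to_dense`` are hypothetical and stand in for the library's internals.

```python
import numpy as np

# Hypothetical sketch (not the t3toolbox implementation): left-orthogonalize
# a list of TT cores G[i] of shape (r_i, n_i, r_{i+1}) by a QR sweep,
# pushing the triangular remainder R into the next core to the right.
def left_orthogonalize(cores):
    cores = [G.copy() for G in cores]
    for i in range(len(cores) - 1):
        r0, n, r1 = cores[i].shape
        # QR of the (r_i * n_i, r_{i+1}) unfolding of the ith core
        Q, R = np.linalg.qr(cores[i].reshape(r0 * n, r1))
        cores[i] = Q.reshape(r0, n, Q.shape[1])
        # Absorb R into the left index of the next core
        cores[i + 1] = np.einsum('ab,bnc->anc', R, cores[i + 1])
    return cores

# Contract a TT core list into a dense tensor (for verification only)
def to_dense(cores):
    out = cores[0]
    for G in cores[1:]:
        out = np.tensordot(out, G, axes=([-1], [0]))
    return out.squeeze()

rng = np.random.default_rng(0)
shapes = [(1, 4, 3), (3, 5, 2), (2, 6, 1)]  # same TT structure as the examples
cores = [rng.standard_normal(s) for s in shapes]
orth = left_orthogonalize(cores)

# The represented tensor is unchanged by the sweep ...
print(np.allclose(to_dense(cores), to_dense(orth)))  # True
# ... and each swept core satisfies the same left-orthogonality
# check used in the left_orthogonalize_tt_cores doctests.
G = orth[0]
print(np.allclose(np.einsum('iaj,iak->jk', G, G), np.eye(G.shape[2])))  # True
```

The same pattern run right-to-left, using an RQ (or transposed QR) factorization and absorbing the remainder into the core to the left, yields the right-orthogonality invariant checked by the right_orthogonalize_tt_cores examples.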