t3toolbox.tucker_tensor_train.t3_apply#

t3toolbox.tucker_tensor_train.t3_apply(x: TuckerTensorTrain, vecs: t3toolbox.backend.common.typ.Sequence[t3toolbox.backend.common.NDArray], use_jax: bool = False) → t3toolbox.backend.common.NDArray#

Contract a Tucker tensor train with vectors in all indices.

Parameters:
  • x (TuckerTensorTrain) – Tucker tensor train. shape=(N0,…,N(d-1))

  • vecs (typ.Sequence[NDArray]) – Vectors to contract with indices of x. len=d, elm_shape=stack_shape+(Ni,)

  • use_jax (bool) – If True, use jax as the linear algebra backend instead of numpy. Default: False

Returns:

Result of contracting x with the vectors in all indices: a scalar if the elements of vecs are vectors, or an NDArray with shape (num_applies,) if the elements of vecs are matrices. If x has a stack_shape, it is prepended to the result shape (see the supervectorized example below).

Return type:

NDArray or scalar

Raises:

ValueError – Raised if the vectors in vecs are inconsistent with each other or with the Tucker tensor train x.
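
For reference, the contraction that t3_apply performs can be written out densely with plain NumPy. This is the same check used in the examples below, with T standing in for x.to_dense(); it is a sketch of the semantics only, not of the tensor-train implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 5, 6))  # dense stand-in for x.to_dense()

# One set of vectors -> scalar result
u, v, w = rng.standard_normal(4), rng.standard_normal(5), rng.standard_normal(6)
scalar = np.einsum('ijk,i,j,k', T, u, v, w)

# A batch of num_applies=3 sets of vectors -> result of shape (3,)
U, V, W = (rng.standard_normal((3, 4)),
           rng.standard_normal((3, 5)),
           rng.standard_normal((3, 6)))
batch = np.einsum('ijk,ni,nj,nk->n', T, U, V, W)

print(np.ndim(scalar), batch.shape)
```

Each batch entry equals the scalar contraction with the corresponding row of U, V, W, which is what the vectorized examples below verify against.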

See also

TuckerTensorTrain, t3_shape, t3_entry

Examples

Apply to one set of vectors:

>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1))
>>> vecs = [np.random.randn(14), np.random.randn(15), np.random.randn(16)]
>>> result = t3.t3_apply(x, vecs) # <-- contract x with vecs in all indices
>>> result2 = np.einsum('ijk,i,j,k', x.to_dense(), vecs[0], vecs[1], vecs[2])
>>> print(np.abs(result - result2))
5.229594535194337e-12

Apply to multiple sets of vectors (vectorized):

>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1))
>>> vecs = [np.random.randn(3,14), np.random.randn(3,15), np.random.randn(3,16)]
>>> result = t3.t3_apply(x, vecs)
>>> result2 = np.einsum('ijk,ni,nj,nk->n', x.to_dense(), vecs[0], vecs[1], vecs[2])
>>> print(np.linalg.norm(result - result2))
3.1271953680324864e-12

Apply a stack of Tucker tensor trains to multiple sets of vectors (supervectorized):

>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1), stack_shape=(2,3))
>>> vecs = [np.random.randn(4, 14), np.random.randn(4, 15), np.random.randn(4, 16)]
>>> result = t3.t3_apply(x, vecs)
>>> result2 = np.einsum('uvijk,xi,xj,xk->uvx', x.to_dense(), vecs[0], vecs[1], vecs[2])
>>> print(np.linalg.norm(result - result2))
3.1271953680324864e-12

First and last TT-ranks are not ones:

>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (2,3,2,4))
>>> vecs = [np.random.randn(3,14), np.random.randn(3,15), np.random.randn(3,16)]
>>> result = t3.t3_apply(x, vecs)
>>> result2 = np.einsum('ijk,ni,nj,nk->n', x.to_dense(), vecs[0], vecs[1], vecs[2])
>>> print(np.linalg.norm(result - result2))
6.481396196459234e-12

Example using jax automatic differentiation:

>>> import numpy as np
>>> import jax
>>> import t3toolbox.tucker_tensor_train as t3
>>> jax.config.update("jax_enable_x64", True)
>>> A = t3.t3_corewise_randn((10,10,10),(5,5,5),(1,4,4,1)) # random 10x10x10 Tucker tensor train
>>> apply_A_sym = lambda u: t3.t3_apply(A, (u,u,u), use_jax=True) # symmetric apply function
>>> u0 = np.random.randn(10)
>>> Auuu0 = apply_A_sym(u0)
>>> g0 = jax.grad(apply_A_sym)(u0) # gradient using automatic differentiation
>>> du = np.random.randn(10)
>>> dAuuu = np.dot(g0, du) # derivative in direction du
>>> print(dAuuu)
766.5390335764645
>>> s = 1e-7
>>> u1 = u0 + s*du
>>> Auuu1 = apply_A_sym(u1)
>>> dAuuu_diff = (Auuu1 - Auuu0) / s # finite difference approximation
>>> print(dAuuu_diff) # approximately the same as dAuuu
766.5390504030256
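
The jax example above checks a directional derivative by finite differences. The same check can be done with plain NumPy on a dense tensor: for the symmetric apply f(u) = contraction of T with (u, u, u), the product rule gives a gradient that is the sum of three single-slot contractions. This sketch uses a dense stand-in T and does not require t3toolbox:

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((10, 10, 10))  # dense stand-in for A.to_dense()

def f(u):
    # symmetric apply: contract T with u in all three indices
    return np.einsum('ijk,i,j,k', T, u, u, u)

def grad_f(u):
    # product rule: one term per index slot holding u
    return (np.einsum('ajk,j,k', T, u, u)
            + np.einsum('iak,i,k', T, u, u)
            + np.einsum('ija,i,j', T, u, u))

u0 = rng.standard_normal(10)
du = rng.standard_normal(10)
analytic = grad_f(u0) @ du                      # directional derivative
s = 1e-7
numeric = (f(u0 + s * du) - f(u0)) / s          # finite-difference check
print(abs(analytic - numeric))
```

This mirrors the jax.grad check: the analytic directional derivative agrees with the finite-difference quotient up to the O(s) truncation error.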