t3toolbox.tucker_tensor_train.t3_inner_product
- t3toolbox.tucker_tensor_train.t3_inner_product(x: Union[TuckerTensorTrain, NDArray], y: Union[TuckerTensorTrain, NDArray], use_orthogonalization: bool = True, use_jax: bool = False)
Compute Hilbert-Schmidt inner product of two Tucker tensor trains.
The Hilbert-Schmidt inner product is defined with respect to the dense N0 x … x N(d-1) tensors represented by the Tucker tensor trains, even though these dense tensors are never formed during the computation.
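The definition above can be illustrated on small dense tensors with plain NumPy (this is the quantity t3_inner_product computes without ever materializing the dense tensors):

```python
import numpy as np

# The Hilbert-Schmidt inner product of two tensors of the same shape is
# the sum of their elementwise products, i.e. the ordinary dot product
# of the flattened tensors.
x = np.random.randn(4, 5, 6)
y = np.random.randn(4, 5, 6)

hs = np.sum(x * y)

# Equivalent formulation via flattening:
assert np.isclose(hs, x.ravel() @ y.ravel())
```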
For the corewise dot product, see t3toolbox.corewise.corewise_dot().
- Parameters:
x (TuckerTensorTrain) – First Tucker tensor train. shape=(N0,…,N(d-1))
y (TuckerTensorTrain) – Second Tucker tensor train. shape=(N0,…,N(d-1))
use_orthogonalization (bool) – Orthogonalize the cores during the contraction. Default: True
use_jax (bool) – Use the JAX linear algebra backend instead of NumPy. Default: False
- Returns:
Hilbert-Schmidt inner product of Tucker tensor trains, (x, y)_HS.
- Return type:
scalar
- Raises:
ValueError –
Raised if either TuckerTensorTrain is internally inconsistent.
Raised if the TuckerTensorTrains have different shapes.
See also
TuckerTensorTrain, t3_shape, t3_add, t3_scale, corewise_dot()
Notes
The algorithm contracts the TuckerTensorTrains in a zippering fashion from left to right.
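The left-to-right zipper contraction can be sketched for a plain (non-Tucker) tensor train; this is an illustrative simplification, not t3toolbox's actual Tucker-TT implementation, and `tt_inner_product` is a hypothetical helper name:

```python
import numpy as np

def tt_inner_product(x_cores, y_cores):
    """Sketch: inner product of two plain tensor trains by zippering
    left to right. Each core has shape (r_left, n, r_right)."""
    # M accumulates the contraction over the modes processed so far;
    # it is indexed by the open right ranks of x and y.
    M = np.ones((1, 1))
    for Gx, Gy in zip(x_cores, y_cores):
        # Absorb the next pair of cores, summing over the shared
        # physical index n and the previous ranks.
        M = np.einsum('ab,anc,bnd->cd', M, Gx, Gy)
    return M[0, 0]
```

Because the dense tensors are never formed, the cost per step stays polynomial in the ranks and mode sizes.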
Examples
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (1,3,2,1))
>>> y = t3.t3_corewise_randn((14,15,16), (3,7,2), (1,5,6,1))
>>> x_dot_y = t3.t3_inner_product(x, y)
>>> x_dot_y2 = np.sum(x.to_dense() * y.to_dense())
>>> print(np.linalg.norm(x_dot_y - x_dot_y2))
8.731149137020111e-11
Example where leading and trailing TT-ranks are not 1:
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (2,3,2,2))
>>> y = t3.t3_corewise_randn((14,15,16), (3,7,2), (3,5,6,3))
>>> x_dot_y = t3.t3_inner_product(x, y)
>>> x_dot_y2 = np.sum(x.to_dense() * y.to_dense())
>>> print(np.linalg.norm(x_dot_y - x_dot_y2))
1.3096723705530167e-10
(T3, T3) using stacking:
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> x = t3.t3_corewise_randn((14,15,16), (4,5,6), (2,3,2,2), stack_shape=(2,3))
>>> y = t3.t3_corewise_randn((14,15,16), (3,7,2), (3,5,6,3), stack_shape=(2,3))
>>> x_dot_y = t3.t3_inner_product(x, y)
>>> x_dot_y2 = np.sum(x.to_dense() * y.to_dense(), axis=(2,3,4))
>>> print(np.linalg.norm(x_dot_y - x_dot_y2))
2.7761383858792984e-09
Inner product of T3 with dense:
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> x = np.random.randn(14,15,16)
>>> y = t3.t3_corewise_randn((14,15,16), (3,7,2), (3,5,6,3))
>>> x_dot_y = t3.t3_inner_product(x, y)
>>> x_dot_y2 = np.sum(x * y.to_dense())
>>> print(np.linalg.norm(x_dot_y - x_dot_y2))
0.0
Inner product of T3 with dense including stacking:
>>> import numpy as np
>>> import t3toolbox.tucker_tensor_train as t3
>>> x = np.random.randn(2,3, 14,15,16)
>>> y = t3.t3_corewise_randn((14,15,16), (3,7,2), (3,5,6,3), stack_shape=(2,3))
>>> x_dot_y = t3.t3_inner_product(x, y)
>>> x_dot_y2 = np.einsum('ijxyz,ijxyz->ij', x, y.to_dense())
>>> print(np.linalg.norm(x_dot_y - x_dot_y2))
1.2014283869232628e-11