stpredictions.models.DIOKR package
Submodules
stpredictions.models.DIOKR.IOKR module
- class stpredictions.models.DIOKR.IOKR.IOKR(path_to_candidates=None)
Bases:
object
Main class implementing IOKR + OEL
- fit(X_s, Y_s, L=None, input_gamma=None, input_kernel=None, linear=False, output_kernel=None, Omega=None, oel_method=None, K_X_s=None, K_Y_s=None, verbose=0)
Fit OEE (Output Embedding Estimator = KRR) and OEL with supervised/unsupervised data
- get_vv_norm()
Returns the vector-valued RKHS norm of the fitted estimator
- sloss(K_x, K_y)
Compute the squared loss (training MSE)
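The IOKR class above combines kernel ridge regression in an output-embedding space with decoding over a candidate set. A minimal NumPy sketch of that idea follows; the function names (`rbf_gram`, `iokr_fit`, `iokr_predict`) and the decoding rule shown are illustrative assumptions, not the package's actual implementation, which accepts precomputed Gram matrices and uses torch tensors.

```python
import numpy as np

def rbf_gram(X, Y=None, gamma=1.0):
    # Gaussian kernel from pairwise squared distances
    Y = X if Y is None else Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def iokr_fit(K_x, lbda):
    # Omega = (K_x + n*lbda*I)^{-1}: the KRR "Output Embedding Estimator"
    n = K_x.shape[0]
    return np.linalg.inv(K_x + n * lbda * np.eye(n))

def iokr_predict(Omega, k_x_test, K_cand_train, K_cand_cand, Y_candidates):
    # Decode: pick the candidate maximizing <psi(y), h(x)> - 0.5*||psi(y)||^2,
    # which needs only output-kernel evaluations (the kernel trick).
    A = Omega @ k_x_test                              # alpha(x), shape (n_train, n_test)
    scores = K_cand_train @ A - 0.5 * np.diag(K_cand_cand)[:, None]
    return Y_candidates[np.argmax(scores, axis=0)]
```

With a linear output kernel on one-hot labels, this reduces to nearest-prototype decoding over the candidate set.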
stpredictions.models.DIOKR.cost module
- stpredictions.models.DIOKR.cost.sloss(Omega, K_x_tr_ba, K_y_tr_ba, K_y_ba_ba, K_y)
- stpredictions.models.DIOKR.cost.sloss_batch(Omega_block_diag, K_x_tr_te, K_y_tr_te, K_y_te_te, K_y, n_b)
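The signatures above suggest the squared loss is evaluated purely from Gram matrices. A sketch of that identity, assuming h(x_i) = Σ_j A[j, i]·ψ(y_j) with A = Omega @ K_x (the exact argument layout of the package's `sloss` may differ):

```python
import numpy as np

def sloss(Omega, K_x, K_y):
    # Training MSE computed entirely in kernel form:
    # ||psi(y_i) - h(x_i)||^2 = K_y[i,i] - 2*(A.T K_y)[i,i] + (A.T K_y A)[i,i]
    A = Omega @ K_x
    n = K_x.shape[0]
    return np.trace(K_y - 2.0 * A.T @ K_y + A.T @ K_y @ A) / n
```

The kernel-form value agrees with the explicit feature-space MSE whenever the output kernel admits a finite feature map.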
stpredictions.models.DIOKR.estimator module
- class stpredictions.models.DIOKR.estimator.DIOKREstimator(kernel_input, kernel_output, lbda, linear=False, iokr=None, Omega=None, cost=None, eps=None)
Bases:
object
DIOKR class with a fitting procedure implemented in PyTorch
- fit_kernel_input(x_train, y_train, x_test, y_test, n_epochs=50, solver='sgd', batch_size_train=64, batch_size_test=None, verbose=True)
Fits a learnable neural-network input kernel by calling train_kernel_input at each epoch.
- objective(x_batch, y_batch)
Computes the objective function to be minimized: the sum of the cost and the regularization term.
- predict(x_test, Y_candidates=None)
Model Prediction
- train_kernel_input(x_batch, y_batch, solver: str, t0)
Performs one stochastic gradient descent step when fitting a learnable neural-network input kernel.
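The estimator's training loop alternates calls to an objective (cost plus regularization) and a gradient step per minibatch. A framework-free NumPy sketch of that pattern on a ridge-regression objective; the names and the specific objective are illustrative assumptions, not the package's torch implementation.

```python
import numpy as np

def objective(w, X_batch, y_batch, lbda):
    # Cost (mean squared error on the batch) + ridge regularization
    resid = X_batch @ w - y_batch
    return (resid ** 2).mean() + lbda * (w ** 2).sum()

def train_step(w, X_batch, y_batch, lbda, lr):
    # One SGD step: gradient of the batch objective, then a descent update
    resid = X_batch @ w - y_batch
    grad = 2.0 * X_batch.T @ resid / len(y_batch) + 2.0 * lbda * w
    return w - lr * grad
```

In the real class this step would be applied to the parameters of the neural-network input kernel via a torch optimizer rather than by hand.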
stpredictions.models.DIOKR.kernel module
- class stpredictions.models.DIOKR.kernel.Gaussian(gamma)
Bases:
stpredictions.models.DIOKR.kernel.Kernel
- compute_gram(X, Y=None)
- class stpredictions.models.DIOKR.kernel.GaussianTani(gamma)
Bases:
stpredictions.models.DIOKR.kernel.Kernel
- compute_gram(X, Y=None)
- class stpredictions.models.DIOKR.kernel.Kernel
Bases:
object
- class stpredictions.models.DIOKR.kernel.LearnableGaussian(gamma, model, optim_params)
Bases:
stpredictions.models.DIOKR.kernel.Kernel
- append_test_loss(test_loss)
- append_time(time)
- append_train_loss(train_loss)
- clear_memory()
- clone_kernel()
- compute_gram(X, Y=None)
- class stpredictions.models.DIOKR.kernel.LearnableLinear(model, optim_params)
Bases:
stpredictions.models.DIOKR.kernel.Kernel
- append_test_loss(test_loss)
- append_time(time)
- append_train_loss(train_loss)
- clear_memory()
- clone_kernel()
- compute_gram(X, Y=None)
- model_forward(X)
- stpredictions.models.DIOKR.kernel.gaussian_tani_kernel(X, Y=None, gamma=None)
Compute the Gaussian-Tanimoto Gram matrix between X and Y (or X and itself)
Parameters
- X: torch.Tensor of shape (n_samples_1, n_features)
First input on which the Gram matrix is computed
- Y: torch.Tensor of shape (n_samples_2, n_features), default None
Second input on which the Gram matrix is computed; X is reused if None
- gamma: float
Gamma parameter of the kernel (see the sklearn implementation)
Returns
- K: torch.Tensor of shape (n_samples_1, n_samples_2)
Gram matrix on X/Y
- stpredictions.models.DIOKR.kernel.get_anchors_gaussian_rff(dim_input, dim_rff, gamma)
- stpredictions.models.DIOKR.kernel.linear_kernel(X, Y=None)
Compute the linear Gram matrix between X and Y (or X and itself)
Parameters
- X: torch.Tensor of shape (n_samples_1, n_features)
First input on which the Gram matrix is computed
- Y: torch.Tensor of shape (n_samples_2, n_features), default None
Second input on which the Gram matrix is computed; X is reused if None
Returns
- K: torch.Tensor of shape (n_samples_1, n_samples_2)
Gram matrix on X/Y
- stpredictions.models.DIOKR.kernel.rbf_kernel(X, Y=None, gamma=None)
Compute the RBF Gram matrix between X and Y (or X and itself)
Parameters
- X: torch.Tensor of shape (n_samples_1, n_features)
First input on which the Gram matrix is computed
- Y: torch.Tensor of shape (n_samples_2, n_features), default None
Second input on which the Gram matrix is computed; X is reused if None
- gamma: float
Gamma parameter of the kernel (see the sklearn implementation)
Returns
- K: torch.Tensor of shape (n_samples_1, n_samples_2)
Gram matrix on X/Y
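For reference, NumPy sketches of the two non-trivial kernels in this module. The RBF formula follows sklearn's convention, which the docstrings point to; the Gaussian-Tanimoto construction shown (a Gaussian envelope over the Tanimoto distance 1 − T) is a common definition but an assumption about what `gaussian_tani_kernel` computes, and the package's versions operate on torch tensors rather than NumPy arrays.

```python
import numpy as np

def rbf_kernel(X, Y=None, gamma=None):
    # K[i, j] = exp(-gamma * ||X[i] - Y[j]||^2), sklearn's convention;
    # gamma defaults to 1/n_features as in sklearn.metrics.pairwise.rbf_kernel
    Y = X if Y is None else Y
    gamma = 1.0 / X.shape[1] if gamma is None else gamma
    sq = (X ** 2).sum(1)[:, None] + (Y ** 2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * np.maximum(sq, 0.0))

def tanimoto_gram(X, Y=None):
    # Tanimoto similarity: T(x, y) = <x, y> / (||x||^2 + ||y||^2 - <x, y>)
    Y = X if Y is None else Y
    inner = X @ Y.T
    sq = (X ** 2).sum(1)[:, None] + (Y ** 2).sum(1)[None, :]
    return inner / (sq - inner)

def gaussian_tanimoto_gram(X, Y=None, gamma=1.0):
    # Hypothetical Gaussian envelope over the Tanimoto distance 1 - T
    return np.exp(-gamma * (1.0 - tanimoto_gram(X, Y)))
```

On binary fingerprint vectors, the Tanimoto similarity coincides with the Jaccard index of the active bits.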
stpredictions.models.DIOKR.net module
- class stpredictions.models.DIOKR.net.Net1(dim_inputs, dim_outputs)
Bases:
torch.nn.modules.module.Module
- forward(x)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- get_layers()
- training: bool
- class stpredictions.models.DIOKR.net.Net2(dim_inputs, dim_outputs)
Bases:
torch.nn.modules.module.Module
- forward(x)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- get_layers()
- training: bool
- class stpredictions.models.DIOKR.net.Net3(dim_inputs, dim_outputs)
Bases:
torch.nn.modules.module.Module
- forward(x)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- get_layers()
- training: bool
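Net1, Net2, and Net3 are presumably small feed-forward torch modules (likely differing in depth or width) used as the learnable input-kernel feature maps. A framework-free NumPy sketch of what such a module's `forward` and `get_layers` might look like; the hidden width, initialization, and activation are illustrative assumptions.

```python
import numpy as np

class Net1:
    """Minimal two-layer perceptron mirroring a torch.nn.Module's forward()."""

    def __init__(self, dim_inputs, dim_outputs, dim_hidden=64, seed=0):
        rng = np.random.default_rng(seed)
        # Scaled Gaussian initialization, roughly like torch's defaults
        self.W1 = rng.standard_normal((dim_inputs, dim_hidden)) * dim_inputs ** -0.5
        self.b1 = np.zeros(dim_hidden)
        self.W2 = rng.standard_normal((dim_hidden, dim_outputs)) * dim_hidden ** -0.5
        self.b2 = np.zeros(dim_outputs)

    def forward(self, x):
        # ReLU hidden layer followed by a linear output layer
        h = np.maximum(x @ self.W1 + self.b1, 0.0)
        return h @ self.W2 + self.b2

    def get_layers(self):
        # Expose the (weight, bias) pairs, as get_layers presumably does
        return [(self.W1, self.b1), (self.W2, self.b2)]
```

In the actual package, calling the module instance (rather than `forward` directly) is what runs torch's registered hooks, per the note above.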
stpredictions.models.DIOKR.utils module
- stpredictions.models.DIOKR.utils.project_root() → pathlib.Path
Returns project root folder.
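A typical implementation of such a helper resolves the module's own file location and walks up to the repository root. A sketch under that assumption (the number of `parent` hops in the real package is unknown, so this version stops at the containing directory):

```python
from pathlib import Path

def project_root() -> Path:
    # Resolve the directory containing this file; a real implementation
    # would walk up a fixed number of parents from the module's location.
    try:
        return Path(__file__).resolve().parent
    except NameError:  # e.g. an interactive session: fall back to the CWD
        return Path.cwd()
```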
Module contents
- class stpredictions.models.DIOKR.DIOKREstimator(kernel_input, kernel_output, lbda, linear=False, iokr=None, Omega=None, cost=None, eps=None)
Bases:
object
DIOKR class with a fitting procedure implemented in PyTorch
- fit_kernel_input(x_train, y_train, x_test, y_test, n_epochs=50, solver='sgd', batch_size_train=64, batch_size_test=None, verbose=True)
Fits a learnable neural-network input kernel by calling train_kernel_input at each epoch.
- objective(x_batch, y_batch)
Computes the objective function to be minimized: the sum of the cost and the regularization term.
- predict(x_test, Y_candidates=None)
Model Prediction
- train_kernel_input(x_batch, y_batch, solver: str, t0)
Performs one stochastic gradient descent step when fitting a learnable neural-network input kernel.
- class stpredictions.models.DIOKR.IOKR(path_to_candidates=None)
Bases:
object
Main class implementing IOKR + OEL
- fit(X_s, Y_s, L=None, input_gamma=None, input_kernel=None, linear=False, output_kernel=None, Omega=None, oel_method=None, K_X_s=None, K_Y_s=None, verbose=0)
Fit OEE (Output Embedding Estimator = KRR) and OEL with supervised/unsupervised data
- get_vv_norm()
Returns the vector-valued RKHS norm of the fitted estimator
- sloss(K_x, K_y)
Compute the squared loss (training MSE)
- class stpredictions.models.DIOKR.Net1(dim_inputs, dim_outputs)
Bases:
torch.nn.modules.module.Module
- forward(x)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- get_layers()
- training: bool
- class stpredictions.models.DIOKR.Net2(dim_inputs, dim_outputs)
Bases:
torch.nn.modules.module.Module
- forward(x)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- get_layers()
- training: bool
- class stpredictions.models.DIOKR.Net3(dim_inputs, dim_outputs)
Bases:
torch.nn.modules.module.Module
- forward(x)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- get_layers()
- training: bool