evomap.mapping._core

Core functions shared within the mapping module, mostly related to the optimization routines implemented and adjusted for mapping.

Module Contents

Functions
- gradient_descent_line_search — Gradient descent with backtracking via halving.
- gradient_descent_with_momentum — Gradient descent with momentum.
- report_optim_progress — Print optimization progress.
Attributes

- evomap.mapping._core.EPSILON = 1e-12

- exception evomap.mapping._core.DivergingGradientError
Bases: Error
Raised when the input value is too small.
- evomap.mapping._core.gradient_descent_line_search(objective, init, n_iter, n_iter_check=50, maxhalves=10, step_size=1, min_grad_norm=1e-07, verbose=0, method_str='', args=None, kwargs=None)
Gradient descent with backtracking via halving.
Optimizes the objective function iteratively. At each step, a halving procedure is used to ensure that step sizes are set such that cost values decrease.
- Parameters
objective (callable) – Function to be optimized. Expected to return the function value and the gradient when called. See examples for exact syntax.
init (ndarray of shape (n_samples, n_dims)) – Starting initialization.
n_iter (int) – Total number of gradient descent iterations.
n_iter_check (int, optional) – Interval in which cost values are reported, by default 50
maxhalves (int, optional) – Maximum number of halving steps in line search, by default 10
step_size (int, optional) – Initial step size, by default 1
min_grad_norm (float, optional) – Error tolerance, by default 1e-7
verbose (int, optional) – Level of verbosity, by default 0
method_str (str, optional) – Method label, by default “”
args (list, optional) – Arguments passed to the objective function, by default None
kwargs (dict, optional) – Keyword arguments passed to the objective function, by default None
- Returns
ndarray of shape (n_samples, n_dims) – Final map coordinates
float – Final cost function value
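The objective callable is expected to return the pair (cost value, gradient). As an illustration of how such a callable plugs into backtracking via halving, here is a simplified stand-in (the helper `line_search_descent` and the toy objective below are a sketch, not the actual evomap implementation):

```python
import numpy as np

def objective(Y, target):
    # Toy objective: squared distance between the current map Y and a target.
    diff = Y - target
    cost = 0.5 * np.sum(diff ** 2)
    grad = diff  # gradient has the same shape as Y
    return cost, grad

def line_search_descent(objective, init, n_iter=100, maxhalves=10,
                        step_size=1.0, min_grad_norm=1e-7, args=None):
    # Minimal sketch of gradient descent with backtracking via halving:
    # a step is accepted only if it decreases the cost; otherwise the
    # step size is halved, up to maxhalves times.
    args = args or []
    Y = init.copy()
    cost, grad = objective(Y, *args)
    for _ in range(n_iter):
        step = step_size
        for _ in range(maxhalves):
            Y_new = Y - step * grad
            new_cost, new_grad = objective(Y_new, *args)
            if new_cost < cost:   # accept: cost decreased
                break
            step /= 2             # reject: halve the step and retry
        else:
            break                 # no decreasing step found; stop
        Y, cost, grad = Y_new, new_cost, new_grad
        if np.linalg.norm(grad) < min_grad_norm:
            break                 # converged within tolerance
    return Y, cost

target = np.zeros((5, 2))
init = np.random.default_rng(0).normal(size=(5, 2))
Y_final, final_cost = line_search_descent(objective, init, args=[target])
```

For this quadratic toy problem a full step lands exactly on the target, so the loop terminates after one accepted step.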
- evomap.mapping._core.gradient_descent_with_momentum(objective, init, n_iter, start_iter=0, n_iter_check=50, momentum=0.8, eta=50, min_grad_norm=1e-07, verbose=0, method_str='', args=None, kwargs=None)
Gradient descent with momentum.
Optimize the objective function using momentum-based gradient descent, as used, for instance, in t-SNE.
- Parameters
objective (callable) – Function to be optimized. Expected to return the function value and the gradient when called. See examples for exact syntax.
init (ndarray of shape (n_samples, n_dims)) – Starting initialization.
n_iter (int) – Total number of gradient descent iterations.
start_iter (int, optional) – Starting iteration, used if optimization (re-)starts at a later stage, by default 0
n_iter_check (int, optional) – Interval in which cost values are reported, by default 50
momentum (float, optional) – Momentum factor, by default .8
eta (int, optional) – Learning rate, by default 50
min_grad_norm (float, optional) – Error tolerance, by default 1e-7
verbose (int, optional) – Level of verbosity, by default 0
method_str (str, optional) – Method label, by default “”
args (list, optional) – Arguments passed to the objective function, by default None
kwargs (dict, optional) – Keyword arguments passed to the objective function, by default None
- Returns
ndarray of shape (n_samples, n_dims) – Final map coordinates
float – Final cost function value
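A momentum update accumulates past gradient steps, which damps oscillations and speeds up progress along consistent descent directions, as in t-SNE's optimizer. The following is a simplified sketch of the update rule (not the evomap implementation; the default eta=50 above is tuned for t-SNE-scale gradients, so this toy example uses a much smaller learning rate):

```python
import numpy as np

def objective(Y):
    # Toy quadratic objective: pull all points toward the origin.
    cost = 0.5 * np.sum(Y ** 2)
    return cost, Y  # gradient of 0.5*||Y||^2 is Y itself

def momentum_descent(objective, init, n_iter=200, momentum=0.8,
                     eta=0.1, min_grad_norm=1e-7):
    # Minimal sketch of momentum-based gradient descent: the update
    # blends the previous update (scaled by momentum) with the
    # current gradient step (scaled by the learning rate eta).
    Y = init.copy()
    update = np.zeros_like(Y)
    cost, grad = objective(Y)
    for _ in range(n_iter):
        cost, grad = objective(Y)
        if np.linalg.norm(grad) < min_grad_norm:
            break                          # converged within tolerance
        update = momentum * update - eta * grad
        Y = Y + update
    return Y, cost

init = np.random.default_rng(1).normal(size=(5, 2))
Y_final, final_cost = momentum_descent(objective, init)
```

On this toy problem the iterates spiral into the origin and the gradient norm drops below the tolerance well before the iteration budget is exhausted.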
- evomap.mapping._core.report_optim_progress(iter, method_str, cost, grad_norm=None)
Print optimization progress.
- Parameters
iter (int) – Current iteration.
method_str (str) – Method label.
cost (float) – Current cost function value.
grad_norm (float, optional) – Gradient norm, by default None
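A progress reporter of this kind typically prints one line per reporting interval, including the gradient norm only when it is supplied. The exact message format below is illustrative, not evomap's actual output:

```python
def report_optim_progress(iter, method_str, cost, grad_norm=None):
    # Sketch: print one progress line per reporting interval.
    msg = "[{}] Iteration {}: cost {:.4f}".format(method_str, iter, cost)
    if grad_norm is not None:
        msg += ", gradient norm {:.2e}".format(grad_norm)
    print(msg)

report_optim_progress(50, "TSNE", 1.2345, grad_norm=3e-4)
```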