Koopman pipeline
Since the Koopman regression problem operates on timeseries data, it has
additional requirements that preclude the use of scikit-learn
sklearn.pipeline.Pipeline objects:

- The original state must be kept at the beginning of the lifted state.
- The input-dependent lifted states must be kept at the end of the lifted state.
- The number of input-independent and input-dependent lifting functions must be tracked throughout the pipeline.
- Samples must not be reordered or subsampled, since this would corrupt delay-based lifting functions.
- Concatenated data from different training episodes must not be mixed; even though the states are adjacent in the array, they may not be sequential in time.
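The last two requirements are why lifting must be episode-aware. As a minimal sketch (using a hypothetical `delay_lift` helper, not pykoop's actual API), a delay-based lifting function must build its delay coordinates within each episode, never across episode boundaries:

```python
import numpy as np

def delay_lift(X, n_delays=1):
    """Episode-aware delay lifting (hypothetical helper, not the pykoop API).

    The first column of ``X`` is the episode feature.
    """
    lifted = []
    for ep in np.unique(X[:, 0]):
        X_ep = X[X[:, 0] == ep, 1:]  # samples from this episode only
        # Stack each sample with its previous ``n_delays`` samples. The
        # first ``n_delays`` timesteps of each episode are dropped rather
        # than borrowed from the episode that precedes it in the array.
        blocks = [X_ep[i:len(X_ep) - n_delays + i] for i in range(n_delays + 1)]
        X_lift = np.hstack(blocks[::-1])  # current sample first, then delays
        ep_col = np.full((len(X_lift), 1), ep)
        lifted.append(np.hstack([ep_col, X_lift]))
    return np.vstack(lifted)

# Two episodes of three timesteps each; reordering or interleaving these
# rows would silently pair samples from different episodes.
X = np.array([[0., 1.], [0., 2.], [0., 3.], [1., 10.], [1., 20.], [1., 30.]])
X_lifted = delay_lift(X, n_delays=1)  # each episode loses one timestep
```

Note how each episode shrinks by `n_delays` timesteps independently; a naive delay embedding over the concatenated array would instead create samples mixing the end of one episode with the start of the next.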
To meet these requirements, each lifting function, described by the
pykoop.KoopmanLiftingFn interface, supports an episode feature that indicates
which episode each sample belongs to. Furthermore, each lifting function stores
the number of input-dependent and input-independent features at its input and
output.
The data matrices provided to fit() (as well as transform() and
inverse_transform()) must obey the following format:

1. If episode_feature is true, the first feature must indicate which episode each timestep belongs to. The episode feature must contain nonnegative integers only.
2. The last n_inputs features must be exogenous inputs.
3. The remaining features are considered to be states (input-independent).
Consider an example data matrix where the fit() parameters are
episode_feature=True and n_inputs=1:

Episode | State 0 | State 1 | Input 0
---|---|---|---
0.0 | 0.1 | -0.1 | 0.2
0.0 | 0.2 | -0.2 | 0.3
0.0 | 0.3 | -0.3 | 0.4
1.0 | -0.1 | 0.1 | 0.3
1.0 | -0.2 | 0.2 | 0.4
1.0 | -0.3 | 0.3 | 0.5
2.0 | 0.3 | -0.1 | 0.3
2.0 | 0.2 | -0.2 | 0.4

In the above matrix, there are three distinct episodes with different numbers of timesteps. The last feature is an input, so the remaining two features must be states.
In the above matrix, there are three distinct episodes with different numbers of timesteps. The last feature is an input, so the remaining two features must be states.
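As a concrete sketch, the matrix above can be written as a plain NumPy array and sliced according to the rules above (the variable names here are illustrative only):

```python
import numpy as np

# Example data matrix with episode_feature=True and n_inputs=1:
# columns are [episode, state 0, state 1, input 0].
X = np.array([
    [0.0,  0.1, -0.1, 0.2],
    [0.0,  0.2, -0.2, 0.3],
    [0.0,  0.3, -0.3, 0.4],
    [1.0, -0.1,  0.1, 0.3],
    [1.0, -0.2,  0.2, 0.4],
    [1.0, -0.3,  0.3, 0.5],
    [2.0,  0.3, -0.1, 0.3],
    [2.0,  0.2, -0.2, 0.4],
])

n_inputs = 1
episodes = X[:, 0]          # first feature: episode each timestep belongs to
inputs = X[:, -n_inputs:]   # last n_inputs features: exogenous inputs
states = X[:, 1:-n_inputs]  # everything in between: states
```

With pykoop installed, a matrix in this layout is what would be passed to fit(), along with n_inputs=1 and episode_feature=True.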
If n_inputs=0, the same matrix is interpreted as:

Episode | State 0 | State 1 | State 2
---|---|---|---
0.0 | 0.1 | -0.1 | 0.2
0.0 | 0.2 | -0.2 | 0.3
0.0 | 0.3 | -0.3 | 0.4
1.0 | -0.1 | 0.1 | 0.3
1.0 | -0.2 | 0.2 | 0.4
1.0 | -0.3 | 0.3 | 0.5
2.0 | 0.3 | -0.1 | 0.3
2.0 | 0.2 | -0.2 | 0.4
If episode_feature=False and the first feature is omitted, the
matrix is interpreted as:

State 0 | State 1 | State 2
---|---|---
0.1 | -0.1 | 0.2
0.2 | -0.2 | 0.3
0.3 | -0.3 | 0.4
-0.1 | 0.1 | 0.3
-0.2 | 0.2 | 0.4
-0.3 | 0.3 | 0.5
0.3 | -0.1 | 0.3
0.2 | -0.2 | 0.4
In the above case, each timestep is assumed to belong to the same episode.
Important

The episode feature must contain nonnegative integers only!
Koopman regressors, which implement the interface defined in
pykoop.KoopmanRegressor, are distinct from scikit-learn regressors
in that they support the episode feature and state-tracking attributes used by
the lifting function objects. Koopman regressors also support being fit with a
single data matrix, which they will split and time-shift according to the
episode feature.
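A minimal sketch of that split-and-shift step (using a hypothetical `shift_by_episode` helper, not pykoop's actual API): within each episode, every sample is paired with its successor, and no pair spans an episode boundary:

```python
import numpy as np

def shift_by_episode(X):
    """Pair each sample with its successor, episode by episode.

    Hypothetical helper for illustration; the first column of ``X`` is the
    episode feature and the remaining columns are states.
    """
    X_unshifted, X_shifted = [], []
    for ep in np.unique(X[:, 0]):
        X_ep = X[X[:, 0] == ep, 1:]
        X_unshifted.append(X_ep[:-1])  # x_k
        X_shifted.append(X_ep[1:])     # x_{k+1}
    return np.vstack(X_unshifted), np.vstack(X_shifted)

# Two episodes of three timesteps each yield only four (x_k, x_{k+1})
# pairs; the last sample of episode 0 is never paired with the first
# sample of episode 1.
X = np.array([[0., 1.], [0., 2.], [0., 3.], [1., 10.], [1., 20.], [1., 30.]])
X_k, X_kp1 = shift_by_episode(X)
```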
If the input is a pandas.DataFrame, then pykoop will store the
column names in feature_names_in_ upon fitting. This applies to both
pykoop.KoopmanLiftingFn and pykoop.KoopmanRegressor. If these
feature names are specified, calling get_feature_names_in() will return
them. If they do not exist, this function will return auto-generated ones. For
instances of pykoop.KoopmanLiftingFn, calling
get_feature_names_out() will generate the feature names of the lifted
states. Note that pandas.DataFrame instances are converted to
numpy.ndarray instances as soon as they are processed by pykoop.
You can recreate them using something like pandas.DataFrame(X_lifted,
columns=lf.get_feature_names_out()).
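That round trip might look like the following sketch, which uses plain pandas; the column names and lifted feature names here are made up for illustration (in practice, get_feature_names_out() would supply the lifted names):

```python
import numpy as np
import pandas as pd

# Fit input as a DataFrame; pykoop would record these column names in
# feature_names_in_. The names themselves are illustrative only.
X = pd.DataFrame(
    np.zeros((3, 3)),
    columns=['episode', 'x0', 'u0'],
)
names_in = list(X.columns)

# After lifting, pykoop returns a plain ndarray. Re-wrap it with the
# lifted feature names (hard-coded here in place of a real
# lf.get_feature_names_out() call).
X_lifted = np.zeros((3, 4))
lifted_names = ['episode', 'x0', 'x0^2', 'u0']
df_lifted = pd.DataFrame(X_lifted, columns=lifted_names)
```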
The following class and function implementations are located in
pykoop.koopman_pipeline
, but have been imported into the pykoop
namespace for convenience.
- Meta-estimator for chaining lifting functions with an estimator.
- Meta-estimator for lifting states and inputs separately.
- Combine episodes into a data matrix.
- Extract initial conditions from each episode.
- Extract input from a data matrix.
- Score a predicted data matrix compared to an expected data matrix.
- Shift episodes and truncate shifted inputs.
- Split a data matrix into episodes.
- Strip initial conditions from each episode.
Lifting functions
All of the lifting functions included in this module adhere to the interface
defined in pykoop.KoopmanLiftingFn.
The following class and function implementations are located in
pykoop.lifting_functions
, but have been imported into the pykoop
namespace for convenience.
- Lifting function to generate bilinear products of the state and input.
- Lifting function that appends a constant term to the input features.
- Lifting function to generate delay coordinates for state and input.
- Lifting function using random kernel approximation.
- Lifting function to generate all monomials of the input features.
- Lifting function using radial basis function (RBF) features.
- Lifting function that wraps a …
Regressors
All of the regressors included in this module adhere to the interface
defined in pykoop.KoopmanRegressor.
The following class and function implementations are located in
pykoop.regressors
, but have been imported into the pykoop
namespace for
convenience.
The pykoop.DataRegressor regressor is a dummy regressor that can be used to
force the Koopman matrix to take on a specific value (perhaps you know what it
should be, or you obtained it from another library).
- Dynamic Mode Decomposition.
- Dynamic Mode Decomposition with control.
- Extended Dynamic Mode Decomposition with Tikhonov regularization.
- Extended Dynamic Mode Decomposition with …
- Create a …
Kernel approximation methods
The following classes are used to generate random feature maps from kernels for kernel approximation lifting functions (i.e., random Fourier feature lifting functions).
- Kernel approximation with random binning.
- Kernel approximation with random Fourier features.
Radial basis function centers
The following classes are used to generate centers for radial basis function (RBF) lifting functions.
- Centers generated from a clustering algorithm.
- Centers taken from raw data upon instantiation or fit.
- Centers sampled from a Gaussian distribution.
- Centers generated from sampling a Gaussian mixture model.
- Centers generated on a uniform grid.
- Centers generated with Quasi-Monte Carlo sampling.
- Centers sampled from a uniform distribution.
Truncated SVD
The following class and function implementations are located in
pykoop.tsvd
, but have been imported into the pykoop
namespace for
convenience.
- Truncated singular value decomposition.
Utilities
The following class and function implementations are located in
pykoop.util
, but have been imported into the pykoop
namespace for
convenience.
- Preprocessor used to replace angles with their cosines and sines.
- Get example Duffing oscillator data.
- Get example mass-spring-damper data.
- Get example pendulum data.
- Get example Van der Pol oscillator data.
- Generate a smooth random input.
- Generate a random initial state.
LMI regressors
Experimental LMI-based Koopman regressors from [DF21] and [DF22].
Warning
Importing this module has side effects! When imported, the module creates a
temporary directory with the prefix pykoop_, which is used to memoize
long computations that may be repeated frequently. It also catches
SIGINT so that long regressions can be stopped politely.
The following class and function implementations are located in
pykoop.lmi_regressors
, which must be imported separately.
- LMI-based EDMD with regularization.
- LMI-based EDMD with dissipativity constraint.
- LMI-based EDMD with H-infinity norm regularization.
- LMI-based EDMD with spectral radius constraint.
- LMI-based DMDc with regularization.
- LMI-based DMDc with H-infinity norm regularization.
- LMI-based DMDc with spectral radius constraint.
- Meta-estimator where H-infinity weight is specified in ZPK format.
Dynamic models
The following class and function implementations are located in
pykoop.dynamic_models
, which must be imported separately.
- Van der Pol oscillator.
- Duffing oscillator model.
- Mass-spring-damper model.
- Point-mass pendulum with optional damping.
Configuration
The following functions allow the user to interact with pykoop
’s global
configuration.
- Retrieve current values for configuration set by …
- Set global configuration.
- Context manager for global configuration.
Extending pykoop
The abstract classes from all of pykoop
’s modules have been grouped here.
If you want to write your own lifting functions or regressors, this is the place
to look!
The following abstract class implementations are spread across
pykoop.koopman_pipeline
, pykoop.dynamic_models
,
pykoop.centers
, and pykoop.lmi_regressors
. The most commonly used
ones have been imported into the pykoop
namespace.
- Base class for all center generation estimators.
- Base class for Koopman lifting functions that are episode-dependent.
- Base class for Koopman lifting functions that are episode-independent.
- Base class for all kernel approximations.
- Base class for Koopman lifting functions.
- Base class for Koopman regressors.
- Continuous-time dynamic model.
- Discrete-time dynamic model.
- Base class for LMI regressors.