flexmeasures.data.models.forecasting

Modules

flexmeasures.data.models.forecasting.custom_models

flexmeasures.data.models.forecasting.exceptions

flexmeasures.data.models.forecasting.model_spec_factory

flexmeasures.data.models.forecasting.model_specs

flexmeasures.data.models.forecasting.pipelines

flexmeasures.data.models.forecasting.utils

Functions

flexmeasures.data.models.forecasting.lookup_model_specs_configurator(model_search_term: str = 'linear-OLS') → Callable[[...], tuple[ModelSpecs, str, str]]

This function maps a model-identifying search term to a model configurator function, which can create the model's meta data. Why use a string? It can be stored on RQ jobs, and it leaves more freedom: we can map multiple search terms to the same model, or vice versa (e.g. when different versions exist).

Model meta data in this context means a tuple of:
  • timetomodel.ModelSpecs. To fill in those specs, a configurator should accept:
    - old_sensor: Union[Asset, Market, WeatherSensor]
    - start: datetime  # Start of forecast period
    - end: datetime  # End of forecast period
    - horizon: timedelta  # Duration between time of forecasting and time which is forecast
    - ex_post_horizon: timedelta = None
    - custom_model_params: dict = None  # overwrite forecasting params, useful for testing or experimentation

  • a model_identifier (useful in case the model_search_term was generic, e.g. “latest”)

  • a fallback_model_search_term: a string which the forecasting machinery can use to choose a different model (using this mapping again) in case of failure.

So to implement a model, write such a function and decide here which search term(s) map(s) to it.
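
For illustration, such a configurator and its lookup entry could look roughly like this. This is a hedged sketch, not FlexMeasures' actual code: the function name naive_specs_configurator, the model identifier "naive-v1" and the _configurators mapping are assumptions; only the accepted arguments and the returned tuple follow the description above.

from datetime import datetime, timedelta
from typing import Callable, Optional


def naive_specs_configurator(
    old_sensor,                                   # an Asset, Market or WeatherSensor
    start: datetime,                              # start of forecast period
    end: datetime,                                # end of forecast period
    horizon: timedelta,                           # duration between time of forecasting and time which is forecast
    ex_post_horizon: Optional[timedelta] = None,
    custom_model_params: Optional[dict] = None,
):
    """Return (ModelSpecs, model_identifier, fallback_model_search_term)."""
    specs = ...  # build a timetomodel.ModelSpecs here, e.g. via the model_spec_factory module
    return specs, "naive-v1", "linear-OLS"  # fall back to linear-OLS in case of failure


# The lookup could then map one or more search terms to this configurator (hypothetical):
_configurators: dict[str, Callable] = {
    "naive": naive_specs_configurator,
    "naive-v1": naive_specs_configurator,  # several terms may map to one model
}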

Classes

class flexmeasures.data.models.forecasting.Forecaster(config: dict | None = None, save_config=True, save_parameters=False, **kwargs)
_clean_parameters(parameters: dict) → dict

Clean out DataGenerator parameters that should not be stored as DataSource attributes.

These parameters are already contained in the TimedBelief:

  • max_forecast_horizon: as the maximum belief horizon of the beliefs for a given event

  • forecast_frequency: as the spacing between unique belief times

  • probabilistic: as the cumulative_probability of each belief

  • sensor_to_save: as the sensor on which the beliefs are recorded

Other:

  • model_save_dir: used internally for the train and predict pipelines to save and load the model

  • output_path: for exporting forecasts to file, more of a developer feature
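
Conceptually, the cleaning described above amounts to dropping these keys from the parameters dict before it is stored. A minimal sketch, not the actual implementation:

def _clean_parameters(parameters: dict) -> dict:
    """Conceptual sketch: drop parameters already captured elsewhere."""
    cleaned = dict(parameters)  # avoid mutating the caller's dict
    for field in (
        "max_forecast_horizon",   # already the maximum belief horizon of the beliefs
        "forecast_frequency",     # already the spacing between unique belief times
        "probabilistic",          # already the cumulative_probability of each belief
        "sensor_to_save",         # already the sensor on which the beliefs are recorded
        "model_save_dir",         # runtime-only: where pipelines save and load the model
        "output_path",            # runtime-only: developer export feature
    ):
        cleaned.pop(field, None)
    return cleaned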

_compute(check_output_resolution=True, **kwargs) → list[dict[str, Any]]

This method triggers the creation of a new forecast.

The same object can generate multiple forecasts with different start, end, resolution and belief_time values.

Parameters:

check_output_resolution – If True, checks whether the event_resolution of each output matches that of the sensor on which it is supposed to be recorded.

_compute_forecast(**kwargs) → list[dict[str, Any]]

Override this method with the actual computation of your forecast.

Returns list of dictionaries, for example:

[
    {
        "sensor": 501,
        "data": <a BeliefsDataFrame>,
    },
]
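
For example, a subclass override could be shaped like this. This is a hedged sketch: the class name, the hypothetical helper _make_persistence_forecast and the hard-coded sensor id are illustrative assumptions; only the return shape follows the documentation above.

from flexmeasures.data.models.forecasting import Forecaster


class PersistenceForecaster(Forecaster):
    def _compute_forecast(self, **kwargs) -> list[dict]:
        # Build the forecasts however the model prescribes; here we pretend a
        # hypothetical helper returns a timely_beliefs.BeliefsDataFrame.
        forecast_df = self._make_persistence_forecast(**kwargs)
        return [
            {
                "sensor": 501,        # the sensor on which to record the beliefs
                "data": forecast_df,  # a BeliefsDataFrame holding the forecasts
            },
        ]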

class flexmeasures.data.models.forecasting.SuppressTorchWarning(name='')

Suppress specific Torch warnings from the Darts library about model availability.

filter(record)

Determine if the specified record is to be logged.

Returns True if the record should be logged, or False otherwise. If deemed appropriate, the record may be modified in-place.
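
For reference, the general logging.Filter pattern that SuppressTorchWarning follows looks like this. The logger name and the message pattern below are assumptions for illustration; only the filter mechanics (return False to drop a record, True to keep it) are standard-library behaviour.

import logging


class SuppressSpecificWarning(logging.Filter):
    """Drop log records whose message contains a given substring."""

    def __init__(self, name: str = "", pattern: str = "GPU available but not used"):
        super().__init__(name)
        self.pattern = pattern  # assumed message fragment to suppress

    def filter(self, record: logging.LogRecord) -> bool:
        # Return True to keep the record, False to drop it.
        return self.pattern not in record.getMessage()


# Attach to the logger that emits the warnings (the logger name is an assumption):
logging.getLogger("pytorch_lightning").addFilter(SuppressSpecificWarning())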