flexmeasures.data.services.time_series

Functions

flexmeasures.data.services.time_series.aggregate_values(bdf_dict: dict[Any, timely_beliefs.beliefs.classes.BeliefsDataFrame]) BeliefsDataFrame
flexmeasures.data.services.time_series.collect_time_series_data(old_sensor_names: str | list[str], make_query: QueryCallType, query_window: tuple[datetime | None, datetime | None] = (None, None), belief_horizon_window: tuple[timedelta | None, timedelta | None] = (None, None), belief_time_window: tuple[datetime | None, datetime | None] = (None, None), belief_time: datetime | None = None, user_source_ids: int | list[int] = None, source_types: list[str] | None = None, exclude_source_types: list[str] | None = None, resolution: str | timedelta | None = None, sum_multiple: bool = True) tb.BeliefsDataFrame | dict[str, tb.BeliefsDataFrame]

Get time series data from one or more old sensor models and rescale and re-package it to order.

We can (lazily) look up by pickle, or load from the database. In the latter case, we rely on the time series data (power measurements and prices, at this point) having the same relevant column names (datetime, value). We require an old sensor model name or a list thereof. If the time range parameters are None, they are taken from the session. The response is a 2D BeliefsDataFrame with the column event_value. If data from multiple assets is retrieved, the results are summed. If sum_multiple is False, the response is instead a dictionary with asset names as keys, each holding a BeliefsDataFrame as its value. The response may be an empty data frame if no data exists for these assets in this time range.
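
A hedged usage sketch (Python), assuming a placeholder query callable make_power_query and hypothetical sensor names "solar-1" and "solar-2"; the call also needs an active FlexMeasures app and database context:

    from datetime import datetime, timedelta

    import pytz

    from flexmeasures.data.services.time_series import collect_time_series_data

    # Placeholder for a QueryCallType callable; the real query factories live
    # elsewhere in FlexMeasures and are not shown here.
    from my_project.queries import make_power_query  # hypothetical

    start = pytz.utc.localize(datetime(2021, 1, 1))
    end = start + timedelta(days=1)

    # Summed over assets: a single BeliefsDataFrame with an "event_value" column.
    bdf = collect_time_series_data(
        old_sensor_names=["solar-1", "solar-2"],  # hypothetical sensor names
        make_query=make_power_query,
        query_window=(start, end),
        resolution=timedelta(minutes=15),
        sum_multiple=True,
    )

    # Per asset: a dictionary of asset names to BeliefsDataFrames.
    bdf_per_asset = collect_time_series_data(
        old_sensor_names=["solar-1", "solar-2"],
        make_query=make_power_query,
        query_window=(start, end),
        resolution=timedelta(minutes=15),
        sum_multiple=False,
    )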

flexmeasures.data.services.time_series.convert_query_window_for_demo(query_window: tuple[datetime.datetime, datetime.datetime]) tuple[datetime.datetime, datetime.datetime]
flexmeasures.data.services.time_series.drop_non_unique_ids(a: int | list[int], b: int | list[int]) list[int]

Removes all elements from b that are already in a.
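
A minimal sketch of the expected behaviour, reading the docstring literally (the exact return values are an assumption, not taken from the implementation):

    from flexmeasures.data.services.time_series import drop_non_unique_ids

    # Elements of b that already occur in a are dropped from the result.
    drop_non_unique_ids(a=[1, 2, 3], b=[2, 3, 4])  # expected: [4]
    drop_non_unique_ids(a=1, b=[1, 5])             # expected: [5]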

flexmeasures.data.services.time_series.drop_unchanged_beliefs(bdf: BeliefsDataFrame) BeliefsDataFrame

Drop beliefs that are already stored in the database with an earlier belief time.

Also drop beliefs that are already in the data with an earlier belief time.

This is useful to prevent cluttering up your database with beliefs that remain unchanged over time.
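
A hedged usage sketch; bdf stands for a timely_beliefs BeliefsDataFrame of newly arrived beliefs for a sensor already known to the database, and the call assumes an active FlexMeasures app and database context:

    from flexmeasures.data.services.time_series import drop_unchanged_beliefs

    # bdf: a BeliefsDataFrame with new beliefs (construction not shown).
    # Beliefs whose values are already stored in the database, or already
    # present in bdf itself, under an earlier belief time are dropped.
    bdf_to_save = drop_unchanged_beliefs(bdf)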

flexmeasures.data.services.time_series.find_sensor_by_name(name: str)

Helper function: find a sensor by name. TODO: make obsolete once we switch to collecting sensor data by sensor id rather than by name.
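
A minimal sketch of such a name-based lookup, assuming the Sensor model from flexmeasures.data.models.time_series; the actual helper may query differently or raise a different error:

    from flexmeasures.data.models.time_series import Sensor

    def find_sensor_by_name(name: str) -> Sensor:
        # Look up the sensor by its (assumed unique) name; fail loudly if unknown.
        sensor = Sensor.query.filter(Sensor.name == name).one_or_none()
        if sensor is None:
            raise Exception(f"Unknown sensor: {name}")
        return sensor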

flexmeasures.data.services.time_series.query_time_series_data(old_sensor_names: tuple[str], make_query: QueryCallType, query_window: tuple[datetime | None, datetime | None] = (None, None), belief_horizon_window: tuple[timedelta | None, timedelta | None] = (None, None), belief_time_window: tuple[datetime | None, datetime | None] = (None, None), belief_time: datetime | None = None, user_source_ids: int | list[int] | None = None, source_types: list[str] | None = None, exclude_source_types: list[str] | None = None, resolution: str | timedelta | None = None) dict[str, tb.BeliefsDataFrame]

Run a query for time series data on the database for a tuple of assets. Here, we need to know that Postgres only stores naive datetimes, which we keep in UTC; therefore, we localize the result. Then we resample the result to fit the given resolution. A usage sketch follows below.

  • Returns a dictionary of asset names (as keys) and BeliefsDataFrames (as values), with each BeliefsDataFrame having an “event_value” column.

  • Note that we convert string resolutions to datetime.timedelta objects.
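
A hedged usage sketch, again with a placeholder query callable and hypothetical sensor names, assuming an active FlexMeasures app and database context:

    from datetime import datetime, timedelta

    import pytz

    from flexmeasures.data.services.time_series import query_time_series_data

    from my_project.queries import make_power_query  # placeholder, not part of FlexMeasures

    start = pytz.utc.localize(datetime(2021, 1, 1))
    end = start + timedelta(days=1)

    # Naive UTC datetimes from Postgres are localized to UTC and the data is
    # resampled to the 15-minute resolution before being returned per asset.
    bdf_dict = query_time_series_data(
        old_sensor_names=("solar-1", "solar-2"),  # hypothetical sensor names
        make_query=make_power_query,
        query_window=(start, end),
        resolution=timedelta(minutes=15),
    )
    for asset_name, bdf in bdf_dict.items():
        print(asset_name, bdf["event_value"].head())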

flexmeasures.data.services.time_series.set_bdf_source(bdf: BeliefsDataFrame, source_name: str) BeliefsDataFrame

Set the source of the BeliefsDataFrame. We do this by re-setting the index (as the source is probably part of the BeliefsDataFrame's multi-index), setting the source, then restoring the (multi-)index.
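
A minimal sketch of that reset-and-restore pattern, assuming the standard timely_beliefs index levels (event_start, belief_time, source, cumulative_probability); the actual implementation may take extra care to preserve the sensor metadata:

    import timely_beliefs as tb

    def relabel_source(bdf: tb.BeliefsDataFrame, source_name: str) -> tb.BeliefsDataFrame:
        # Flatten the belief multi-index, overwrite the source column,
        # then restore the multi-index.
        df = bdf.reset_index()
        df["source"] = tb.BeliefSource(source_name)
        return df.set_index(
            ["event_start", "belief_time", "source", "cumulative_probability"]
        )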