This document describes how to get the postgres database ready to use and how to maintain it (run migrations / change the structure).
We also spend a few words on coding with database transactions in mind.
Finally, we discuss how FlexMeasures uses Redis and redis queues, and, for Windows users, how to set up the Redis-based queuing system for handling (forecasting) jobs.
We use postgres 12 at the moment, but any version from 9 onwards probably works.
We assume flexmeasures for your database name and username here. You can use anything you like, of course.
Keep flexmeasures_test for the test database, as the automated tests look for that database / user / password.
sudo apt-get install postgresql
pip install psycopg2-binary
Download postgres here: https://www.enterprisedb.com/downloads/postgres-postgresql-downloads
Install it and remember your postgres user password.
Add the lib and bin directories to your Windows path: http://bobbyong.com/blog/installing-postgresql-on-windoes/
conda install psycopg2
Make sure postgres stores times in UTC. (Otherwise, pandas can get confused with daylight saving time.)
Luckily, many web hosters already have timezone = 'UTC' set correctly by default, but local postgres installations often use a different timezone.
In any case, check both your local installation and the server, like this:
Find the postgresql.conf file. (You can also type SHOW config_file; in a postgres console session, as superuser, to find its location.)
Look for the timezone setting and set it to 'UTC'.
Then restart the postgres server.
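To illustrate why UTC matters here (a small illustration, not FlexMeasures code; the date and timezone are arbitrary examples): when daylight saving time ends, the same local wall-clock time occurs twice, so local timestamps are ambiguous while UTC timestamps are not.

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

# In Europe/Amsterdam, clocks fall back on 2023-10-29, so 02:30 occurs twice.
ams = ZoneInfo("Europe/Amsterdam")
first = datetime(2023, 10, 29, 2, 30, tzinfo=ams)           # first occurrence (CEST)
second = datetime(2023, 10, 29, 2, 30, fold=1, tzinfo=ams)  # second occurrence (CET)

# Same wall-clock time, two different UTC offsets:
assert first.utcoffset() == timedelta(hours=2)
assert second.utcoffset() == timedelta(hours=1)
```

Storing everything in UTC sidesteps this ambiguity entirely.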
This may in fact not be needed:
sudo /usr/sbin/adduser flexmeasures
From the terminal:
Open a console (on Windows, use your Windows key and search for the command prompt):
Proceed to create a database as the postgres superuser (using your postgres user password):
sudo -i -u postgres
createdb -U postgres flexmeasures
createdb -U postgres flexmeasures_test
createuser --pwprompt -U postgres flexmeasures       # enter your password
createuser --pwprompt -U postgres flexmeasures_test  # enter "flexmeasures_test" as password
exit
Or, from within Postgres console:
CREATE USER flexmeasures WITH UNENCRYPTED PASSWORD 'this-is-your-secret-choice';
CREATE DATABASE flexmeasures WITH OWNER = flexmeasures;
CREATE USER flexmeasures_test WITH UNENCRYPTED PASSWORD 'flexmeasures_test';
CREATE DATABASE flexmeasures_test WITH OWNER = flexmeasures_test;
Log in as the postgres superuser and connect to your newly-created database:
sudo -u postgres psql
Add the following extensions while logged in as the postgres superuser:
CREATE EXTENSION cube; CREATE EXTENSION earthdistance;
Connect to the flexmeasures_test database and repeat creating these extensions there. Then exit.
Finally, try logging in as the flexmeasures user once:
psql -U flexmeasures --password -h 127.0.0.1 -d flexmeasures
Write
SQLALCHEMY_DATABASE_URI = "postgresql://flexmeasures:<password>@127.0.0.1/flexmeasures"
into the config file you are using, e.g. ~/flexmeasures.cfg.
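For reference, the URI follows SQLAlchemy's postgresql://user:password@host/dbname scheme. A quick sketch of how the parts fit together (the helper function and example password are purely illustrative, not part of FlexMeasures):

```python
# Illustrative only: assemble a SQLAlchemy database URI from its parts.
def make_db_uri(user: str, password: str, host: str, dbname: str) -> str:
    return f"postgresql://{user}:{password}@{host}/{dbname}"

uri = make_db_uri("flexmeasures", "my-secret", "127.0.0.1", "flexmeasures")
print(uri)  # postgresql://flexmeasures:my-secret@127.0.0.1/flexmeasures
```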
You need data to enjoy the benefits of FlexMeasures or to develop features for it. In this section, there are some ways to get started.
Import from another database¶
Here is a short recipe to import data from a FlexMeasures database (e.g. a demo database) into your local system.
On the to-be-exported database:
flexmeasures db-ops dump
Only the data gets dumped here.
Then, we create the structure in our database anew, based on the data model given by the local codebase:
flexmeasures db-ops reset
Then we import the data dump we made earlier:
flexmeasures db-ops restore <DATABASE DUMP FILENAME>
A potential alembic_version error should not prevent other data tables from being restored.
You can also choose to import a complete db dump into a freshly created database, of course.
To make sure passwords will be verified correctly when you authenticate, set the same SECURITY_PASSWORD_SALT value in your config as the one that was in use when the dumped passwords were hashed!
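To see why a mismatched salt breaks authentication, here is the general idea behind salted hashing (a simplified sketch, not Flask-Security's actual scheme):

```python
import hashlib

# Simplified salted hashing: the stored hash depends on the salt,
# so verifying with a different salt rejects even the correct password.
def hash_password(password: str, salt: str) -> str:
    return hashlib.sha256((salt + password).encode()).hexdigest()

stored = hash_password("correct-password", "salt-at-dump-time")
attempt = hash_password("correct-password", "different-salt")
assert stored != attempt  # same password, different salt -> no match
```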
Create data manually¶
First, you can get the database structure with:
flexmeasures db upgrade
If you develop code (and might want to make changes to the data model), you should also check out the maintenance section about database migrations.
You can create users with the add user command. Check it out:
flexmeasures add user --help
You can create some pre-determined asset types and data sources with this command:
flexmeasures add structure
We should instead offer CLI commands to be able to create asset types as needed.
You can create assets in the FlexMeasures UI.
Maybe a CLI command would help to script all data creation.
We still need a decent way to load in metering data, e.g. from CSV - often, a custom loading script will be necessary anyways.
You can create forecasts for your existing metered data with this command:
flexmeasures add forecasts
Check out its --help content to learn more. You can set which assets and which time window you want to forecast. Of course, making forecasts takes a while for a larger dataset.
You can also simply queue a job with this command (and run a worker to process the Redis queue).
Just to note, there are also commands to get rid of data, such as:
flexmeasures delete structure
flexmeasures delete measurements
flexmeasures delete forecasts
Check out the CLI Commands documentation for more details.
You can visualise the data model; this will generate a picture based on the model code. You can also generate a picture based on the actual database; see inside the Makefile for both commands.
Maintenance is supported with the alembic tool. It reacts automatically to almost all changes in the SQLAlchemy code. With alembic, multiple databases, such as development, staging and production databases can be kept in sync.
Run these commands from the repository root directory (read below comments first):
flexmeasures db init
flexmeasures db migrate
flexmeasures db upgrade
The first command (flexmeasures db init) is only needed once; it initialises the alembic migration tool. The second command generates a migration script from your current db model, and the third actually updates the db structure.
With every migration, you get a new migration step in migrations/versions. Be sure to add that to version control, as future calls to flexmeasures db upgrade will need those steps, and they might happen on another computer.
Hint: You can edit these migrations steps, if you want.
To be clear: the db init command is needed only at the beginning. Usually (whenever your model changed), you do:
flexmeasures db migrate --message "Please explain what you did, it helps for later"
flexmeasures db upgrade
The goal is that on any other computer, you can always execute
flexmeasures db upgrade
to have the database structure up-to-date with all migrations.
The history of migrations is at your fingertips:
flexmeasures db current
flexmeasures db history
You can move back and forth through the history:
flexmeasures db downgrade
flexmeasures db upgrade
Both of these accept a specific revision id parameter, as well.
It is really useful (and therefore an industry standard) to bundle certain database actions within a transaction. Transactions are atomic: either all the actions in them run, or the transaction gets rolled back. This keeps the database in a sane state and helps to set expectations during debugging.
Please see the package flexmeasures.data.transactional for details on how a FlexMeasures developer should make use of this concept.
If you are writing a script or a view, you will find there the necessary structural help to bundle your work in a transaction.
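As a minimal sketch of what such bundling amounts to (this shows the general pattern, not the actual flexmeasures.data.transactional API; FakeSession is a stand-in for a SQLAlchemy session):

```python
# General transaction pattern: commit if the work succeeds, roll back otherwise.
class FakeSession:
    """Stand-in for a SQLAlchemy session (illustrative only)."""

    def __init__(self):
        self.committed = False
        self.rolled_back = False

    def commit(self):
        self.committed = True

    def rollback(self):
        self.rolled_back = True


def run_in_transaction(session, work):
    """Run work(session) atomically: all of it happens, or none of it does."""
    try:
        work(session)
        session.commit()
    except Exception:
        session.rollback()
        raise


session = FakeSession()
run_in_transaction(session, lambda s: None)
assert session.committed and not session.rolled_back
```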
FlexMeasures supports jobs (e.g. forecasting) running asynchronously to the main FlexMeasures application using Redis Queue.
It relies on a Redis server, which has to be installed locally or used on a separate host. In the latter case, configure the Redis details in your FlexMeasures config file.
Forecasting jobs are usually created (and enqueued) when new data comes in via the API. To asynchronously work on these forecasting jobs, run this in a console:
flexmeasures jobs run_worker --queue forecasting
You should be able to run multiple workers in parallel, if necessary. You can add the
--name argument to keep them a bit more organized.
The FlexMeasures unit tests use fakeredis to simulate this task queueing, with no configuration required.
The first option to inspect the state of the forecasting queue is the formidable RQ dashboard. If you have admin rights, you can access it at your-flexmeasures-url/rq/, so for instance http://localhost:5000/rq/. You can also start RQ dashboard yourself (but you need to know the redis server credentials):
pip install rq-dashboard
rq-dashboard --redis-host my.ip.addr.ess --redis-password secret --redis-database 0
RQ dashboard shows you ongoing and failed jobs, and you can see the error messages of the latter, which is very useful.
Finally, you can also inspect the queue and jobs via a console (see the nice RQ documentation), which is more powerful. Here is an example of inspecting the finished jobs and their results:
from redis import Redis
from rq import Queue
from rq.job import Job
from rq.registry import FinishedJobRegistry

r = Redis("my.ip.addr.ess", port=6379, password="secret", db=2)
q = Queue("forecasting", connection=r)
finished = FinishedJobRegistry(queue=q)

finished_job_ids = finished.get_job_ids()
print("%d jobs finished successfully." % len(finished_job_ids))

job1 = Job.fetch(finished_job_ids[0], connection=r)  # fetch takes a single job id
print("Result of job %s: %s" % (job1.id, job1.result))
On Unix, the rq system is automatically set up as part of FlexMeasures’s main setup.
However, rq is not functional on Windows without the Windows Subsystem for Linux.
On Windows without WSL, FlexMeasures’s queuing system uses an extension of Redis Queue, which is also an automatically installed dependency of FlexMeasures.
However, the Redis server needs to be set up separately. Redis itself does not work on Windows, so it might be easiest to commission a Redis server in the cloud (e.g. on kamatera.com).
If you want to install Redis on Windows itself, it can be set up on a virtual machine as follows:
Download the vagrant-redis vagrant configuration and unzip vagrant-redis.zip in any folder.
Set config.vm.box = "hashicorp/precise64" in the Vagrantfile, and remove the line with the old box setting. Then run vagrant up in a Command Prompt.