This is a question about application design and fitting Postgres concepts into modern workflows.

I have a single Timescale database running in Kubernetes with High Availability (using the brilliant Timescale-Single Helm chart). I have a Python application which is also HA, receiving streaming data via websockets and then writing this data into TimescaleDB.

I want to avoid gaps in the streamed data (websocket outages, app crashes, etc.), so my current strategy is to have the Python app replicated x3, with each app receiving the same data streams and attempting to write to the same tables. On the Timescale side, the tables being written to have UNIQUE constraints to ensure that the Python apps cannot write the same data twice.

This creates the obvious problem of deadlocks. Having read up on them a little, the general consensus seems to be that deadlocks are the result of application design flaws, so I'm open to suggestions on ways to improve my current strategy. I have tried breaking the Python app into smaller components, such that each app can only write to a single table (i.e. 3 Python apps per table… very resource hungry!), but I am still getting deadlocks. On the Python side, I am able to configure the max number of buffered transactions to write at once.

I'm not sure what I should be looking at on the Postgres side, and I definitely don't know how I can have more than one Python app performing the same role without clashes!

Hey, inserting the same data from multiple sources into a table with a UNIQUE constraint is typically an anti-pattern.
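One way to make this replicated-writer setup tolerate duplicates without tripping UNIQUE-violation errors is Postgres's `INSERT ... ON CONFLICT DO NOTHING`, combined with sorting each batch by its unique key so that every replica acquires index entries in the same order (deadlocks between identical concurrent writers usually come from lock-order inversion). A minimal sketch, assuming the psycopg2 driver and a hypothetical table `ticks(ts, symbol, price)` with `UNIQUE (ts, symbol)` — the table name and columns are illustrative, not from the post:

```python
# Sketch of a de-duplicating batch writer for replicated ingest apps.
# Assumptions (not from the original post): psycopg2 as the driver, and a
# hypothetical hypertable ticks(ts, symbol, price) with UNIQUE (ts, symbol).
from typing import Iterable, List, Tuple

Row = Tuple[str, str, float]  # (ts, symbol, price)

# ON CONFLICT DO NOTHING silently skips rows another replica already wrote,
# instead of raising a unique-violation error and aborting the transaction.
INSERT_SQL = (
    "INSERT INTO ticks (ts, symbol, price) VALUES (%s, %s, %s) "
    "ON CONFLICT (ts, symbol) DO NOTHING"
)

def ordered_batch(rows: Iterable[Row]) -> List[Row]:
    """Sort each batch by its unique key so every replica locks the same
    index entries in the same order; consistent lock ordering removes the
    lock-order inversion that causes deadlocks between identical writers."""
    return sorted(rows, key=lambda r: (r[0], r[1]))

def write_batch(conn, rows: Iterable[Row]) -> None:
    """Write one batch in a single short transaction (hedged sketch)."""
    with conn.cursor() as cur:
        cur.executemany(INSERT_SQL, ordered_batch(rows))
    conn.commit()  # keep transactions short to shrink the conflict window
```

Keeping each transaction small (one batch) also matters: the longer a transaction holds its locks, the wider the window in which two replicas can collide.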