What type of enhancement is this?
API improvement, User experience
What subsystems and features will be improved?
Continuous aggregate
What does the enhancement do?
When creating a continuous aggregate, I frequently have to change its chunk interval, because the default of 10x the underlying hypertable's chunk interval is too small. I end up with a continuous aggregate that has very small chunks, because aggregation reduces the data volume by much more than a factor of 10.
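To make the problem concrete, here is a hedged illustration (the table name and numbers are hypothetical, not from the issue):

```sql
-- A hypertable with 1-hour chunks...
SELECT create_hypertable('conditions', 'time',
                         chunk_time_interval => INTERVAL '1 hour');
-- ...gives a cagg built on it a default materialization chunk
-- interval of 10 hours. If the cagg rolls raw rows up into hourly
-- buckets, each 10-hour chunk holds only ~10 rows per series,
-- far too little data to justify a chunk.
```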
The current best way to avoid this is to create the continuous aggregate WITH NO DATA, so no chunks are made, and then run a not-very-obvious SQL statement to change the chunk interval on its materialization hypertable. Instead, I'd propose something like this:
```sql
CREATE MATERIALIZED VIEW public.my_wonderful_agg WITH (
    timescaledb.continuous,
    timescaledb.materialized_only = true,
    timescaledb.chunk_time_interval = '24h'
) AS
...
```
This would allow me to 1) avoid using WITH NO DATA (though I still might for other reasons) and 2) not have to include that not-very-obvious SQL statement to change the chunk size.
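For reference, the not-very-obvious workaround mentioned above looks roughly like this (a sketch; the view name is illustrative, and the lookup through `timescaledb_information.continuous_aggregates` is one way to find the materialization hypertable):

```sql
-- Create the cagg without materializing anything, so no chunks exist yet.
CREATE MATERIALIZED VIEW public.my_wonderful_agg
WITH (timescaledb.continuous) AS
SELECT ...
WITH NO DATA;

-- Look up the materialization hypertable behind the cagg and change
-- its chunk interval before any chunks are created.
SELECT set_chunk_time_interval(
         format('%I.%I',
                materialization_hypertable_schema,
                materialization_hypertable_name)::regclass,
         INTERVAL '24 hours')
  FROM timescaledb_information.continuous_aggregates
 WHERE view_schema = 'public'
   AND view_name   = 'my_wonderful_agg';
```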
Implementation challenges
No response
Seems so, I'm fine with closing mine in that case, but let this serve as my emphatic +1 :)
This provides a good example and API description compared to #1775, so I suggest not closing it. Just want to make sure that we close both once we fix it.
I've been using set_chunk_time_interval() directly on caggs (instead of doing a subquery for the materialization hypertable schema and name) without getting errors, and it seems to do the job. Am I wrong?
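If that works as the comment suggests, the workaround shrinks to a single call (sketched here on the example view from the issue; whether pointing `set_chunk_time_interval()` at a cagg, rather than a hypertable, is officially supported is exactly what the comment is asking):

```sql
-- Change the chunk interval by targeting the continuous aggregate
-- directly instead of its materialization hypertable.
SELECT set_chunk_time_interval('public.my_wonderful_agg', INTERVAL '24 hours');
```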