Is there an existing issue for the same feature request?
I have checked the existing issues.
Is your feature request related to a problem?
No response
Describe the feature you'd like
Infinity's internal data consists of segments and blocks, where each block is made up of a number of block columns. In the current implementation, each block column is persisted as its own file on disk, regardless of its size. This can result in a very large number of files for a single table. This feature request aims to solve that problem: a virtual filesystem serves Infinity, while in reality several block column files live inside a single physical file, which avoids creating a large number of files and reduces the chance of a 'too many open files' error.
Describe implementation you've considered
No response
Documentation, adoption, use case
No response
Additional information
No response
The goal of the virtual file system is to provide a virtual layer through which every generated file (block column, index file, delete file, etc.) is stored. Through this layer, Infinity can be connected to the local file system, or to a remote object store such as S3.
Therefore, the virtual file system needs to provide the following interfaces:
Open/Read/Write/Seek/Truncate/Close.
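A minimal sketch of what that interface could look like, with a toy in-memory backend to show the intended semantics (Python used as pseudocode here; all names are illustrative, not Infinity's actual API):

```python
from abc import ABC, abstractmethod

class VirtualFileSystem(ABC):
    """Illustrative VFS interface; method names follow the list above."""

    @abstractmethod
    def open(self, path, create=False): ...
    @abstractmethod
    def read(self, handle, size): ...
    @abstractmethod
    def write(self, handle, data): ...
    @abstractmethod
    def seek(self, handle, offset): ...
    @abstractmethod
    def truncate(self, handle, size): ...
    @abstractmethod
    def close(self, handle): ...

class InMemoryVFS(VirtualFileSystem):
    """Toy backend: each virtual file is a bytearray plus a cursor."""

    def __init__(self):
        self.files = {}      # path -> bytearray
        self.cursors = {}    # path -> current offset

    def open(self, path, create=False):
        if create:
            self.files.setdefault(path, bytearray())
        self.cursors[path] = 0
        return path          # the path doubles as the handle here

    def read(self, handle, size):
        pos = self.cursors[handle]
        data = bytes(self.files[handle][pos:pos + size])
        self.cursors[handle] += len(data)
        return data

    def write(self, handle, data):
        pos = self.cursors[handle]
        self.files[handle][pos:pos + len(data)] = data
        self.cursors[handle] += len(data)
        return len(data)

    def seek(self, handle, offset):
        self.cursors[handle] = offset
        return offset

    def truncate(self, handle, size):
        del self.files[handle][size:]

    def close(self, handle):
        self.cursors.pop(handle, None)
```

A real backend would map these calls onto virtual file blocks inside shared physical files instead of private bytearrays, but the call surface stays the same.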
In the concrete implementation, the VFS needs a metadata store that provides the mapping between physical files and virtual file blocks, as well as which virtual file blocks make up each virtual file. Metadata reads and writes are mainly key-value accesses, so the metadata store can be a KV store.
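For instance, the metadata could be kept as flat key/value pairs covering both directions of the mapping described above (the key layout and file names below are hypothetical):

```python
# Hypothetical key/value layout for the VFS metadata store:
#   vfile:<name>  -> ordered list of block ids holding that virtual file
#   block:<id>    -> (physical file name, byte offset of the block in it)
meta = {
    "vfile:seg0/blk0/col0": [17, 18],
    "vfile:seg0/blk0/col1": [19],
    "block:17": ("data_0001.pfs", 0),
    "block:18": ("data_0001.pfs", 65536),
    "block:19": ("data_0001.pfs", 131072),
}

def locate(meta, vfile_name):
    """Resolve a virtual file to the (physical file, offset) of each of its blocks."""
    return [meta[f"block:{bid}"] for bid in meta[f"vfile:{vfile_name}"]]
```

With this layout, opening a virtual file is one point lookup plus one lookup per block, which maps naturally onto any KV store.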
Each file block should have a fixed size, for example 64KB. A physical file is not fixed-size, but its size should be bounded, for example between 16 and 24MB.
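With a fixed 64KB block size, translating a virtual byte offset into a block is plain integer arithmetic, and the 16-24MB bound on a physical file works out to 256-384 blocks per file (the helper below just illustrates the math):

```python
BLOCK_SIZE = 64 * 1024  # fixed virtual block size: 64 KB

def translate(virtual_offset):
    """Map a byte offset in a virtual file to (block index, offset within block)."""
    return virtual_offset // BLOCK_SIZE, virtual_offset % BLOCK_SIZE

# A physical file bounded between 16 MB and 24 MB holds:
MIN_BLOCKS = 16 * 1024 * 1024 // BLOCK_SIZE  # 256 blocks
MAX_BLOCKS = 24 * 1024 * 1024 // BLOCK_SIZE  # 384 blocks
```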
As files are continually created and deleted, a large amount of fragmentation accumulates in the physical files and needs to be cleaned up. Since S3 will be used as the actual storage, this virtual file system layer should use physical storage in an append-only fashion. Fragment merging and cleanup operations should be logged in the database's WAL, just like the create/delete/update/write operations of the VFS.
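The append-only discipline plus compaction could be sketched like this (everything here is a hypothetical model, with WAL records represented as plain tuples):

```python
class AppendOnlyStore:
    """Toy model of one physical file: writes only append, deletes only
    mark blocks as garbage, and compaction copies live blocks to a fresh
    file. Every mutation is recorded as a WAL entry (a tuple in a list)."""

    def __init__(self):
        self.blocks = []   # physical file contents: list of (block_id, payload)
        self.dead = set()  # block ids that have become garbage
        self.wal = []      # operation log

    def append(self, block_id, payload):
        self.blocks.append((block_id, payload))  # never overwrite in place
        self.wal.append(("write", block_id))

    def delete(self, block_id):
        self.dead.add(block_id)  # no in-place mutation of the physical file
        self.wal.append(("delete", block_id))

    def fragmentation(self):
        """Fraction of blocks in the physical file that are garbage."""
        return len(self.dead) / max(len(self.blocks), 1)

    def compact(self):
        """Rewrite live blocks into a new physical file; log the merge so
        crash recovery can replay or discard it like any other operation."""
        self.blocks = [(bid, p) for bid, p in self.blocks
                       if bid not in self.dead]
        self.dead.clear()
        self.wal.append(("compact", len(self.blocks)))
```

Because deletes never touch existing bytes, this maps cleanly onto S3, where objects are immutable and compaction becomes "upload a new object, then drop the old one".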