
Database support #134

Open
a18090 opened this issue Apr 20, 2024 · 8 comments
a18090 commented Apr 20, 2024

Many thanks for providing this project,

I ran into a very painful problem while using it: my collection is very large, about 600,000 images and videos, and whenever I interrupt a run with CTRL+C, the next run reports that the local database file is damaged.
This is a nightmare. I am wondering whether the database could be moved into MySQL or MariaDB so that re-running would not put my data at risk.
My guess is that the interruption happens while the process is writing the database, which causes the corruption, but with this many images a full reprocessing can take a week each time.

Thanks again

xemle (Owner) commented Apr 20, 2024

Hi @a18090

Thank you for using HomeGallery and sharing your issue here. And congratulations: you are the first person I know of who is using 600k media files. That is incredible! I am using only 100k and a friend is using 400k, so you are leading the list!

So you report that your database is corrupt? You can rebuild your database via the CLI by running the importer: ./gallery.js run import. The exact call of the CLI depends on your platform.

The rebuild will rescan your files, extract all meta data, create previews (skipping those that already exist), and rebuild the database. See the internals section of the docs for details on the building blocks.
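
For example, a minimal sketch of the rebuild (the exact binary depends on your setup: gallery.js when running from source, or the prebuilt home-gallery binary with an explicit config as used later in this thread):

./gallery.js run import
# or, with the prebuilt binary and a config file:
./home-gallery -c home-gallery.conf run import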

Since I do not have experience with 600k media, that is the answer in theory. Whether it will work on your machine I do not know.

Regarding another database backend like MySQL etc.: the design decision was not to use a standard RDBMS but to load the whole database into the browser, in order to get a snappy user experience. This decision comes with a tradeoff: it cannot scale endlessly. So I am curious how your experience is with 600k media. I would appreciate it if you shared this information:

  • How is your general setup (OS or docker)?
  • What is your main workflow (desktop or mobile)?
  • How long is the loading time of your database?
  • What is the general user experience when searching in the UI (are the results shown "fast" enough)?

Hope that helps

a18090 (Author) commented Apr 21, 2024

Hi @xemle

1. How is your general setup (OS or docker)?
I tried docker, but there were some problems: in my environment network traffic gets fragmented when using WireGuard, so I deployed directly on the OS (binary). I use an SSD for the cache, while the images and videos are on an HDD.
I also set the video conversion to h265 in the conf file (trying to reduce the video file size): my source files are about 4T, and the converted files are about 1.6T. When indexing such a large number of files the CPU and memory consumption is not too high, but ffmpeg's CPU usage is very high during video conversion.

2. What is your main workflow (desktop or mobile)?
My Android phone can still handle the "Year" page at around 200,000 items; beyond that I can only enter the site through "tags". On the desktop the same problem appears at around 400,000 items: I cannot open the "Year" page directly, but I can get in through "tags". I have tried Apple, but the results are mixed and I cannot judge.
If I open the "Year" tab directly in the desktop environment, Chrome occupies more than 2G of RAM and is stuck for about 20 minutes with no way to interact. However, entering through "tags" first and letting the data load before switching to "Year" works normally.
Android: 8G RAM + Snapdragon 8 Gen 3 + Chrome
Desktop: 16G RAM + Intel 1260P + Chrome
Apple: iPhone 13 Pro + Chrome

3. How long is the loading time of your database?
My database is on an SSD and loads very quickly, probably taking less than a minute before it starts processing new media, as my data keeps growing.

4. What is the general user experience when searching in the UI (are the results shown "fast" enough)?
Searches basically finish within seconds; most of the time I hardly notice them. For example, similar-face searches and file-name searches take about the same time.
I remember my database file was about 200M when it was damaged, and the index was about 200-400M.

Regarding video conversion, I am wondering whether, in a LAN environment, I could load the source files directly and have the program only process the exif data.
I found that most of my videos are h264 or newer, so Chrome should be able to play them directly (there may be audio playback issues).
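
For example, one way to check a video's codec before deciding to skip conversion (a sketch using ffprobe, which the gallery already calls natively per my config below; input.mp4 is a placeholder):

ffprobe -v error -select_streams v:0 -show_entries stream=codec_name -of default=noprint_wrappers=1:nokey=1 input.mp4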

The main problems I encountered are as follows:
1. The API server may disappear suddenly, but I don't know when it crashes. I am running it in a docker environment. (I guess this may have something to do with my concurrency being too high.)
2. When I entered the site through a non-"tags" page (I tagged all items whose EXIF contains GPS with tag:geo), I found that the map could not load all the data properly, and sometimes items might be missing.
I regularly run the search latitude in [0.0000:90.0000] and longitude in [90.0000:180.0000] (as search/latitude%20in%20[0.0000:90.0000]%20and%20longitude%20in%20[90.0000:180.0000]) and then manually add tags:geo.
3. Usually I execute it directly through ./home-gallery -c home-gallery.conf and then "start" directly.

Thanks again xemle. I don't know JavaScript; I tried using AI to help me add database support, but the results were ridiculous since I only know Python programming.

xemle (Owner) commented Apr 22, 2024

Hi @a18090

Thank you for the detailed answer. Were you able to fix your database issue?

> 2. What is your main workflow (desktop or mobile)? My Android phone can still handle the "Year" page at around 200,000 items; beyond that I can only enter the site through "tags". […]

I am surprised that the tags page works but the others do not. It needs some investigation why it works only partially.

> Regarding video conversion, I am wondering whether, in a LAN environment, I could load the source files directly and have the program only process the exif data. […]

Currently, serving original files such as videos is not supported, even if the browser could play them back. This has been asked several times, e.g. in #96 and #25. Since I develop this gallery for my own needs, I have quite a few old video formats without native browser support, and I prefer to keep the original files separate from the gallery files.

Maybe some day there will be a plugin system which makes it easy to extend and customize this functionality.

> The main problems I encountered are as follows:
> 1. The API server may disappear suddenly, but I don't know when it crashes. I am running it in a docker environment. (I guess this may have something to do with my concurrency being too high.)

Please try to reduce the extractor.apiServer.concurrent setting to 1 if you face problems.
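
In the gallery config this would be (a minimal sketch matching the config structure shown later in this thread):

extractor:
  apiServer:
    concurrent: 1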

> 2. When I entered the site through a non-"tags" page (I tagged all items whose EXIF contains GPS with tag:geo), I found that the map could not load all the data properly […] and then manually add tags:geo.

There is a search term for this: exists(geo) matches only media with geo information, and not exists(geo) matches media without geo information.
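
For example (a hedged sketch, assuming terms combine with and/not the same way as in your latitude/longitude query), a search like

exists(geo) and not tags:geo

should list media with GPS data that still lack your manual geo tag.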

> 3. Usually I execute it directly through ./home-gallery -c home-gallery.conf and then "start" directly.

Have you tried running ./home-gallery -c home-gallery.conf run server? It does not make a difference; it just starts the server directly.

> Thanks again xemle. I don't know JavaScript; I tried using AI to help me add database support […]

So what is your biggest pain point with HomeGallery currently? Can you work with it? Did you try other self-hosted galleries? How do they perform on your large media set? Which features do you prefer in HomeGallery? Which features do you like in others?

a18090 (Author) commented Apr 23, 2024

Hi @xemle

My database does not seem to have been repaired, but it is not a serious problem. I will probably exclude the video files next time I rebuild so it runs faster (since I have almost 110,000 videos), haha.

I am still curious about the difference between the "year" and "tags" entry points. I will try both pages again the next time I re-import, and then look at the logs and Chrome's behavior.

The video problem is not very serious; after all, I lower the bit rate and preset when converting to increase the conversion speed.

I am going to try reducing extractor.apiServer.concurrent to 1 and test again.

I like HomeGallery's face search and similar-image search; these two functions are very helpful. I will try to restart HomeGallery in the next few days. This time I may back up the database file regularly to reduce the risk.

I have tried other apps and there are some issues.
Photoprism: it is very smooth when managing many files; the cache occupies about ~180G, and videos are transcoded on click. But its face recognition accuracy is a big problem (for me), and its database file is already 4G, which I expect to keep growing.

I feel like several of the self-hosted galleries I have used have database issues.
When using Photoprism I rebuilt the database nearly three times before I found the solution: when it is started with docker-compose, any Ctrl+C interruption can corrupt the database, and irreversibly so.
This is perhaps my biggest problem with self-hosted galleries, but I am trying various ways to work around it, such as regularly backing up the database file or avoiding casual Ctrl+C interruptions.
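
For example, a rough backup sketch (paths as in my gallery.config.yml posted below; the .bak names are just placeholders):

cp /data/ssd/glh/config/database.db /data/ssd/glh/config/database.db.bak
cp /data/ssd/glh/config/events.db /data/ssd/glh/config/events.db.bak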

a18090 (Author) commented May 5, 2024

{"level":20,"time":"2024-05-04T17:36:54.060Z","pid":1358080,"hostname":"cdn","module":"stream.pipe","levelName":"debug","duration":41646,"msg":"Processed entry dir cache 211 (+1)"} {"level":20,"time":"2024-05-04T17:36:54.060Z","pid":1358080,"hostname":"cdn","module":"stream.memory","levelName":"debug","msg":"Used memory: rss: 1589.9 MB, heapTotal: 1067.0 MB, heapUsed: 986.9 MB, external: 42.8 MB, arrayBuffers: 40.6 MB"} {"level":20,"time":"2024-05-04T17:37:11.586Z","pid":1358080,"hostname":"cdn","module":"stream.pipe","levelName":"debug","duration":17526,"msg":"Processed entry dir cache 214 (+3)"} {"level":20,"time":"2024-05-04T17:37:21.964Z","pid":1358080,"hostname":"cdn","module":"stream.pipe","levelName":"debug","duration":10378,"msg":"Processed entry dir cache 215 (+1)"} {"level":20,"time":"2024-05-04T17:37:36.514Z","pid":1358080,"hostname":"cdn","module":"stream.pipe","levelName":"debug","duration":14551,"msg":"Processed entry dir cache 216 (+1)"} {"level":20,"time":"2024-05-04T17:37:36.514Z","pid":1358080,"hostname":"cdn","module":"stream.memory","levelName":"debug","msg":"Used memory: rss: 2731.6 MB, heapTotal: 2087.8 MB, heapUsed: 2030.5 MB, external: 153.8 MB, arrayBuffers: 151.6 MB"} {"level":20,"time":"2024-05-04T17:37:56.699Z","pid":1358080,"hostname":"cdn","module":"stream.pipe","levelName":"debug","duration":20185,"msg":"Processed entry dir cache 217 (+1)"} {"level":20,"time":"2024-05-04T17:38:13.818Z","pid":1358080,"hostname":"cdn","module":"stream.pipe","levelName":"debug","duration":17119,"msg":"Processed entry dir cache 219 (+2)"} {"level":20,"time":"2024-05-04T17:38:13.818Z","pid":1358080,"hostname":"cdn","module":"stream.memory","levelName":"debug","msg":"Used memory: rss: 3906.7 MB, heapTotal: 3241.7 MB, heapUsed: 3163.1 MB, external: 90.8 MB, arrayBuffers: 88.6 MB"} {"level":20,"time":"2024-05-04T17:38:31.090Z","pid":1358080,"hostname":"cdn","module":"stream.pipe","levelName":"debug","duration":17272,"msg":"Processed entry dir cache 220 (+1)"} {"level":20,"time":"2024-05-04T17:39:10.659Z","pid":1358080,"hostname":"cdn","module":"stream.pipe","levelName":"debug","duration":39569,"msg":"Processed entry dir cache 226 (+6)"} {"level":20,"time":"2024-05-04T17:39:10.659Z","pid":1358080,"hostname":"cdn","module":"stream.memory","levelName":"debug","msg":"Used memory: rss: 4555.3 MB, heapTotal: 3800.8 MB, heapUsed: 3705.3 MB, external: 185.8 MB, arrayBuffers: 183.6 MB"} {"level":30,"time":"2024-05-04T17:39:10.660Z","pid":1358080,"hostname":"cdn","module":"extractor","levelName":"info","duration":144649,"msg":"Processed 146217 entries (#146217, +23441, processing 115142 and queued 62 entries)"} {"level":20,"time":"2024-05-04T17:39:35.163Z","pid":1358080,"hostname":"cdn","module":"stream.pipe","levelName":"debug","duration":24504,"msg":"Processed entry dir cache 227 (+1)"} {"level":20,"time":"2024-05-04T17:39:51.912Z","pid":1357985,"hostname":"cdn","module":"cli.spawn","levelName":"debug","spawn":{"env":{"GALLERY_LOG_LEVEL":"trace","GALLERY_LOG_JSON_FORMAT":"true"},"command":"/tmp/caxa/home-gallery/master/home-gallery/node/bin/node","args":["/tmp/caxa/home-gallery/master/home-gallery/gallery.js","extract","--index","/data/ssd/glh/config/tdl.idx"],"pid":1358080,"code":null,"signal":"SIGABRT","cmd":"GALLERY_LOG_LEVEL=trace GALLERY_LOG_JSON_FORMAT=true /tmp/caxa/home-gallery/master/home-gallery/node/bin/node /tmp/caxa/home-gallery/master/home-gallery/gallery.js extract --index /data/ssd/glh/config/tdl.idx"},"duration":29365830,"msg":"Executed cmd GALLERY_LOG_LEVEL=trace 
GALLERY_LOG_JSON_FORMAT=true /tmp/caxa/home-gallery/master/home-gallery/node/bin/node /tmp/caxa/home-gallery/master/home-gallery/gallery.js extract --index /data/ssd/glh/config/tdl.idx"} {"level":30,"time":"2024-05-04T17:39:51.913Z","pid":1357985,"hostname":"cdn","module":"cli.spawn","levelName":"info","msg":"Cli extract --index /data/ssd/glh/config/tdl.idx exited by signal SIGABRT"}

Hi @xemle

I was rebuilding the database and directly tested
./gallery -c gallery.config.yml run
But the data was incomplete: the webpage only showed ~270,000 images (I excluded videos). Later I tried re-indexing just the small files, but it reported an error. It seems the memory overflowed?

The server has 64G of memory, of which 18G is currently in use.

xemle (Owner) commented May 6, 2024

Hi @a18090

Thank you for your report.

From the logs I cannot tell a lot. The rebuild command ./gallery -c gallery.config.yml run is missing the subcommand. Are you running server or import?

I can see that heapUsed is quite large, at about 4 GB (heapUsed: 3705.3 MB). To be honest, I cannot explain from the algorithm why the heap consumption is so high (it is also high on my side). My assumption is that the memory consumption of the database creation step should not be that high, since the end product is only a few hundred MB, but in the end it is. I was able to fix the issue on my side by increasing the --max-old-space-size node option for the database creation. This might not be enough on your side...

You can try to set 8 GB for the database creation in your gallery config. Maybe it helps:

database:
  maxMemory: 8192
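
For reference, maxMemory is assumed here to map to the --max-old-space-size node option mentioned above. A quick standalone way to check the effective V8 heap limit (in MB), with and without the override:

node -e "console.log(require('v8').getHeapStatistics().heap_size_limit / 1024 / 1024)"
node --max-old-space-size=8192 -e "console.log(require('v8').getHeapStatistics().heap_size_limit / 1024 / 1024)"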

To fix this on your side I need to investigate node's memory management further and check what can be improved to keep the memory consumption low.

a18090 (Author) commented May 8, 2024

Hi @xemle

Thank you for your reply.
I have configured it in the settings file gallery.config.yml:

database:
  file: '{configDir}/database.db'
  maxMemory: 40960

So it seems to be a problem at runtime?

I tried cat gallery.log | grep heapTotal.

During operation, the cached data goes directly into memory and is not written to database.db.

{"level":20,"time":"2024-05-04T16:56:10.906Z","pid":1358080,"hostname":"cdn","module":"stream.memory","levelName":"debug","msg":"Used memory: rss: 1569.5 MB, heapTotal: 1077.9 MB, heapUsed: 1036.1 MB, external: 27.6 MB, arrayBuffers: 25.4 MB"}
{"level":20,"time":"2024-05-04T16:57:52.387Z","pid":1358080,"hostname":"cdn","module":"stream.memory","levelName":"debug","msg":"Used memory: rss: 1461.6 MB, heapTotal: 968.8 MB, heapUsed: 930.8 MB, external: 28.7 MB, arrayBuffers: 26.5 MB"}
{"level":20,"time":"2024-05-04T16:58:33.444Z","pid":1358080,"hostname":"cdn","module":"stream.memory","levelName":"debug","msg":"Used memory: rss: 1591.6 MB, heapTotal: 1098.9 MB, heapUsed: 1057.1 MB, external: 28.3 MB, arrayBuffers: 26.1 MB"}
{"level":20,"time":"2024-05-04T16:59:40.639Z","pid":1358080,"hostname":"cdn","module":"stream.memory","levelName":"debug","msg":"Used memory: rss: 1567.6 MB, heapTotal: 1074.5 MB, heapUsed: 1032.9 MB, external: 27.0 MB, arrayBuffers: 24.8 MB"}
{"level":20,"time":"2024-05-04T17:02:46.419Z","pid":1358080,"hostname":"cdn","module":"stream.memory","levelName":"debug","msg":"Used memory: rss: 1594.4 MB, heapTotal: 1100.1 MB, heapUsed: 1060.8 MB, external: 28.4 MB, arrayBuffers: 26.2 MB"}
{"level":20,"time":"2024-05-04T17:05:10.364Z","pid":1358080,"hostname":"cdn","module":"stream.memory","levelName":"debug","msg":"Used memory: rss: 1516.5 MB, heapTotal: 1021.8 MB, heapUsed: 970.3 MB, external: 20.4 MB, arrayBuffers: 18.2 MB"}
{"level":20,"time":"2024-05-04T17:09:11.229Z","pid":1358080,"hostname":"cdn","module":"stream.memory","levelName":"debug","msg":"Used memory: rss: 1533.8 MB, heapTotal: 1037.5 MB, heapUsed: 986.1 MB, external: 21.8 MB, arrayBuffers: 19.6 MB"}
{"level":20,"time":"2024-05-04T17:13:21.719Z","pid":1358080,"hostname":"cdn","module":"stream.memory","levelName":"debug","msg":"Used memory: rss: 1517.0 MB, heapTotal: 1020.7 MB, heapUsed: 974.6 MB, external: 21.3 MB, arrayBuffers: 19.1 MB"}
{"level":20,"time":"2024-05-04T17:16:34.564Z","pid":1358080,"hostname":"cdn","module":"stream.memory","levelName":"debug","msg":"Used memory: rss: 1623.2 MB, heapTotal: 1126.8 MB, heapUsed: 1073.8 MB, external: 22.5 MB, arrayBuffers: 20.3 MB"}
{"level":20,"time":"2024-05-04T17:17:10.270Z","pid":1358080,"hostname":"cdn","module":"stream.memory","levelName":"debug","msg":"Used memory: rss: 1488.8 MB, heapTotal: 991.7 MB, heapUsed: 943.5 MB, external: 20.1 MB, arrayBuffers: 17.9 MB"}
{"level":20,"time":"2024-05-04T17:17:54.771Z","pid":1358080,"hostname":"cdn","module":"stream.memory","levelName":"debug","msg":"Used memory: rss: 1625.5 MB, heapTotal: 1128.3 MB, heapUsed: 1080.7 MB, external: 22.7 MB, arrayBuffers: 20.5 MB"}
{"level":20,"time":"2024-05-04T17:18:44.165Z","pid":1358080,"hostname":"cdn","module":"stream.memory","levelName":"debug","msg":"Used memory: rss: 1542.6 MB, heapTotal: 1045.3 MB, heapUsed: 1005.2 MB, external: 28.1 MB, arrayBuffers: 25.9 MB"}
{"level":20,"time":"2024-05-04T17:19:59.286Z","pid":1358080,"hostname":"cdn","module":"stream.memory","levelName":"debug","msg":"Used memory: rss: 1537.4 MB, heapTotal: 1040.1 MB, heapUsed: 999.7 MB, external: 27.7 MB, arrayBuffers: 25.5 MB"}
{"level":20,"time":"2024-05-04T17:21:13.734Z","pid":1358080,"hostname":"cdn","module":"stream.memory","levelName":"debug","msg":"Used memory: rss: 1559.5 MB, heapTotal: 1062.2 MB, heapUsed: 1018.5 MB, external: 24.9 MB, arrayBuffers: 22.7 MB"}
{"level":20,"time":"2024-05-04T17:22:55.906Z","pid":1358080,"hostname":"cdn","module":"stream.memory","levelName":"debug","msg":"Used memory: rss: 1451.1 MB, heapTotal: 953.5 MB, heapUsed: 884.3 MB, external: 20.6 MB, arrayBuffers: 18.4 MB"}
{"level":20,"time":"2024-05-04T17:27:09.221Z","pid":1358080,"hostname":"cdn","module":"stream.memory","levelName":"debug","msg":"Used memory: rss: 1508.3 MB, heapTotal: 1012.0 MB, heapUsed: 966.9 MB, external: 22.0 MB, arrayBuffers: 19.8 MB"}
{"level":20,"time":"2024-05-04T17:30:00.773Z","pid":1358080,"hostname":"cdn","module":"stream.memory","levelName":"debug","msg":"Used memory: rss: 1571.2 MB, heapTotal: 1057.0 MB, heapUsed: 1005.2 MB, external: 20.7 MB, arrayBuffers: 18.5 MB"}
{"level":20,"time":"2024-05-04T17:31:24.513Z","pid":1358080,"hostname":"cdn","module":"stream.memory","levelName":"debug","msg":"Used memory: rss: 1557.7 MB, heapTotal: 1042.8 MB, heapUsed: 946.9 MB, external: 20.2 MB, arrayBuffers: 18.0 MB"}
{"level":20,"time":"2024-05-04T17:32:22.989Z","pid":1358080,"hostname":"cdn","module":"stream.memory","levelName":"debug","msg":"Used memory: rss: 1719.9 MB, heapTotal: 1204.9 MB, heapUsed: 1158.5 MB, external: 25.1 MB, arrayBuffers: 22.9 MB"}
{"level":20,"time":"2024-05-04T17:34:29.942Z","pid":1358080,"hostname":"cdn","module":"stream.memory","levelName":"debug","msg":"Used memory: rss: 1503.8 MB, heapTotal: 1024.0 MB, heapUsed: 948.0 MB, external: 21.9 MB, arrayBuffers: 19.6 MB"}
{"level":20,"time":"2024-05-04T17:36:12.414Z","pid":1358080,"hostname":"cdn","module":"stream.memory","levelName":"debug","msg":"Used memory: rss: 1748.3 MB, heapTotal: 1223.9 MB, heapUsed: 1195.1 MB, external: 21.2 MB, arrayBuffers: 19.0 MB"}
{"level":20,"time":"2024-05-04T17:36:54.060Z","pid":1358080,"hostname":"cdn","module":"stream.memory","levelName":"debug","msg":"Used memory: rss: 1589.9 MB, heapTotal: 1067.0 MB, heapUsed: 986.9 MB, external: 42.8 MB, arrayBuffers: 40.6 MB"}
{"level":20,"time":"2024-05-04T17:37:36.514Z","pid":1358080,"hostname":"cdn","module":"stream.memory","levelName":"debug","msg":"Used memory: rss: 2731.6 MB, heapTotal: 2087.8 MB, heapUsed: 2030.5 MB, external: 153.8 MB, arrayBuffers: 151.6 MB"}
{"level":20,"time":"2024-05-04T17:38:13.818Z","pid":1358080,"hostname":"cdn","module":"stream.memory","levelName":"debug","msg":"Used memory: rss: 3906.7 MB, heapTotal: 3241.7 MB, heapUsed: 3163.1 MB, external: 90.8 MB, arrayBuffers: 88.6 MB"}
{"level":20,"time":"2024-05-04T17:39:10.659Z","pid":1358080,"hostname":"cdn","module":"stream.memory","levelName":"debug","msg":"Used memory: rss: 4555.3 MB, heapTotal: 3800.8 MB, heapUsed: 3705.3 MB, external: 185.8 MB, arrayBuffers: 183.6 MB"}

My gallery.config.yml:

configDir: '/data/ssd/glh/config'
cacheDir: '/data/ssd/glh/cache'

sources:
  - dir: '/data/hdd/tdl'
    excludes:
      - '*.mp4'
      - '*.m4a'
      - '*.mov'
    index: '{configDir}/{basename(dir)}.idx'
    maxFilesize: 20M

extractor:
  apiServer:
    url: http://127.0.0.1:3001
    timeout: 30
    concurrent: 1
  geoReverse:
    url: https://nominatim.openstreetmap.org
  useNative:
    - ffprobe
    - ffmpeg

storage:
  dir: '{cacheDir}'

database:
  file: '{configDir}/database.db'
  maxMemory: 40960
events:
  file: '{configDir}/events.db'

server:
  port: 3000
  host: '0.0.0.0'
  openBrowser: false
  auth:
    users:
      - xxx: '{SHA}xxxx'
  basePath: /
  watchSources: true

logger:
  - type: console
    level: info
  - type: file
    level: debug
    file: '/data/ssd/glh/config/gallery.log'

xemle (Owner) commented May 13, 2024

Hi @a18090

Thank you for your logs. It seems that the memory consumption on the heap grows too much at the end of your logs.

As I wrote earlier: I currently cannot explain why such high consumption is required. The time to investigate a case like this is also very limited at the moment, since I am using my spare time to work on the plugin system to open the gallery to custom functions.

I am very sorry, but I cannot help here at the moment. I will keep it in mind, because it bugs me that the consumption is that high while in theory it should be slim.
