I tried to run preprocess_canny on a 1024x1024 bitmap. Because the memsize is hardcoded, the call raises an assert error. Increasing the memsize to 4x the original hardcoded value lets it process the big image, but I think we should calculate the memsize dynamically instead of hardcoding it.

Besides that, I think we should not free the image inside preprocess_canny, because the memory may have been allocated by a third-party language binding:

```cpp
free(img); // <---- we should remove this call
uint8_t* output = sd_tensor_to_image(image);
```
chinshou changed the title from "memsize was hardcoding in preprocess_canny function" to "memsize was hardcoded in preprocess_canny function" on May 10, 2024.