
v2.6.2

@cebtenzzre released this 01 Feb 17:07

What's Changed

  • Fix crash when deserializing chats with saved context from 2.5.x and earlier (#1859)
  • New light mode and dark mode UI themes (#1876)
  • Update to latest llama.cpp after merge of Nomic's Vulkan PR (#1819, #1883)
  • Much faster prompt processing on Linux and Windows thanks to re-enabled GPU support in the Vulkan backend
  • Support offloading only some layers of the model if you have less VRAM (#1890); a usage sketch follows this list
  • Support Maxwell and Pascal Nvidia GPUs (#1895)
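
Partial GPU offload can also be exercised from the Python bindings. Below is a minimal sketch, assuming the gpt4all package's `device` and `ngl` parameters; the model file name is illustrative.

```python
# Minimal sketch: load a model with only part of it offloaded to the GPU.
# Assumes the gpt4all Python bindings (pip install gpt4all); the model file
# name is an example and is downloaded if not already present locally.
from gpt4all import GPT4All

model = GPT4All(
    "mistral-7b-instruct-v0.1.Q4_0.gguf",  # example model file
    device="gpu",  # use the Vulkan GPU backend
    ngl=20,        # offload only 20 layers; the rest run on the CPU
)
print(model.generate("Hello", max_tokens=32))
```

Lowering `ngl` trades speed for memory: fewer layers on the GPU means less VRAM pressure, at the cost of slower prompt processing.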

Fixes

  • Don't show "retrieving localdocs" if there are no collections (#1874)
  • Fix potential crash when model loading fails due to insufficient VRAM (6db5307, Issue #1870)
  • Fix VRAM leak when switching models (Issue #1840)
  • Support Nomic Embed as the LocalDocs embedding model via Atlas (d14b95f); a sketch of the Atlas call follows this list
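
For reference, here is a minimal sketch of requesting Nomic Embed embeddings through Atlas with the nomic Python client; the call and the model identifier are assumptions based on the public Atlas API, not the app's internal code.

```python
# Minimal sketch: embed text with Nomic Embed via the Atlas API.
# Assumes the nomic Python client (pip install nomic) and a prior
# `nomic login <API key>`; the model identifier is an assumption.
from nomic import embed

response = embed.text(
    texts=["What's new in GPT4All v2.6.2?"],
    model="nomic-embed-text-v1",
)
print(len(response["embeddings"][0]))  # embedding dimensionality
```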

Full Changelog: v2.6.1...v2.6.2