### Bug Report

Exception in the prompt callback:

```
Fatal error. System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
```
### Example Code

```csharp
using Gpt4All;

var modelFactory = new Gpt4AllModelFactory();
if (args.Length < 2)
{
    Console.WriteLine($"Usage: Gpt4All.Samples <model-path> <prompt>");
    return;
}

var modelPath = args[0];
var prompt = args[1];

using var model = modelFactory.LoadModel(modelPath);

var result = await model.GetStreamingPredictionAsync(
    prompt,
    PredictRequestOptions.Defaults);

// The native prompt callback fires just after the call above;
// that is where the crash occurs.
await foreach (var token in result.GetPredictionStreamingAsync())
{
    Console.Write(token);
}
```
### Steps to Recreate

1. Downloaded the latest GPT4All from master (4/23/2024), commit id baf1dfc.
2. Built the native code with the PowerShell script `build_win-msvc.ps1` (win-x64), using the Developer PowerShell window in Visual Studio.
3. Opened the GPT4All solution in Visual Studio 2022 17.9.6 (64-bit) and built it as Debug x64.
4. Once built, copied the binding DLLs to the debug bin directory.
5. Started debug mode, passing in the full path to the model and a simple prompt ("What is the capital of Florida?").
6. The model appears to load fine.
7. An exception is thrown in `LLModel.cs` at `NativeMethods.llmodel_prompt`:

```
Fatal error. System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
Repeat 2 times:
   at Gpt4All.Bindings.NativeMethods.llmodel_prompt(IntPtr, System.String, LlmodelPromptCallback, LlmodelResponseCallback, LlmodelRecalculateCallback, Gpt4All.Bindings.llmodel_prompt_context ByRef)
   at Gpt4All.Bindings.LLModel.Prompt(System.String, Gpt4All.Bindings.LLModelPromptContext, System.Func`2<Gpt4All.Bindings.ModelPromptEventArgs,Boolean>, System.Func`2<Gpt4All.Bindings.ModelResponseEventArgs,Boolean>, System.Func`2<Gpt4All.Bindings.ModelRecalculatingEventArgs,Boolean>, System.Threading.CancellationToken)
   at Gpt4All.Gpt4All+<>c__DisplayClass10_0.<GetStreamingPredictionAsync>b__0()
   at System.Threading.ExecutionContext.RunFromThreadPoolDispatchLoop(System.Threading.Thread, System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
   at System.Threading.Tasks.Task.ExecuteWithThreadLocal(System.Threading.Tasks.Task ByRef, System.Threading.Thread)
   at System.Threading.ThreadPoolWorkQueue.Dispatch()
   at System.Threading.PortableThreadPool+WorkerThread.WorkerThreadStart()
```
### Expected Behavior

The model response is written to the console.

### Environment

- GPT4All: latest from master
- Model: gpt4all-13b-snoozy-q4_0.gguf
- OS: Windows 11
- Sample: the current console sample
- CPU: 8 cores
- Runtime: .NET 8
I saw others hit this exception, but during model load. Let me know what additional info might help, or any tips on debugging it. The handle passed into `Prompt` appears to be valid.
Debugging into the C++ code, the exception occurs at:

```cpp
if (size_t(ctx->n_past) < wrapper->promptContext.tokens.size())
```

in:

```cpp
void llmodel_prompt(llmodel_model model, const char *prompt,
                    const char *prompt_template,
                    llmodel_prompt_callback prompt_callback,
                    llmodel_response_callback response_callback,
                    llmodel_recalculate_callback recalculate_callback,
                    llmodel_prompt_context *ctx,
                    bool special,
                    const char *fake_reply)
```
Viewing `ctx` in a watch window shows "can't read memory" for its values, even though it is passed in from C# as `context.UnderlyingContext`, which holds valid values just before the call.

This looks like a signature mismatch between the C function and the binding code I have: neither `prompt_template` nor `special` is a parameter in the C# declaration. The C# bindings in main are out of sync with the changes to the prompt parameters in the C code, specifically `prompt_template` and the `special` flag.
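For reference, a sketch of what a matching P/Invoke declaration might look like, based on the C signature above. The delegate names come from the stack trace in this issue, but the library name, marshalling attributes, and calling convention here are my guesses, not the actual fix:

```csharp
[DllImport("llmodel", CallingConvention = CallingConvention.Cdecl)]
public static extern void llmodel_prompt(
    IntPtr model,
    [MarshalAs(UnmanagedType.LPUTF8Str)] string prompt,
    [MarshalAs(UnmanagedType.LPUTF8Str)] string prompt_template,
    LlmodelPromptCallback prompt_callback,
    LlmodelResponseCallback response_callback,
    LlmodelRecalculateCallback recalculate_callback,
    ref llmodel_prompt_context ctx,
    [MarshalAs(UnmanagedType.I1)] bool special,
    [MarshalAs(UnmanagedType.LPUTF8Str)] string fake_reply);
```

Until the binding declares the two new parameters, every argument after `prompt` lands one slot off on the native side.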
On May 1, 2024, cebtenzzre changed the title from "CSharp Binding AccessViolationException: Attempted to read or write protected memory. in LLModel Prompt Method" to "C# bindings need to be updated".