
@DaramG commented Nov 16, 2023

There was an internal server error when running run-mac.sh on a Mac.
To fix this, I updated llama_cpp_python to the latest version.
This will resolve #73 and #95.
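Since the fix is just a dependency bump, it's easy to end up with a stale install after pulling the change. After reinstalling (e.g. `pip install --upgrade llama-cpp-python`), you can confirm which version the server will actually pick up with a quick check like this (a minimal sketch; the helper name is mine, not part of the repo):

```python
from importlib.metadata import PackageNotFoundError, version

def installed_llama_cpp_version():
    """Return the installed llama-cpp-python version string, or None if absent."""
    try:
        return version("llama-cpp-python")
    except PackageNotFoundError:
        return None
```

If this returns an older version than the one pinned in the requirements, the upgrade didn't take effect in the environment the server runs in.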

@henriquezago

It didn't solve my issue (#95).

@adevart commented Apr 14, 2024

This worked for me, thanks. I was having the same issue as #95. I updated the version number, restarted the server, and the model loaded fine.

I get a similar error when loading the 7b chat model, but that's because it's in .bin format instead of .gguf like code-7b. It produces the following error, which shows up as the 500 loading error in the UI:
gguf_init_from_file: invalid magic characters tjgg.
error loading model: llama_model_loader: failed to load model from ./models/llama-2-7b-chat.bin
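The "invalid magic characters tjgg" message comes from the file's first four bytes: GGUF models start with the literal bytes `GGUF`, while old ggml/ggjt `.bin` checkpoints start with `tjgg` (the ggjt magic written little-endian). You can tell the two apart before handing a file to the server with a check like this (a minimal sketch; the function name and any paths are illustrative, not part of the project):

```python
def detect_model_format(path):
    """Return 'gguf' if the file has the GGUF magic, else a legacy/unknown label.

    GGUF files begin with the 4 bytes b'GGUF'; older ggml/ggjt .bin
    checkpoints begin with b'tjgg', which is what the loader error prints.
    """
    with open(path, "rb") as f:
        magic = f.read(4)
    if magic == b"GGUF":
        return "gguf"
    return "legacy/unknown (magic=%r)" % magic
```

If the check reports a legacy file, the fix is to convert the checkpoint with llama.cpp's conversion scripts, not to rename the extension.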

@cairongquan

I'm on a base-model M1 Pro. Which version of llama_cpp_python should I install?

Successfully merging this pull request may close these issues.

M2 macbook air Internal Error