
Updating to latest llama.cpp #2

Open
AshtonOhms wants to merge 9 commits into ddellacosta:main from AshtonOhms:wip

Conversation


@AshtonOhms AshtonOhms commented Dec 22, 2023

Hi! I updated the bindings to be compatible with the current head of llama.cpp's master branch, enough to get the Main.hs example working again.

Summary:

  • A few functions no longer exist in llama.h, namely the *FromModel functions
  • The old sampleRepetitionPenalty and sampleFrequencyAndPresencePenalties functions have been replaced by a single sampleRepetitionPenalties function
  • Added a new function in wrapper.c for new_context_with_model
  • llama.h's token_to_str has been replaced with token_to_piece; a corresponding wrapper has been added to the Main.hs example
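For context on the last point: llama.cpp's token_to_piece follows the library's usual buffer-negotiation convention, where the call returns the number of bytes written, or a negative count indicating the required size when the supplied buffer is too small, and the caller retries. A minimal sketch of that retry logic, using a pure stand-in (mockTokenToPiece) instead of the real FFI call — the names here are hypothetical and not the bindings' actual API:

```haskell
-- | Stand-in for the C call: "writes" the piece for a token into a buffer
-- of the given capacity, returning the byte count on success, or the
-- negated required size when the capacity is insufficient.
mockTokenToPiece :: Int -> Int -> (Int, String)
mockTokenToPiece token cap
  | length piece <= cap = (length piece, piece)
  | otherwise           = (negate (length piece), "")
  where piece = "token-" ++ show token

-- | Convert a token to its text piece: try a small buffer first, and if
-- the call reports a larger required size, retry once with that size.
tokenToPiece :: Int -> String
tokenToPiece token =
  case mockTokenToPiece token 8 of
    (n, s) | n >= 0    -> s
           | otherwise -> snd (mockTokenToPiece token (negate n))

main :: IO ()
main = do
  putStrLn (tokenToPiece 3)     -- fits in the initial buffer
  putStrLn (tokenToPiece 12345) -- exercises the retry path
```

The real wrapper would do the same dance through the FFI with an allocated CString buffer; the two-call shape is the part that matters.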

Tested by running the Mixtral MoE model, and it seems to be working properly.
stack run examples -- -m ../mixtral/mixtral-8x7b-instruct-v0.1.Q2_K.gguf -p "Hello, my name is " -t 12

I probably should do a version bump of some kind, but I'm not sure - let me know your thoughts. Cheers!

@AshtonOhms AshtonOhms changed the title Updating to Updating to latest llama.cpp Dec 22, 2023