This is a standalone configuration type; its prefix from the configuration root is langchain4j.jlama.language-model.
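
Individual options are set by appending a key from the table below to this prefix. As a minimal sketch, a properties-style configuration file could contain the following (the values are purely illustrative):

```properties
langchain4j.jlama.language-model.enabled=true
langchain4j.jlama.language-model.max-tokens=512
langchain4j.jlama.language-model.temperature=0.7
langchain4j.jlama.language-model.model-cache-path=/tmp/jlama-cache
```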

Configuration options

Table 1. Optional configuration options
| key | type | default value | description |
|-----|------|---------------|-------------|
| auth-token | string |  | Generated from dev.langchain4j.model.jlama.JlamaLanguageModel.JlamaLanguageModelBuilder.authToken(java.lang.String) |
| enabled | boolean |  | If set to false (default), JlamaLanguageModel will not be available even if configured. |
| max-tokens | int |  | Generated from dev.langchain4j.model.jlama.JlamaLanguageModel.JlamaLanguageModelBuilder.maxTokens(java.lang.Integer) |
| model-cache-path | Path |  | Generated from dev.langchain4j.model.jlama.JlamaLanguageModel.JlamaLanguageModelBuilder.modelCachePath(java.nio.file.Path) |
| quantize-model-at-runtime | boolean |  | Generated from dev.langchain4j.model.jlama.JlamaLanguageModel.JlamaLanguageModelBuilder.quantizeModelAtRuntime(java.lang.Boolean) |
| temperature | float |  | Generated from dev.langchain4j.model.jlama.JlamaLanguageModel.JlamaLanguageModelBuilder.temperature(java.lang.Float) |
| thread-count | int |  | Generated from dev.langchain4j.model.jlama.JlamaLanguageModel.JlamaLanguageModelBuilder.threadCount(java.lang.Integer) |
| working-directory | Path |  | Generated from dev.langchain4j.model.jlama.JlamaLanguageModel.JlamaLanguageModelBuilder.workingDirectory(java.nio.file.Path) |
| working-quantized-type | DType (BOOL, U8, I8, I16, U16, F16, BF16, I32, U32, F32, F64, I64, U64, Q4, Q5) |  | Generated from dev.langchain4j.model.jlama.JlamaLanguageModel.JlamaLanguageModelBuilder.workingQuantizedType(com.github.tjake.jlama.safetensors.DType) |
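
Each configuration key maps onto the builder method referenced in its description. The following is a minimal programmatic sketch of the same options; it assumes the standard builder() factory and a modelName(...) setter, neither of which appears in the table above, and the model id and paths are hypothetical:

```java
import java.nio.file.Path;

import com.github.tjake.jlama.safetensors.DType;
import dev.langchain4j.model.jlama.JlamaLanguageModel;

public class JlamaLanguageModelConfigExample {

    public static void main(String[] args) {
        // Programmatic counterpart of the configuration keys listed above.
        // builder() and modelName(...) are assumed; the other setters correspond
        // to the JlamaLanguageModelBuilder methods cited in the description column.
        JlamaLanguageModel model = JlamaLanguageModel.builder()
                .modelName("tjake/TinyLlama-1.1B-Chat-v1.0-Jlama-Q4") // hypothetical model id
                .authToken(System.getenv("HF_TOKEN"))                 // auth-token
                .maxTokens(512)                                       // max-tokens
                .modelCachePath(Path.of("/tmp/jlama-cache"))          // model-cache-path
                .quantizeModelAtRuntime(true)                         // quantize-model-at-runtime
                .temperature(0.7f)                                    // temperature
                .threadCount(8)                                       // thread-count
                .workingDirectory(Path.of("/tmp/jlama-work"))         // working-directory
                .workingQuantizedType(DType.I8)                       // working-quantized-type
                .build();

        // The model can then be used for text completion; the call is omitted here.
    }
}
```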