This is a standalone configuration type; its prefix from the configuration root is langchain4j.jlama.chat-model.

Configuration options

Table 1. Optional configuration options

| key | type | default value | description |
| --- | --- | --- | --- |
| auth-token | string | | Generated from dev.langchain4j.model.jlama.JlamaChatModel.JlamaChatModelBuilder.authToken(java.lang.String) |
| enabled | boolean | | If set to false (the default), JlamaChatModel will not be available even if configured. |
| max-tokens | int | | Generated from dev.langchain4j.model.jlama.JlamaChatModel.JlamaChatModelBuilder.maxTokens(java.lang.Integer) |
| model-cache-path | Path | | Generated from dev.langchain4j.model.jlama.JlamaChatModel.JlamaChatModelBuilder.modelCachePath(java.nio.file.Path) |
| quantize-model-at-runtime | boolean | | Generated from dev.langchain4j.model.jlama.JlamaChatModel.JlamaChatModelBuilder.quantizeModelAtRuntime(java.lang.Boolean) |
| temperature | float | | Generated from dev.langchain4j.model.jlama.JlamaChatModel.JlamaChatModelBuilder.temperature(java.lang.Float) |
| thread-count | int | | Generated from dev.langchain4j.model.jlama.JlamaChatModel.JlamaChatModelBuilder.threadCount(java.lang.Integer) |
| working-directory | Path | | Generated from dev.langchain4j.model.jlama.JlamaChatModel.JlamaChatModelBuilder.workingDirectory(java.nio.file.Path) |
| working-quantized-type | DType (BOOL, U8, I8, I16, U16, F16, BF16, I32, U32, F32, F64, I64, U64, Q4, Q5) | | Generated from dev.langchain4j.model.jlama.JlamaChatModel.JlamaChatModelBuilder.workingQuantizedType(com.github.tjake.jlama.safetensors.DType) |
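
As an illustration, each option above is set by appending its key to the langchain4j.jlama.chat-model prefix. A minimal sketch in properties format, using only keys from the table; the values shown are illustrative choices, not documented defaults:

```properties
# The model is unavailable unless explicitly enabled.
langchain4j.jlama.chat-model.enabled=true
# Sampling temperature and response length cap (illustrative values).
langchain4j.jlama.chat-model.temperature=0.7
langchain4j.jlama.chat-model.max-tokens=512
# Where downloaded model files are cached.
langchain4j.jlama.chat-model.model-cache-path=/var/cache/jlama
# Number of inference threads.
langchain4j.jlama.chat-model.thread-count=4
# Quantize the model at load time and use Q4 as the working quantized type.
langchain4j.jlama.chat-model.quantize-model-at-runtime=true
langchain4j.jlama.chat-model.working-quantized-type=Q4
```

Each key maps to the JlamaChatModelBuilder method named in its description column, so the configured values are applied to the underlying JlamaChatModel builder at startup.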