A Backend is an LLM provider such as OpenAI or Anthropic. Enochian provides integrations with the large providers as well as SGLang for local models.

SGLBackend

SGLang is a fast local LLM inference engine. Enochian is also heavily inspired by their frontend language!

setModel

Parameters
  • url: string
Returns
  • Promise<void>
Setting the model for SGLang is asynchronous because Enochian must send a request to the server URL to fetch the name of the model hosted there.

Example

const s = await new ProgramState().fromSGL('http://localhost:30000');
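Under the hood, the server reports which model it is serving and Enochian records that name. The helper below is a hypothetical sketch of the name-extraction step only, assuming the server's info response contains a `model_path` field; `ModelInfo` and `modelNameFrom` are illustrative names, not part of Enochian's API.

```typescript
// Hypothetical shape of the model-info JSON returned by a local server.
type ModelInfo = { model_path: string };

// Derive a short model name from the reported path: the last path
// segment ('meta-llama/Llama-3-8B' -> 'Llama-3-8B').
function modelNameFrom(info: ModelInfo): string {
  return info.model_path.split('/').pop() ?? info.model_path;
}
```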

OpenAIBackend

setModel

Parameters
  • {baseURL?: string, modelName?: ChatModel}
Returns
  • Promise<void>
Types Referenced
  • ChatModel
Set the model and/or baseURL of the OpenAI endpoint.

Example

const s = new ProgramState().fromOpenAI({ modelName: 'gpt-4o-mini' });
It works the same way as OpenAI’s Node SDK: if you don’t pass in an API key, it will use the OPENAI_API_KEY environment variable.
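The fallback described above can be sketched in a few lines. This is an illustrative helper, not Enochian's actual implementation; `resolveApiKey` is a hypothetical name.

```typescript
// An explicitly passed key wins; otherwise fall back to the
// OPENAI_API_KEY environment variable, matching the SDK's behavior.
function resolveApiKey(explicit?: string): string | undefined {
  return explicit ?? process.env.OPENAI_API_KEY;
}
```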