
Commit f88f5fe ("updated")

1 parent 33e2a0f commit f88f5fe

File tree

1 file changed (+20, -8)


docs/api/llm/inference.md (+20, -8)
```diff
@@ -7,11 +7,12 @@ cbbaseinfo:
 cbparameters:
   parameters:
     - name: message
-      typeName: string
+      typeName: object
       description: The input message or prompt to be sent to the LLM.
     - name: llmrole
      typeName: string
-      description: The role of the LLM to determine which model to use.
+      description: The role of the LLM to determine which model to use
+
   returns:
     signatureTypeName: Promise
     description: A promise that resolves with the LLM's response.
```
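After this hunk, the documented first parameter is an object payload rather than a string, and the role string is optional. A minimal sketch of the new call shape (the `LLMMessage` typedef and `inferenceStub` helper below are illustrative only, not part of the codebolt API; the payload shape is assumed to mirror the OpenAI-style chat format used in this commit's updated example):

```javascript
/**
 * Illustrative shape of the new `message` parameter (typeName: object).
 * @typedef {Object} LLMMessage
 * @property {{role: string, content: string}[]} messages
 * @property {Object[]} tools
 * @property {string} [tool_choice] - e.g. "auto" when tools are supplied
 */

// Hypothetical stand-in that mimics the documented return type:
// a Promise that resolves with the LLM's response.
function inferenceStub(message, llmRole /* optional per this commit */) {
  return Promise.resolve({
    role: "assistant",
    content: `echo: ${message.messages.at(-1).content}`,
    llmRole: llmRole ?? "default",
  });
}
```

Real calls go through `codebolt.llm.inference`; the stub only makes the parameter and return types concrete.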
````diff
@@ -28,22 +29,33 @@ data:
 
 ### Example
 
-```js
+```bash
+js
 
-const question = "Write an API to get all users from the User Table.";
-const llmRole = "assistant";
+let message={
+    messages:[{
+        "role":"system",
+        "content":"you are developer agent expert in writing code"
+    },{
+        "role":"user",
+        "content":"crete node js project"
+    }],
+    tools:[],
+    tool_choice: "auto", //if useing any tools
+}
 
-const response = codebolt.llm.inference(question, llmRole);
-console.log(response);
 
+const response = codebolt.llm.inference(message);
+console.log(response);
 
 ```
 
 
+
 ### Explaination
 
 The codebolt.llm.inference function allows you to send an inference request to a Large Language Model (LLM) and retrieves the model's response. It has two parameter:
 
 question (string): This parameter represents the input question or prompt you want to send to the LLM for inference.
 
-llmRole (string): This parameter specifies the role or type of Large Language Model (LLM) you want to use for inference. The role determines which variant of the LLM is selected for processing the input question and generating the response.
+llmRole (string): This parameter specifies the role or type of Large Language Model (LLM) you want to use for inference. The role determines which variant of the LLM is selected for processing the input question and generating the response. LLMs role can be optional.
````
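Outside the diff, the committed example can be sketched in corrected form. The `buildInferenceMessage` helper is hypothetical (not part of the codebolt API); it only assembles the object payload that this commit documents as the first argument, and the prompt strings fix the snippet's typos ("crete", "useing"):

```javascript
// Hypothetical helper: builds the OpenAI-style payload that
// codebolt.llm.inference now takes as its first argument.
function buildInferenceMessage(systemPrompt, userPrompt, tools = []) {
  return {
    messages: [
      { role: "system", content: systemPrompt },
      { role: "user", content: userPrompt },
    ],
    tools,
    // tool_choice is only meaningful when tools are supplied
    ...(tools.length > 0 ? { tool_choice: "auto" } : {}),
  };
}

const message = buildInferenceMessage(
  "you are a developer agent, expert in writing code",
  "create a Node.js project"
);
// In an agent runtime the payload would then be sent with, e.g.:
//   const response = await codebolt.llm.inference(message); // llmRole is optional
```

Spreading `tool_choice` in conditionally keeps the payload minimal when no tools are passed, matching the committed comment that it only applies "if using any tools".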
