Commit 8cb7ae9

oss: updates to google integrations (#1433)
TODO:

- [ ] add `Modality` to refs so that people know valid options for `response_modalities` param
1 parent aa7a222 commit 8cb7ae9

3 files changed: +6 −6 lines changed

src/oss/javascript/integrations/chat/google_generative_ai.mdx

Lines changed: 4 additions & 4 deletions
@@ -159,7 +159,7 @@ console.log(aiMsg.content)
 J'adore programmer.
 ```

-## Safety Settings
+## Safety settings

 Gemini models have default safety settings that can be overridden. If you are receiving lots of "Safety Warnings" from your models, you can try tweaking the safety_settings attribute of the model. For example, to turn off safety blocking for dangerous content, you can import enums from the `@google/generative-ai` package, then construct your LLM as follows:
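For reference, the construction that context paragraph describes looks roughly like the following sketch (not part of the commit; the model name and threshold choice are illustrative):

```typescript
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { HarmBlockThreshold, HarmCategory } from "@google/generative-ai";

// Override the default safety settings at construction time.
// "gemini-1.5-pro" is a placeholder; use whichever Gemini model you target.
const llm = new ChatGoogleGenerativeAI({
  model: "gemini-1.5-pro",
  safetySettings: [
    {
      category: HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
      threshold: HarmBlockThreshold.BLOCK_NONE,
    },
  ],
});
```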
@@ -441,7 +441,7 @@ console.dir(searchRetrievalResult.response_metadata?.groundingMetadata, { depth:
 }
 ```

-### Code Execution
+### Code execution

 Google Generative AI also supports code execution. Using the built in `CodeExecutionTool`, you can make the model generate code, execute it, and use the results in a final completion:
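A rough sketch of the flow that paragraph describes, assuming the `bindTools([{ codeExecution: {} }])` pattern from the LangChain JS integration (not part of the commit; the model name and prompt are placeholders):

```typescript
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";

// Bind the built-in code execution tool so the model can run the code it writes.
const model = new ChatGoogleGenerativeAI({
  model: "gemini-1.5-pro",
}).bindTools([{ codeExecution: {} }]);

// The model generates code, executes it, and folds the output into its answer.
const result = await model.invoke(
  "Use code execution to find the sum of the first six prime numbers."
);
console.log(result.content);
```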
@@ -540,7 +540,7 @@ The output of the code was:
 Therefore, the answer to your question is 21.
 ```

-## Context Caching
+## Context caching

 Context caching allows you to pass some content to the model once, cache the input tokens, and then refer to the cached tokens for subsequent requests to reduce cost. You can create a `CachedContent` object using `GoogleAICacheManager` class and then pass the `CachedContent` object to your `ChatGoogleGenerativeAIModel` with `enableCachedContent()` method.
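The caching flow that paragraph outlines looks roughly like the sketch below (not part of the commit; the `create()` fields are assumptions based on the `@google/generative-ai` server SDK, and the model version, display name, and TTL are placeholders):

```typescript
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { GoogleAICacheManager } from "@google/generative-ai/server";

// A long input to cache; it must meet the minimum input token count (32,768).
const longDocument = "...";

// Create the CachedContent once; later requests reuse the cached input tokens.
const cacheManager = new GoogleAICacheManager(process.env.GOOGLE_API_KEY ?? "");
const cachedContent = await cacheManager.create({
  model: "models/gemini-1.5-flash-001", // caching requires an explicit model version
  displayName: "docs-cache", // hypothetical display name
  contents: [{ role: "user", parts: [{ text: longDocument }] }],
  ttlSeconds: 300,
});

// Attach the cached content to the chat model for subsequent requests.
const model = new ChatGoogleGenerativeAI({ model: "gemini-1.5-flash-001" });
model.enableCachedContent(cachedContent);

await model.invoke("Summarize the cached document.");
```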
@@ -596,7 +596,7 @@ await model.invoke("Summarize the video");

 - The minimum input token count for context caching is 32,768, and the maximum is the same as the maximum for the given model.

-## Gemini Prompting FAQs
+## Gemini prompting FAQs

 As of the time this doc was written (2023/12/12), Gemini has some restrictions on the types and structure of prompts it accepts. Specifically:
src/oss/python/integrations/chat/google_generative_ai.mdx

Lines changed: 1 addition & 1 deletion
@@ -473,7 +473,7 @@ response.content_blocks
 {'type': 'text', 'text': 'The calculation of 3 to the power of 3 is 27.'}]
 ```

-## Thinking Support
+## Thinking support

 See the [Gemini API docs](https://ai.google.dev/gemini-api/docs/thinking) for more info.
src/oss/python/integrations/llms/google_ai.mdx

Lines changed: 1 addition & 1 deletion
@@ -151,7 +151,7 @@ For in their embrace, we find a peace profound,
 A frozen world, with magic all around.
 ```

-### Safety Settings
+### Safety settings

 Gemini models have default safety settings that can be overridden. If you are receiving lots of "Safety Warnings" from your models, you can try tweaking the `safety_settings` attribute of the model. For example, to turn off safety blocking for dangerous content, you can construct your LLM as follows:
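For reference, the Python construction that context paragraph describes looks roughly like this sketch (not part of the commit; the model name is a placeholder, and the example assumes `HarmCategory` and `HarmBlockThreshold` are re-exported by `langchain_google_genai`):

```python
from langchain_google_genai import (
    GoogleGenerativeAI,
    HarmBlockThreshold,
    HarmCategory,
)

# Turn off safety blocking for the dangerous-content category.
# "gemini-1.5-pro" is a placeholder; use whichever Gemini model you target.
llm = GoogleGenerativeAI(
    model="gemini-1.5-pro",
    safety_settings={
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
    },
)
```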
0 commit comments