How can I avoid rate limits when using Deep Crawl? #1581
Unanswered
nguyenthengocdev asked this question in Forums - Q&A
Replies: 1 comment
I advise selecting only the elements you actually need to scrape by using the css_selector attribute of CrawlerRunConfig. This limits how much page content is passed on to the LLM, so you consume fewer tokens per page and hit rate limits less often; see the sketch below. If you still need more headroom, consider Google Gemini, since it supports a larger context window than many other LLMs. Resource: https://brightdata.com/blog/web-data/crawl4ai-and-deepseek-web-scraping (shows css_selector in use)
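A minimal sketch of that idea, assuming the documented crawl4ai API (AsyncWebCrawler, CrawlerRunConfig, and its css_selector parameter); the URL and selector below are placeholders, not values from this thread:

```python
import asyncio

from crawl4ai import AsyncWebCrawler, CrawlerRunConfig


async def main() -> None:
    # Restrict extraction to the region of the page you actually need.
    # The selector is a placeholder; adjust it to your target site.
    config = CrawlerRunConfig(css_selector="article.main-content")

    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(
            url="https://example.com",  # placeholder URL
            config=config,
        )
        # Only the selected region is converted to markdown, so fewer
        # tokens are sent to the LLM and rate limits are hit less often.
        print(result.markdown)


if __name__ == "__main__":
    asyncio.run(main())
```

The same CrawlerRunConfig can be reused across a deep crawl, so every visited page is trimmed before it reaches the extraction step.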