
OpenAI launches web crawler GPTBot, and instructions on how to block it

2023-08-08 12:25

OpenAI has launched a web crawler to improve artificial intelligence models like GPT-4.

Called GPTBot, the system combs the internet to gather data for training and improving AI models. According to a blog post by OpenAI, data collected by GPTBot has the potential to make future models more accurate and safer.

"Web pages crawled with the GPTBot user agent may potentially be used to improve future models and are filtered to remove sources that require paywall access, are known to gather personally identifiable information (PII), or have text that violates our policies," reads the post.

Websites can opt out, however, and block GPTBot from their pages either partially or entirely. OpenAI said that website operators can disallow the crawler by blocking its published IP address range or through the site's robots.txt file.


OpenAI has previously landed in hot water over how it collects data, facing accusations of copyright infringement and privacy breaches. This past June, the company was sued for allegedly "stealing" personal data to train ChatGPT.

Its opt-out features were only recently implemented; options like disabling chat history give users more control over what personal data can be accessed.

GPT-3.5 and GPT-4, the models behind ChatGPT, were trained on online data and text dating up to September 2021. There is currently no way to remove content from that dataset.

How to prevent GPTBot from using your website's content

According to OpenAI, you can disallow GPTBot by adding it to your site's robots.txt file, which is essentially a text file that tells web crawlers what they can and cannot access on a website.
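Per OpenAI's blog post, the robots.txt entry that blocks GPTBot from an entire site looks like this:

    User-agent: GPTBot
    Disallow: /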


You can also customize which parts of your site the crawler can use, allowing some directories and disallowing others.
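OpenAI's post shows the following example of a customized entry, where the directory names are placeholders to be replaced with your own paths:

    User-agent: GPTBot
    Allow: /directory-1/
    Disallow: /directory-2/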
