Potential threat of ChatGPT (Artificial intelligence) – 19th JAN 2023

Preface: OpenAI was founded by Elon Musk, Sam Altman, Ilya Sutskever, Greg Brockman, Wojciech Zaremba and John Schulman in Nov 2015. ChatGPT is a chatbot launched by OpenAI in November 2022. It is built on top of OpenAI’s GPT-3 family of large language models, and is fine-tuned with both supervised and reinforcement learning techniques.

Background: OpenAI GPT-3 is a machine learning model that can be used to generate predictive text via an API.

In GPT-3’s API, a ‘prompt’ is a parameter that is provided to the API so that it is able to identify the context of the problem to be solved. Depending on how the prompt is written, the returned text will attempt to match the pattern accordingly.

Security Focus: ChatGPT is being abused to build hacking tools, why? Programmed with the help of AI, even script kiddies might be lucky enough to craft malware. Experts say it’s a sinister allusion. What are the design flaws in AI itself under normal circumstances? Yes, there is a known issue with so-called prompt injection attacks. Prompt Injection is a new vulnerability that is affecting some AI/ML models and, in particular, certain types of language models using prompt-based learning. 

Additional details: ChatGPT can also code malicious software that can monitor users’ keyboard strokes and create ransomware. For your information, ChatGPT has been developed by OpenAI as an interface for its LLM (Large Language Model).

Moreover, scammers can also use ChatGPT to build bots and sites to trick users into sharing their information and launch highly targeted social engineering scams and phishing campaigns.

For details about Prompt injection attacks against GPT-3, please refer to this link – https://simonwillison.net/2022/Sep/12/prompt-injection/

Leave a Reply

Your email address will not be published. Required fields are marked *

This site uses Akismet to reduce spam. Learn how your comment data is processed.