Tuesday, February 3, 2026

Risks of hacked Chatbots: Exposure of dangerous information


KABUL: Researchers indicate that certain chatbots, such as ChatGPT, can disclose dangerous and illegal information when jailbroken by cybercriminals.

This includes guidance on hacking, money laundering, and constructing explosive materials.

These chatbots are trained on vast amounts of information from the internet, and while their creators strive to eliminate harmful and incorrect data, these systems can still be circumvented and respond to dangerous inquiries.

A recent study from Ben-Gurion University demonstrated that scientists could use a jailbreaking method to compel chatbots to provide information they are typically forbidden from sharing.

This poses a significant risk, as such information was previously accessible only to criminal organizations.

Some companies have even developed models that lack any safety controls; these so-called “dark models” are sometimes used for illegal activities.

Researchers have recommended that companies take greater care in selecting training data and implement stronger security measures, such as intelligent firewalls and advanced learning systems.

Moreover, it has been emphasized that a simple, user-friendly interface is not sufficient; companies should establish security teams to test these systems and mitigate risks.

OpenAI has stated that its new model, called o1, is more resistant to jailbreaking and can better identify what constitutes dangerous or unauthorized content.
