The CEO of OpenAI, Sam Altman, expressed concern about the potential dangers of advancing artificial intelligence in an interview with ABC News, which was also reported by The Guardian. Altman warned that although AI can be a valuable tool for humanity, care must be taken regarding potential malicious uses, such as large-scale disinformation or offensive cyber attacks.
"I'm particularly concerned that these models could be used for large-scale disinformation. Now that they're getting better at writing computer code, they could be used for offensive cyber attacks," he said in the interview.
Altman emphasized that OpenAI is committed to developing artificial intelligence in a safe and responsible manner, but noted that other individuals and companies may not impose the same security limits as they do. "I think society has a limited amount of time to figure out how to react to that, how to fix it, how to handle it," he said.
Altman also addressed the fear that machines may replace humans in the future. While he assured that, for now, artificial intelligence remains largely under human control, he reiterated that society must act quickly to work out how to regulate and manage this technology.
In the interview, Altman also discussed the "hallucination problem" observed in ChatGPT, the AI-driven language model created by OpenAI. The model can confidently present completely invented things as if they were facts, and Altman cautioned that it is important to think of such AI models as reasoning engines, not databases of facts.
In recent days, both Elon Musk and Bill Gates have voiced their concerns about the dangers of artificial intelligence. Musk has said that AI is "more dangerous than a nuclear weapon," while Gates has warned of risks such as the threat posed by humans armed with AI and the possibility of AI getting out of control. In summary, there seems to be a growing consensus on the need to address the potential risks of artificial intelligence and to develop it in a safe and responsible manner. The question is: why did they not say this before?