Prognosis of exploration on Chat GPT with artificial intelligence ethics - (2023)
Acessos: 41
N. Gowri Vidhya, D. Devi, Nithya A., T. Manju
Volume: 2 - Issue: 9
Resumo.
Natural language processing innovations in the past few decades have made it feasible to synthesis and comprehend coherent text in a variety of ways, turning theoretical techniques into practical implementations. Both report summarizing software and sectors like content writers have been significantly impacted by the extensive Language-model. A huge language model, however, could show evidence of social prejudice, giving moral as well as environmental hazards from negligence, according to observations. Therefore, it is necessary to develop comprehensive guidelines for responsible LLM (Large Language Models). Despite the fact that numerous empirical investigations show that sophisticated large language models has very few ethical difficulties, there isn't a thorough investigation and consumers study of the legality of present large language model use. We use a qualitative study method on OpenAI's ChatGPT3 to solution-focus the real-world ethical risks in current large language models in order to further guide ongoing efforts on responsibly constructing ethical large language models. We carefully review ChatGPT3 from the four perspectives of bias and robustness. According to our stated opinions, we objectively benchmark ChatGPT3 on a number of sample datasets. In this work, it was found that a substantial fraction of principled problems are not solved by the current benchmarks; therefore new case examples were provided to support this. Additionally discussed were the importance of the findings regarding ChatGPT3's AI ethics, potential problems in the future, and helpful design considerations for big language models. This study may provide some guidance for future investigations into and mitigation of the ethical risks offered by technology in large Language Models applications.
Idioma: English
Registro: 2023-10-13 09:29:11
https://brazilianjournalofscience.com.br/revista/article/view/372