The integration of Artificial Intelligence into computer systems has led to the development of semi-autonomous and fully autonomous systems. Autonomy here refers to a system’s ability to make decisions and act on the information available to it. With the emergence of generative Artificial Intelligence (GAI) tools and technologies, this integration has become more pronounced, propelled largely by the desire of enterprises of all sizes to automate their processes and thereby reduce production costs. Despite these opportunities, however, concerns remain about the quality of the data these tools use in decision-making, the lack of human supervision for validation and verification, privacy and ethics, and the generation of wrong or inaccurate information (hallucination), among others. In light of these opportunities and threats, empirically grounded best practices are the focus of this themed article collection.
The goal of this Research Topic is to gather empirical evidence on how best to integrate artificial intelligence tools while minimising risks to users. This has become imperative as the dynamics introduced by these tools are having far-reaching impacts and changing nearly every sector of human endeavour. Their integration and application have produced a broad spectrum of data-rich devices, with a growing share of that data being synthetic, i.e. generated by AI-based systems rather than collected from the real world. This themed article collection also aims to highlight industry best practices for the use and integration of AI tools across sectors, and to explore potential and emerging application areas together with their advantages and disadvantages. Ultimately, the Topic Editors seek articles that empirically document the cybersecurity challenges of integrating AI tools across human endeavours and that propose ways to navigate these challenges while promoting the opportunities these tools offer in enhancing life for humanity and society in general.
Topics covered in this collection include:
• Cybersecurity implications of integrating AI tools across sectors such as manufacturing, education, human resources, and administration
• Security considerations in the design of AI tools for computer systems
• Opportunities and challenges in integrating AI into IoT devices
• Addressing privacy, ethics, and cybersecurity concerns related to AI tools in industry
• Trust management in processing IoT data
• Explainable AI and Machine Learning Models and their applications in managing IoT privacy
• Emerging trends in generative AI and LLM applications in smart systems
Keywords:
Data privacy, hallucination, ethics, artificial intelligence, machine learning models
Important Note:
All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.