AI and science - summary of a survey of biomedical scientists (July-August 2023).
96% of respondents said that it is useful for biomedical scientists to discuss the control of AI systems.
According to the respondents, the advantages of promoting the development of transparent AI systems with the participation of the biomedical community are:
Improved Accuracy: Transparent AI systems can lead to more accurate outputs, which can benefit biologists and biomedical scientists.
Bias Mitigation: Transparency can help identify and address biases in AI systems, making them fairer and more equitable.
Trust and Trustworthiness: Increased transparency can enhance trust in AI systems and their results, both within the scientific community and among the general public.
Control and Oversight: Involvement of the biomedical community can provide better control and oversight of AI development, reducing the risk of misuse or unethical practices.
Data Validation: AI systems can benefit from validated data provided by scientists not solely devoted to industry interests.
Collaboration: Collaboration between AI developers and the biomedical community can lead to a better mutual understanding of goals and expectations, leveling the playing field for everyone involved.
However, there are potential disadvantages to consider:
Complexity and Time: Developing more transparent AI systems can be complex and time-consuming, potentially slowing down the development process.
Regulation: Excessive regulation can hinder innovation and development in the field.
Data Privacy: Ensuring data privacy and security can be challenging when working with AI systems.
Bias in Control: If not properly managed, control by the biomedical community could introduce its own biases.
Limiting Innovation: Striking a balance between transparency and innovation is crucial, as excessive transparency may limit the development of innovative but less explainable AI approaches.
In summary, promoting transparency in AI development, with active participation from the biomedical community, offers several benefits but also poses challenges that need to be addressed for responsible and effective use of AI in the biomedical field.
The majority of respondents suggest that biomedical scientists should preferentially interact with AI systems that are transparent and allow biomedical scientists to participate in their control (55% of respondents answered YES or equivalent, 4% answered NO, 29% made a more nuanced comment, and 12% did not answer this question). Features mentioned as desirable were:
Transparency: Systems that are transparent and open about their functioning and decision-making processes are preferred.
Ethical Control: Systems that are well-trained and well-behaved, with ethical controls in place.
Involvement of Biomedical Scientists: Systems that involve biomedical scientists in their regulation, design, or feedback processes are seen as advantageous.
Reliability: Systems that have proven their accuracy and trustworthiness.
Certification: Systems that adhere to certain certification standards, such as those guaranteeing medical privacy principles.
However, some respondents express uncertainty, and there is also recognition that the practicality of preferentially interacting with such AI systems depends on their availability and performance.
Overall, there is a desire for AI systems in the biomedical field to be reliable, transparent, and ethically sound, with involvement from the biomedical community in their development and regulation.