Survey of biomedical scientists (July-August 2023) - Individual replies
see also www.cellcomm.org/survey-planning
Q3. Should biomedical scientists interact preferentially with AI systems that have these features (i.e., greater transparency and participation in their control)?

Yes, preferable to support well-trained and well-behaved systems.

It depends.

Maybe, if that's what it takes to get AI systems that meet our needs.

yes

My first instinct is yes, but I would want to know more about what this would actually look like in real life.

Yes

For exposome studies

Yes.

yes

Yes, AI is better

absolutely!

Yes

Yes, but under controls.

Not sure what "features" you mean.

yes

In some instances it would be preferable, once these features prove reliable. But for many systems there is still a long way to go to reach that reliability (think of Tesla's attempt to develop autonomous driving software).

Yes, but one must learn it carefully.

I would imagine yes, if the features are effectively created.

Yes -- to promote the use of systems that are transparent.

I would think so - otherwise they may waste effort.

Yes, but there is no easy way to know, as systems are not rated (or evaluated?) on this currently.

yes

yes

Yes

yes

Tough question... tentatively yes. Let's keep tabs on the industry-wide talks about regulations initiated by President Biden.

yes

yes

There should be national and international standards and procedures, and a system of ethics that is generally accepted. Biomedical scientists should contribute to the development of these standards and procedures.

No, such AI systems will be limited.

No

I don't think that will work ultimately.

Yes

Yes

Not necessarily, but they should have the choice of systems with and without these features.

Probably yes, but it depends upon the issues involved. Ethicists also might be necessary.

That's a worthy goal, but scientists are likely to use whatever systems they find perform the best. A "transparent", "controlled" system likely would not be competitive with unconstrained AI systems.

I think AI can be used for good, but it's important to have processes in place to define and identify unethical use.

Yes, if they can match the performance of other AI systems.

Not necessary

Preferentially, but this does not guarantee these will be the most accurate systems.

Maybe, but assume each scientist can decide for themselves and their own applications of AI.

Yes, only use such systems except in limited instances.

Yes. Absolutely

I think we should preferentially work with systems that include biomedical scientists in their regulation - if that is what the question is asking.

yes

Yes.

not sure

Definitely, and maybe entirely. AI is already providing false references and misleading info -- the time to act is now. Thanks for taking this on.

Yes

yes

Yes

Yes.

As a second layer only - reflexive to a first finding. There should be some guardrails to curate for clearly false conclusions...

Yes

yes

Yes.

Yes

Again, not sure what a transparent AI tool is.

Yes, biomedical scientists should interact preferentially with AI systems that are transparent and where feedback from the community will be listened to and incorporated.

Possibly

Yes, but there should be clear guidelines for their use.

Yes, AI needs rules to be followed to eliminate lying and fabrication.

Ten years down the road, it will be the standard of interaction.

Yes.

Don't know - I believe in 'referenced and validated' information/approaches/methods.

Possibly, only as long as the controls do not put an unreasonable burden on science.

yes

YES!!!

Certainly, but only by individual preference.

Yes, because they're more likely to be useful and unbiased.

Yes

No.

Not answerable at this moment.

yes

maybe

yes

Yes, there should be a certification for AI tools that guarantees respect for fundamental medical privacy principles.

Transparency: yes. Biomedical input on design: no (see rationale above).

The questions that should be asked include: Will scientists be penalized if they do not interact with AI systems? Should scientists be allowed to use AI systems to edit papers and compose grants? Should scientists use AI systems to review papers and grants?

Seems a good idea.

Should be discussed further.

yes

Yes

Yes, if they ever become available.

yes

Yes

Not necessarily. Currently we know nothing about how decisions are made by AI.

Yes

Yes, we need a quality standard system.

yes

yes

Yes, over other AI systems without these features; at the same time, AI systems should not be preferentially used over non-AI systems.

No

Sure, but as a community, competitive groups may not want to participate.

Yes

yes

Yes.

Yes

Ethics is paramount

Which features? Regardless, as a non-expert on this topic, I'm not able to answer this in any kind of informed way.

Not sure what this means. Preferentially over what?

AI systems with transparency, explainability, and bias detection capabilities are highly valuable for biomedical scientists. By interacting with such systems, scientists can make informed decisions, ensure ethical practices, and advance medical research and clinical applications.

Sure.

Yes

absolutely

yes

yes

Depends.

Probably.

Preferably, yes

yes

probably

yes

Yes

There should be a trend in this direction.

NO

Yes

Yes, however, in a transparent manner.

Yes

yes

The latest news is that the AI companies have agreed to label items as products of their software systems. The source and date of the information used should also be labeled.

Yes, or build their own.

Yes

Transparency in acknowledging when it has been used should be encouraged - both for idea generation and for data analysis, and certainly if and when any of the actual text is used in a report. My own experience with a trial test was that it provided an overview summary at the level of an undergrad term paper, but not the in-depth understanding one gets after a few decades working on a topic.

These are the ones that should be promoted for more participation from scientists and clinicians.

Yes.

Yes, if at all possible, but we should not lose sight of the fact that everybody will be using AI, and we should participate in the debate and evaluation of all platforms.

All those systems should be under regulation.

Yes, while fulfilling ethical requirements.

We need to understand why an AI algorithm is yielding any given answer.

Yes

I do not think scientists can claim a right to preferential control.

Yes

Yes.

Yes!

Biomedical scientists should use AI as they would any other research instrument.

Yes

Yes; however, it would be difficult to control - if other AI systems are giving useful, perhaps better, results, then those systems will be preferable to some. The question is how to evaluate the impact of the training sets on the outcome/output.

yes

yes

Yes

yes

no answer

Yes

Yes

yes

Yes

yes

Yes, biomedical scientists should have control over AI methods, how they are trained, and their implementation.

One may not have a choice of which systems one gets to interact with.

Yes

Yes

Yes

Yes

I feel that there is no choice but to take a strong position on scientists' interaction with/control of AI systems that report on human/animal models of disease.

yes

Yes

Presumably, but TBD.

"Should" and "interact" are too vague to answer this question. What kinds of interactions? What are the alternatives?

These are not actionable questions.

Yes, for all the ethical advantages that they hopefully will have.

Ideally, yes. But first scientists need to understand how to recognize such AI systems.

Yes