Survey of biomedical scientists (July-August 2023) - Individual replies
see also www.cellcomm.org/survey-planning
Q3. Should biomedical scientists interact preferentially with AI systems that have these features (greater transparency and participation in their control)?
Yes, it is preferable to support well-trained and well-behaved systems.
It depends.
Maybe, if that's what it takes to get AI systems that meet our needs.
yes
My first instinct is yes, but I would want to know more about what this would actually look like in real life.
Yes
For exposome studies
Yes.
yes
Yes, AI is better
absolutely!
Yes
Yes, but under controls.
Not sure what "features" you mean.
yes
In some instances it would be preferable, once these features proved reliable. But for many systems there is still a long way to go to reach that reliability (think of Tesla's attempt to develop autonomous driving software).
Yes, but one must learn it carefully.
I would imagine yes if the features are effectively created.
Yes -- to promote the use of systems that are transparent.
I would think so - otherwise they may waste effort.
Yes, but there is no easy way to know, as systems are not rated (or evaluated?) on this currently.
yes
yes
Yes
yes
tough question...tentatively yes. Let's keep tabs on the industry-wide talks about regulations initiated by President Biden.
yes
yes
There should be national and international standards and procedures, and a system of ethics that is generally accepted. Biomedical scientists should contribute to the development of these standards and procedures.
No, such AI systems will be limited.
No
I don't think that will work ultimately.
Yes
Yes
Not necessarily, but they should have the choice of systems with and without these features.
Probably yes, but it depends upon the issues involved. Ethicists also might be necessary.
That's a worthy goal, but scientists are likely to use whatever systems they find perform the best. A "transparent", "controlled" system likely would not be competitive with unconstrained AI systems.
I think AI can be used for good, but it's important to have processes in place to define and identify unethical use.
yes if they can match performance of other AI systems
Not necessary
Preferentially, but this does not guarantee these will be the most accurate systems.
maybe, but assume each scientist can decide for themselves and their own applications of AI
Yes, use only such systems, except in limited instances.
Yes. Absolutely
I think we should preferentially work with systems that include biomedical scientists in their regulation - if that is what the question is asking.
yes
Yes.
not sure
Definitely, and maybe entirely. AI is already providing false references and misleading info; the time to act is now. Thanks for taking this on.
Yes
yes
Yes
Yes.
As a second layer only, reflexive to a first finding. There should be some guardrails to screen out clearly false conclusions...
Yes
yes
Yes.
Yes
Again, not sure what a transparent AI tool is.
Yes, biomedical scientists should interact preferentially with AI systems that are transparent and where feedback from the community will be listened to and incorporated.
Possibly
Yes but there should be clear guidelines for their use.
Yes, AI needs to follow rules that eliminate lying and fabrication.
Ten years down the road, it will be the standard mode of interaction.
Yes.
don't know - I believe in 'referenced and validated' information/approaches/methods.
Possibly, only as long as the controls do not put an unreasonable burden on science.
yes
YES!!!
Certainly but only by individual preference
Yes, because they're more likely to be useful and unbiased.
Yes
No.
not answerable at this moment
yes
maybe
yes
Yes, there should be a certification for AI tools that guarantees respect for fundamental medical privacy principles.
Transparency: yes. Biomedical input on design: no (see rationale above)
The questions that should be asked include: Will scientists be penalized if they do not interact with AI systems? Should scientists be allowed to use AI systems to edit papers and compose grants? Should scientists use AI systems to review papers and grants?
Seems a good idea.
Should be discussed further.
yes
Yes
Yes, if they ever become available
yes
Yes
Not necessarily. Currently we know nothing about how decisions are made by AI.
Yes
Yes, we need a quality standard system.
yes
yes
Yes, over other AI systems without these features, while at the same time any AI systems should not be preferentially used over non-AI systems.
No
Sure, but as a community, competitive groups may not want to participate.
Yes
yes
Yes.
Yes
Ethics is paramount
Which features? Regardless, as a non-expert on this topic, I'm not able to answer this in any kind of informed way.
Not sure what this means. Preferentially over what?
AI systems with transparency, explainability, and bias detection capabilities are highly valuable for biomedical scientists. By interacting with such systems, scientists can make informed decisions, ensure ethical practices, and advance medical research and clinical applications.
Sure.
Yes
absolutely
yes
yes
Depends.
Probably.
Preferably, yes
yes
probably
yes
Yes
There should be a trend in this direction.
NO
Yes
Yes, however, in a transparent manner.
Yes
yes
The latest news is that the AI companies have agreed to label items as products of their software systems. The source and date of the information used should also be labeled.
Yes, or build their own.
Yes
Transparency in acknowledging when it has been used should be encouraged, both for idea generation and for data analysis, and certainly if and when any of the actual text is used in a report. My own experience with a trial test was that it provided an overview summary at the level of an undergrad term paper, but not the in-depth understanding one gets after a few decades working on a topic.
These are the ones that should be promoted for more participation from scientists and clinicians.
Yes.
yes, if at all possible, but we should not lose sight that everybody will be using AI and we should participate in the debate and evaluation of all platforms
All those systems should be under regulation
Yes, while fulfilling ethical requirements
We need to understand why an AI algorithm is yielding any given answer.
Yes
I do not think scientists can claim a right to preferential control.
Yes
Yes.
Yes!
Biomedical scientists should use AI as any other research instrument
Yes
Yes; however, it would be difficult to control. If other AI systems give useful, perhaps better, results, then those systems will be preferable to some. The question is how to evaluate the impact of the training sets on the outcome/output.
yes
Yes
yes
no answer
Yes
Yes
yes
Yes
yes
Yes, biomedical scientists should have control over AI methods, how they are trained, and their implementation.
One may not have the choice of which systems one gets to interact with.
Yes
Yes
Yes
Yes
I feel that there is no choice but to take a strong position on scientists' interaction/control of AI systems that report on human/animal models of disease.
yes
Yes
Presumably, but TBD
"Should" and "interact" are too vague to answer this question. What kinds of interactions? What are the alternatives?
These are not actionable questions.
Yes, for all the ethical advantages that they hopefully will have.
Ideally, yes. But first scientists need to understand how to recognize such AI systems.
Yes