A COMMUNITY-WIDE STUDY OF CELL-CELL COMMUNICATION

Survey Planning
Background information about the AI survey and initial results
In 2020, some of us organized a survey of all NIH grantees about the COVID-19 response. We received more than 4,000 replies, and Nature wrote a piece about it:
https://www.nature.com/articles/d41586-020-01154-6
This is an example of how community-wide surveys can have positive effects in biomedicine.
The next survey promotes a discussion about the control of Artificial Intelligence (AI) by the biomedical community. This topic is important for the study of cell-cell communication, but it is clearly also of broader relevance for biology and medicine.
​
Background information
​
Can we really expect biomedical scientists to act as a community?
There are several reasons why this is a realistic prospect:
- We show historical sources and an analysis describing times when scientists in molecular biology were part of a community freely exchanging ideas. This later changed, in part due to increased competitive concerns. In the case of the topic of this survey, however, there would be no individual advantage in secrecy.
- Another change compared to those earlier times is the increased size of the biomedical community. There is, however, an example of a scientific field, particle physics, that has developed a mechanism for community-wide discussion of ideas when faced with problems that require large, concerted efforts.
- Global collaborations in biology are emerging when the problem to be addressed requires them. One of these is the Human Cell Atlas. You can read an interview with Aviv Regev and Sarah Teichmann, the leaders of this initiative, which now involves more than 3,000 scientists from 95 different countries. Another example is the response to the COVID-19 pandemic described by Alessandro Sette.
- The development of Artificial Intelligence challenges human understanding. Historians and social scientists have found that external threats often make groups more cohesive, strengthening group identity, in this case human identity. We plan to interview historians about this topic; the first of these interviews was with Carlo Ginzburg.
Why would more transparency be helpful?
We cannot always explain the reasons on which AI statements are based. As discussed for medical applications by Ghassemi et al (1), this is in part due to the nature of the algorithms, and in some cases we can mainly expect validation rather than explanation. Obstacles to understanding, however, are also due to incomplete disclosure by developers of the training data, the details of the methods, and the validation steps. Burnell et al (2) show how published reports of AI system performance often do not provide sufficient information for a complete evaluation. In a recent article (3), Melanie Mitchell stated that:
"To scientifically evaluate claims of humanlike and even superhuman machine intelligence, we need more transparency on the ways these models are trained, and better experimental methods and benchmarks. Transparency will rely on the development of open-source (rather than closed, commercial) AI models. "
How does interacting with humans contribute to the training of AI systems?
An example is "reinforcement learning from human feedback" (RLHF). This method has been cited as one of the main factors in the success of ChatGPT (4). It was developed by scientists at OpenAI with the aim of training AI systems "to do what a given set of humans want them to do" (5). Similar approaches are likely to be used in more advanced systems (6). A minimal sketch of the idea follows.
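The sketch below is purely illustrative and makes simplifying assumptions that are not in the cited papers: responses are reduced to small numeric feature vectors, the human annotator is simulated, and both the reward model and the policy are toy linear models rather than large neural networks. It shows the two core steps of RLHF: fitting a reward model to pairwise human preferences (a Bradley-Terry style loss, as in reference 5), then steering a policy toward responses that the reward model scores highly.

```python
# Minimal, hypothetical sketch of the RLHF loop. Everything here is a toy
# stand-in for the large neural networks used in practice.
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: each candidate "response" is a feature vector; the simulated
# human prefers responses whose hidden quality score is higher.
DIM = 8
hidden_human_pref = rng.normal(size=DIM)   # what the annotator actually wants
reward_weights = np.zeros(DIM)             # parameters of the learned reward model

def human_prefers(a, b):
    """Simulated annotator: returns (winner, loser) by hidden score."""
    return (a, b) if a @ hidden_human_pref >= b @ hidden_human_pref else (b, a)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Step 1: fit the reward model on pairwise human comparisons.
# Bradley-Terry model: P(winner preferred) = sigmoid(r(winner) - r(loser)).
for _ in range(2000):
    a, b = rng.normal(size=(2, DIM))
    winner, loser = human_prefers(a, b)
    p = sigmoid((winner - loser) @ reward_weights)
    # gradient ascent on the log-likelihood of the observed preference
    reward_weights += 0.05 * (1.0 - p) * (winner - loser)

# Step 2: steer the "policy" with the reward model: sample candidate
# responses and move toward the one the reward model scores highest
# (a crude stand-in for the policy-gradient step used in practice).
policy_mean = np.zeros(DIM)
for _ in range(200):
    candidates = policy_mean + 0.5 * rng.normal(size=(16, DIM))
    best = candidates[np.argmax(candidates @ reward_weights)]
    policy_mean += 0.1 * (best - policy_mean)

# If the reward model captured the annotator's preferences, the policy now
# produces responses aligned with them (cosine similarity close to 1).
cos = policy_mean @ hidden_human_pref / (
    np.linalg.norm(policy_mean) * np.linalg.norm(hidden_human_pref))
print(f"alignment between learned policy and human preference: {cos:.2f}")
```

In real systems the policy update is a reinforcement-learning step on a language model (e.g., PPO in reference 5), but the logic is the same: human choices train the reward model, and the reward model in turn shapes what the system produces.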
Who should control AI systems?
Different parts of society can play a role in AI control (7). The scientific community can participate in this control and make sure that fundamental scientific knowledge remains a public resource.
The case of drug development shows how private companies can contribute to specific applications but also benefit from openly shared fundamental biological knowledge. AI systems contain biological knowledge in a form that is not equivalent to that available from articles, books or individual human experts.
You can read the responses to the survey given by two AI systems, ChatGPT and Bard.
Initial survey results (July-August 2023)
​
The survey received 186 responses. 80% of respondents had previously received an NIH grant (among them a Nobel Prize winner). Most of the other respondents were international scientists who have published papers included in PubMed.
​
After a short introduction, the survey asked three questions, and this page was mentioned for those who wanted more background information. Here are the questions and a summary of the responses:
​
Artificial Intelligence (AI) systems contain a large and increasing amount of fundamental knowledge about human biology. Interacting with AI systems contributes to training them; an example of this is reinforcement learning from human feedback.
Q1. Do you think that it is useful for the biomedical community to discuss the control of AI systems?
​
96% of respondents answered YES to this fixed-choice question.
​
Q2. What are the advantages and disadvantages of promoting the development of AI systems that are more transparent, and where the biomedical community participates in the control?
​
You can see all the individual responses to this question here.
Scientists provided a wider range of ideas than the AI systems mentioned above.
​
Q3. Should biomedical scientists interact preferentially with AI systems that have these features?
​
You can see all the individual responses to this question here.
54% of respondents answered YES or gave an equivalent response, 4% answered NO, 29% made a more nuanced comment, and 12% did not answer this question.
​
Our aim is to stimulate a thoughtful collective evaluation of this important issue.
Several respondents have expressed an interest in contributing to this process in different ways, and you are all welcome to do so. You can, for example, suggest relevant papers or experts who might provide additional opinions.
Information addressing points made by the initial survey respondents will be presented here, and we will then invite more scientists to provide their opinions in a second round of the survey.
​
REFERENCES
1- Ghassemi M, Oakden-Rayner L, Beam AL. "The false hope of current approaches to explainable artificial intelligence in health care." The Lancet Digital Health. 2021 Nov 1;3(11):e745-50.
2- Burnell R, Schellaert W, Burden J, Ullman TD, Martinez-Plumed F, Tenenbaum JB, Rutar D, Cheke LG, Sohl-Dickstein J, Mitchell M, Kiela D. "Rethink reporting of evaluation results in AI." Science. 2023 Apr 14;380(6641):136-8.
3- Mitchell M. "How do we know how smart AI systems are?". Science. 2023 Jul 13;381(6654):adj5957.
4- Heaven WD. "The inside story of how ChatGPT was built from the people who made it." MIT Technology Review. 2023.
5- Ouyang L, Wu J, Jiang X, Almeida D, Wainwright C, Mishkin P, Zhang C, Agarwal S, Slama K, Ray A, Schulman J. "Training language models to follow instructions with human feedback." Advances in Neural Information Processing Systems. 2022 Dec 6;35:27730-44.
6- Thirunavukarasu AJ, Ting DS, Elangovan K, Gutierrez L, Tan TF, Ting DS. "Large language models in medicine." Nature Medicine. 2023 Jul 17.
7- Taddeo M, Floridi L. "How AI can be a force for good." Science. 2018 Aug 24;361(6404):751-2.
