The answers change slightly if the same question is asked again (an example appears towards the end, in the questions posed to ChatGPT 3.5), and they are known to be influenced by the formulation of the questions.
There are many reports of mistakes in statements made by ChatGPT and similar AI systems. We can remain aware of this possibility and develop ways to spot such mistakes, just as we do with statements from human individuals.
What ChatGPT says about the complementary role that AI can play alongside scientists, and about the advantage of being free from certain personal biases, seems reasonable.
The AI system mentions that some scientists might be concerned about not receiving credit for their ideas, and that this concern might limit sharing. We also received this feedback informally when inviting applications for the prize, although those who raised the concern did not mention it on the website.
In this case it is not the concept that is new, but the fact that an AI system discusses it openly. This openness could help different parts of society to address the issue.