Experts don't think AI is ready to be a 'co-scientist'


Google announced its "AI co-scientist" last month, an AI the company says is designed to help scientists come up with hypotheses and research plans. Google pitched it as a way to uncover new knowledge, but experts think the tool falls well short of those promises.

"This preliminary tool doesn't seem likely to see serious use," Sarah Beery, a computer vision researcher at MIT, told TechCrunch. "I'm not sure the scientific community has a need for this type of hypothesis-generation system."

Google is the latest tech giant to promote the idea that AI will one day dramatically accelerate scientific research. In an essay earlier this year, OpenAI CEO Sam Altman said that "superintelligent" AI could "massively accelerate scientific discovery and innovation." Similarly, Anthropic CEO Dario Amodei has predicted that AI could help develop cures for most cancers.

However, many researchers don't consider today's AI especially useful for guiding the scientific process. Applications like Google's AI co-scientist appear to be more hype than anything else, they say, unsupported by empirical data.

For example, in its blog post describing the AI co-scientist, Google said the tool had already demonstrated potential in areas such as drug repurposing for acute myeloid leukemia, a type of blood cancer that affects the bone marrow. Yet the results are so vague that "no legitimate scientist would take [them] seriously," said Favia Dubyk, a pathologist.

"This could be used as a good starting point for researchers, but (…) the lack of detail is worrisome and doesn't lend me to trust it," Dubyk told TechCrunch. "The lack of information provided makes it really hard to understand if this can actually be helpful."

This isn't the first time Google has been criticized by the scientific community for touting a supposed breakthrough without providing a means to reproduce the results.

In 2020, Google claimed that one of its AI systems, trained to detect breast tumors, achieved better results than human radiologists. Researchers from Harvard and Stanford published a rebuttal in the journal Nature, saying that the lack of detailed methods and code in Google's research "undermine[d] its scientific value."

Scientists have also criticized Google for glossing over the limitations of its AI tools aimed at scientific disciplines such as materials engineering. In 2023, the company said that around 40 "new materials" had been synthesized with the help of one of its AI systems, called GNoME. Yet an outside analysis found that none of the materials was, in fact, net new.

"We won't truly understand the strengths and limitations of tools like Google's co-scientist until they undergo rigorous, independent evaluation across diverse scientific disciplines," Ashique KhudaBukhsh, an assistant professor at the Rochester Institute of Technology, told TechCrunch. "AI often performs well in controlled environments but may fail when applied at scale."

Complex processes

Part of the challenge in developing AI tools to aid scientific discovery is anticipating the untold number of confounding factors. AI might come in handy in areas where broad exploration is needed, such as narrowing down a vast list of possibilities. But it's less clear whether AI is capable of the kind of out-of-the-box problem-solving that leads to scientific breakthroughs.

"We've seen throughout history that some of the most important scientific advancements, such as the development of mRNA vaccines, were driven by human intuition and perseverance in the face of skepticism," KhudaBukhsh said. "AI, as it stands today, may not be well suited to replicate that."

Lana Sinapayen, an AI researcher at Sony Computer Science Laboratories in Japan, believes that tools like Google's AI co-scientist take aim at the wrong kind of work.

Sinapayen sees genuine value in AI that can automate technically difficult or tedious tasks, such as summarizing new academic literature or formatting work to fit a grant application's requirements. But there isn't much demand within the scientific community for an AI co-scientist that generates hypotheses, she says, since that is the part of the job many researchers enjoy most.

"For many scientists, myself included, generating hypotheses is the most fun part of the job," Sinapayen told TechCrunch. "Why would I want to outsource my fun to a computer, and then be left with only the hard work to do myself? In general, many generative AI researchers seem to misunderstand why humans do what they do."

Beery noted that the hardest step in the scientific process is often designing and implementing the studies and analyses needed to verify or disprove a hypothesis, which isn't necessarily within reach of current AI systems. AI can't use physical tools to carry out experiments, of course, and it often performs worse on problems where data is extremely limited.

"Most science isn't possible to do entirely virtually; there is frequently a significant component of the scientific process that is physical, like collecting new data and conducting experiments in the lab," Beery said. "One big limitation of systems like Google's AI co-scientist is context about the lab and the researcher using the system: their specific research goals, their skill set, and the resources they have access to."

AI risks

AI's technical shortcomings and risks, such as its tendency to hallucinate, also make scientists wary of endorsing it for serious work.

KhudaBukhsh fears AI tools could simply end up generating noise in the scientific literature rather than elevating progress.

It's already a problem. A recent study found that AI-fabricated "junk science" is flooding Google Scholar, Google's free search engine for scholarly literature.

"AI-generated research, if not carefully monitored, could flood the scientific field with lower-quality or even misleading studies, overwhelming the peer-review process," KhudaBukhsh said. "An overwhelmed peer-review process is already a challenge in fields such as computer science, where top conferences have seen an exponential rise in submissions."

Even well-designed studies could end up tainted by misbehaving AI, Sinapayen said. While she likes the idea of a tool that could assist with literature review and synthesis, she doesn't trust today's AI to execute that work reliably.

"Those are things that various existing tools claim to do, but those are not jobs I would personally leave up to current AI," she said, adding that she also takes issue with how many AI systems are trained and the amount of energy they consume. "Even if all the ethical issues (…) were solved, current AI is just not reliable enough for me to base my work on its output one way or another."


