Reasoning models and deep research: AI's expansion from statistical prediction to structured problem-solving




AI has developed at an amazing pace. What seemed like science fiction just a few years ago is now an undeniable reality. Back in 2017, my company launched an AI Center of Excellence. To be sure, predictive analytics was steadily improving, and machine learning (ML) algorithms were already in use for voice recognition, spam detection, spell checking (and other applications), but it was early. We believed then that we were only in the first inning of the AI game.

The arrival of GPT-3, and especially of GPT-3.5, which powered the first ChatGPT in November 2022, was a dramatic turning point, now remembered as the "ChatGPT moment."

Since then, hundreds of companies have rushed to build on AI. In March 2023, OpenAI released GPT-4, which was described as showing "sparks of AGI" (artificial general intelligence). By then, it was clear we were well past the first inning. Now, it feels as though we are in the final stretch of an entirely different sport.

The flame of AGI

Two years on, those sparks of AGI are beginning to catch flame.

On a recent episode of the Hard Fork podcast, Dario Amodei, a ten-year veteran of the AI industry who worked at OpenAI before co-founding Anthropic, where he is now CEO, predicted that AI smarter than people at almost everything will arrive by the end of the decade, possibly as soon as 2026 or 2027.

Anthropic CEO Dario Amodei appearing on the Hard Fork podcast. Source: https://www.youtube.com/watch?v=yhgusivsn_y

The evidence for this forecast keeps growing clearer. Late last summer, OpenAI launched o1, the first "reasoning model." It has since released o3, and other companies, including Google and, famously, DeepSeek, have published reasoning models of their own. These models use chain-of-thought (CoT) techniques, breaking complex tasks down into a large number of logical steps at runtime, much as a person might approach a complicated assignment. More recently, sophisticated AI agents, including OpenAI's deep research and Google's AI co-scientist, have revealed how big the coming changes will be.

Unlike earlier large language models (LLMs) that primarily pattern-match against their training data, reasoning models represent a shift from statistical prediction to structured problem-solving. This allows AI to tackle novel problems, enabling something closer to genuine reasoning rather than advanced pattern recognition.

I recently used deep research for a project and was reminded of Arthur C. Clarke's observation: "Any sufficiently advanced technology is indistinguishable from magic." In five minutes, this AI produced what would have taken me three to four days. Was it perfect? No. Was it close? Yes, remarkably so. These agents are quickly becoming magical and transformative, and they are the first of many equally powerful ones that will soon come to market.

The most common definition of AGI is a system that can perform almost any cognitive task a human can. These early agents of change suggest that Amodei and the others who believe we are close may be right, and that AGI could be here soon. That reality would trigger a great many changes, requiring people and processes to adapt on short notice.

But is it really AGI?

Various scenarios could emerge from the near-term arrival of powerful AI. It is both difficult and frightening that we do not really know how this will go. New York Times columnist Ezra Klein addressed this on a recent podcast: "We are rushing toward AGI without really understanding what that is or what it means." He argues, for instance, that little critical thinking or contingency planning is being done around the implications, such as what AGI would mean for employment.

Of course, there is another perspective on this uncertain future and the lack of planning, exemplified by Gary Marcus, who believes that deep learning in general (and LLMs in particular) will not lead to AGI. Marcus has taken issue with Klein's position, pointing to notable shortcomings in current AI technology and suggesting it is just as likely that we remain a long way from AGI.

Marcus may be right, but this could also be simply an academic dispute over semantics. As an alternative to the term AGI, Amodei refers to "powerful AI" in his essay "Machines of Loving Grace," because it conveys a similar idea without the "sci-fi baggage and hype." Call it what you will: AI is only going to grow more powerful.

Playing with fire: The possible AI futures

In a 60 Minutes interview, Alphabet CEO Sundar Pichai said he thinks of AI as "the most profound technology humanity is working on. More profound than fire, electricity or anything that we have done in the past." That certainly fits with the world-changing tenor of today's AI discussions. Fire, too, was a world-changing discovery, but one that demanded control to avert disaster. The same delicate balance applies to AI today.

The discovery of fire advanced civilization by providing warmth, cooking, metallurgy and industrial power, yet it also brought destruction when it raged uncontrolled. Whether AI becomes our greatest ally or our undoing will depend on how well we manage its flames. To extend the metaphor, here are several scenarios that could emerge from an ever more powerful AI:

  1. The controlled flame (utopia): In this scenario, AI is harnessed as a force for human prosperity. Productivity skyrockets, new materials are discovered, personalized medicine becomes available to all, and goods and services grow abundant and inexpensive. This is the scenario championed by many accelerationists, in which AI delivers progress without engulfing us in too much chaos.
  2. The unstable fire (challenged): Here, AI brings undeniable benefits, revolutionizing research, automation, and new products and services, and solving long-standing problems. Yet these benefits are unevenly distributed: while some thrive, others face displacement, widening economic divides and straining social systems. Misinformation spreads and security risks mount. In this scenario, society struggles to balance promise and peril. One could argue that this picture is close to our present-day reality.
  3. The wildfire (dystopia): The third path is one of catastrophe, the scenario most strongly associated with so-called "doomers" and "probability of doom" assessments. Whether through unintended consequences, reckless deployment or AI systems escaping human control, AI's actions go unchecked and accidents multiply. Trust in truth erodes. In the worst case, AI spirals out of control, threatening lives, industries and entire institutions.

While each of these scenarios seems plausible, it is disconcerting that we really do not know which is most likely, especially since the timeline could be short. We can already see early signs of each: AI-driven productivity gains, misinformation spreading at scale and eroding trust, and growing concern over models that resist their guardrails. Each scenario would force its own adaptations on individuals, businesses, governments and society.

Our lack of clarity on AI's trajectory suggests that some mix of all three futures is inevitable. The rise of AI will produce a paradox, fueling prosperity while generating unintended consequences. Amazing breakthroughs will occur, and so will accidents. New fields will appear with tantalizing possibilities and job prospects, while other stalwarts of the economy fade into bankruptcy.

We may not have all the answers, but the future of powerful AI and its impact on humanity is being written now. What we saw at the recent Paris AI Action Summit was largely a mindset of hoping for the best, which is not a smart strategy. Governments, businesses and individuals must shape AI's trajectory before it shapes us. The future of AI will not be determined by technology alone, but by the collective choices we make about how to deploy it.

Gary Grossman is EVP of technology practice at Edelman.



