https://elpais.com/tecnologia/2024-05-18/la-distopia-como-marketing-por-que-openai-cree-que-queremos-enamorarnos-de-un-robot.html

Regards.
Teams of Coordinated GPT-4 Bots Can Exploit Zero-Day Vulnerabilities, Researchers Warn
Posted by EditorDavid on Monday June 10, 2024 @12:44AM from the battle-bots dept.

New Atlas reports on a research team that successfully used GPT-4 to exploit 87% of newly discovered security flaws for which a fix hadn't yet been released. This week the same team got even better results from a team of autonomous, self-propagating Large Language Model agents using a Hierarchical Planning with Task-Specific Agents (HPTSA) method:

Quote:
Instead of assigning a single LLM agent trying to solve many complex tasks, HPTSA uses a "planning agent" that oversees the entire process and launches multiple "subagents," that are task-specific... When benchmarked against 15 real-world web-focused vulnerabilities, HPTSA has shown to be 550% more efficient than a single LLM in exploiting vulnerabilities and was able to hack 8 of 15 zero-day vulnerabilities. The solo LLM effort was able to hack only 3 of the 15 vulnerabilities.

"Our findings suggest that cybersecurity, on both the offensive and defensive side, will increase in pace," the researchers conclude. "Now, black-hat actors can use AI agents to hack websites. On the other hand, penetration testers can use AI agents to aid in more frequent penetration testing. It is unclear whether AI agents will aid cybersecurity offense or defense more and we hope that future work addresses this question. Beyond the immediate impact of our work, we hope that our work inspires frontier LLM providers to think carefully about their deployments."

Thanks to long-time Slashdot reader schwit1 for sharing the article.
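For readers curious what the planner-plus-subagents pattern described above might look like, here is a minimal, hypothetical Python sketch. The prompts, agent names, vulnerability classes, and the run_llm stub are all illustrative assumptions, not the researchers' actual HPTSA implementation.

```python
# A minimal, hypothetical sketch of a planning agent dispatching task-specific
# subagents, in the spirit of HPTSA. NOT the researchers' actual code.
from dataclasses import dataclass


def run_llm(prompt: str) -> str:
    """Stand-in for a call to an LLM such as GPT-4; swap in a real API client."""
    return f"[model output for: {prompt[:60]}...]"


@dataclass
class SubAgent:
    """A task-specific agent that only knows how to probe one vulnerability class."""
    name: str
    instructions: str

    def attempt(self, target: str) -> str:
        return run_llm(f"{self.instructions}\nTarget: {target}")


# Hypothetical task-specific subagents the planner can launch.
SUBAGENTS = {
    "sqli": SubAgent("sqli", "Probe the target for SQL injection."),
    "xss": SubAgent("xss", "Probe the target for cross-site scripting."),
    "csrf": SubAgent("csrf", "Probe the target for CSRF weaknesses."),
}


def planning_agent(target: str) -> list[str]:
    """The planner surveys the target, then dispatches task-specific subagents."""
    plan = run_llm(
        f"Which vulnerability classes look promising on {target}? "
        f"Options: {', '.join(SUBAGENTS)}"
    )
    print("Planner notes:", plan)
    # A real planner would parse the model's answer; as a placeholder policy,
    # this sketch simply dispatches every registered subagent.
    return [agent.attempt(target) for agent in SUBAGENTS.values()]


if __name__ == "__main__":
    for report in planning_agent("https://example.test/app"):
        print(report)
```

The key design point the paper highlights is the division of labour: the planner holds the high-level exploration state, while each subagent carries only the narrow expertise for its vulnerability class.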
Lex Fridman | Roman Yampolskiy: Dangers of Superintelligent AI | Lex Fridman Podcast #431

Regards.
OpenAI Co-Founder Ilya Sutskever Launches Venture For Safe Superintelligence
Posted by msmash on Wednesday June 19, 2024 @02:23PM from the how-about-that dept.

Ilya Sutskever, co-founder of OpenAI who recently left the startup, has launched a new venture called Safe Superintelligence Inc., aiming to create a powerful AI system within a pure research organization. Sutskever has made AI safety the top priority for his new company. Safe Superintelligence has two more co-founders: investor and former Apple AI lead Daniel Gross, and Daniel Levy, known for training large AI models at OpenAI. From a report:

Quote:
Researchers and intellectuals have contemplated making AI systems safer for decades, but deep engineering around these problems has been in short supply. The current state of the art is to use both humans and AI to steer the software in a direction aligned with humanity's best interests. Exactly how one would stop an AI system from running amok remains a largely philosophical exercise.

Sutskever says that he's spent years contemplating the safety problems and that he already has a few approaches in mind. But Safe Superintelligence isn't yet discussing specifics. "At the most basic level, safe superintelligence should have the property that it will not harm humanity at a large scale," Sutskever says. "After this, we can say we would like it to be a force for good. We would like to be operating on top of some key values. Some of the values we were thinking about are maybe the values that have been so successful in the past few hundred years that underpin liberal democracies, like liberty, democracy, freedom."

Sutskever says that the large language models that have dominated AI will play an important role within Safe Superintelligence but that it's aiming for something far more powerful. With current systems, he says, "you talk to it, you have a conversation, and you're done." The system he wants to pursue would be more general-purpose and expansive in its abilities. "You're talking about a giant super data center that's autonomously developing technology. That's crazy, right? It's the safety of that that we want to contribute to."
Introducing OpenAI o1-preview
A new series of reasoning models for solving hard problems. Available starting 9.12
OpenAI · 2024.09.12

We've developed a new series of AI models designed to spend more time thinking before they respond. They can reason through complex tasks and solve harder problems than previous models in science, coding, and math.

Today, we are releasing the first of this series in ChatGPT and our API. This is a preview and we expect regular updates and improvements. Alongside this release, we're also including evaluations for the next update, currently in development.

How it works
We trained these models to spend more time thinking through problems before they respond, much like a person would. Through training, they learn to refine their thinking process, try different strategies, and recognize their mistakes.

In our tests, the next model update performs similarly to PhD students on challenging benchmark tasks in physics, chemistry, and biology. We also found that it excels in math and coding. In a qualifying exam for the International Mathematics Olympiad (IMO), GPT-4o correctly solved only 13% of problems, while the reasoning model scored 83%. Their coding abilities were evaluated in contests and reached the 89th percentile in Codeforces competitions. You can read more about this in our technical research post.

As an early model, it doesn't yet have many of the features that make ChatGPT useful, like browsing the web for information and uploading files and images. For many common cases GPT-4o will be more capable in the near term. But for complex reasoning tasks this is a significant advancement and represents a new level of AI capability. Given this, we are resetting the counter back to 1 and naming this series OpenAI o1.

Safety
As part of developing these new models, we have come up with a new safety training approach that harnesses their reasoning capabilities to make them adhere to safety and alignment guidelines. By being able to reason about our safety rules in context, it can apply them more effectively.

One way we measure safety is by testing how well our model continues to follow its safety rules if a user tries to bypass them (known as "jailbreaking"). On one of our hardest jailbreaking tests, GPT-4o scored 22 (on a scale of 0-100) while our o1-preview model scored 84. You can read more about this in the system card and our research post.

To match the new capabilities of these models, we've bolstered our safety work, internal governance, and federal government collaboration. This includes rigorous testing and evaluations using our Preparedness Framework, best-in-class red teaming, and board-level review processes, including by our Safety & Security Committee.

To advance our commitment to AI safety, we recently formalized agreements with the U.S. and U.K. AI Safety Institutes. We've begun operationalizing these agreements, including granting the institutes early access to a research version of this model. This was an important first step in our partnership, helping to establish a process for research, evaluation, and testing of future models prior to and following their public release.

Whom it's for
These enhanced reasoning capabilities may be particularly useful if you're tackling complex problems in science, coding, math, and similar fields.
For example, o1 can be used by healthcare researchers to annotate cell sequencing data, by physicists to generate complicated mathematical formulas needed for quantum optics, and by developers in all fields to build and execute multi-step workflows.

OpenAI o1-mini
The o1 series excels at accurately generating and debugging complex code. To offer a more efficient solution for developers, we're also releasing OpenAI o1-mini, a faster, cheaper reasoning model that is particularly effective at coding. As a smaller model, o1-mini is 80% cheaper than o1-preview, making it a powerful, cost-effective model for applications that require reasoning but not broad world knowledge.

How to use OpenAI o1
ChatGPT Plus and Team users will be able to access o1 models in ChatGPT starting today. Both o1-preview and o1-mini can be selected manually in the model picker, and at launch, weekly rate limits will be 30 messages for o1-preview and 50 for o1-mini. We are working to increase those rates and enable ChatGPT to automatically choose the right model for a given prompt.

ChatGPT Enterprise and Edu users will get access to both models beginning next week. Developers who qualify for API usage tier 5 can start prototyping with both models in the API today with a rate limit of 20 RPM. We're working to increase these limits after additional testing. The API for these models currently doesn't include function calling, streaming, support for system messages, and other features. To get started, check out the API documentation.

We also are planning to bring o1-mini access to all ChatGPT Free users.

What's next
This is an early preview of these reasoning models in ChatGPT and the API. In addition to model updates, we expect to add browsing, file and image uploading, and other features to make them more useful to everyone. We also plan to continue developing and releasing models in our GPT series, in addition to the new OpenAI o1 series.
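As a concrete illustration of the API notes above, here is a minimal sketch of calling o1-preview with the official OpenAI Python SDK. It assumes the openai package is installed and an OPENAI_API_KEY environment variable is set, and it deliberately avoids system messages and streaming, which the announcement says the o1 API does not support at launch.

```python
# Minimal sketch: querying o1-preview through the official OpenAI Python SDK.
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1-preview",
    messages=[
        # user-role message only; system messages are unsupported at launch
        {"role": "user", "content": "Prove that the sum of two odd integers is even."}
    ],
)

print(response.choices[0].message.content)
```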
OpenAI CEO Sam Altman Anticipates Superintelligence In 'a Few Thousand Days'
Posted by BeauHD on Monday September 23, 2024 @09:30PM from the what-to-expect dept.

In a rare blog post today, OpenAI CEO Sam Altman laid out his vision of the AI-powered future, which he refers to as "The Intelligence Age." Among the most notable claims, Altman said superintelligence might be achieved in "a few thousand days." VentureBeat reports:

Quote:
Specifically, Altman argues that "deep learning works," and can generalize across a range of domains and difficult problem sets based on its training data, allowing people to "solve hard problems," including "fixing the climate, establishing a space colony, and the discovery of all physics." As he puts it: "That's really it; humanity discovered an algorithm that could really, truly learn any distribution of data (or really, the underlying "rules" that produce any distribution of data). To a shocking degree of precision, the more compute and data available, the better it gets at helping people solve hard problems. I find that no matter how much time I spend thinking about this, I can never really internalize how consequential it is."

In a provocative statement that many AI industry participants and close observers have already seized upon in discussions on X, Altman also said that superintelligence -- AI that is "vastly smarter than humans," according to previous OpenAI statements -- may be achieved in "a few thousand days." "This may turn out to be the most consequential fact about all of history so far. It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I'm confident we'll get there." A thousand days is roughly 2.7 years, a time frame much sooner than the five years most experts give.
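For scale, the conversion in that last line is easy to check: a thousand days is about 2.7 years, while "a few thousand days" (say two to three thousand) works out to roughly five and a half to eight years. A quick sketch:

```python
# Quick check of the day-to-year conversions implied by "a few thousand days".
DAYS_PER_YEAR = 365.25

for days in (1_000, 2_000, 3_000):
    print(f"{days:>5} days ~ {days / DAYS_PER_YEAR:.1f} years")
# 1000 days ~ 2.7 years, 2000 ~ 5.5 years, 3000 ~ 8.2 years
```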
The Intelligence Age
Sam Altman · 2024.09.23

In the next couple of decades, we will be able to do things that would have seemed like magic to our grandparents.

This phenomenon is not new, but it will be newly accelerated. People have become dramatically more capable over time; we can already accomplish things now that our predecessors would have believed to be impossible.

We are more capable not because of genetic change, but because we benefit from the infrastructure of society being way smarter and more capable than any one of us; in an important sense, society itself is a form of advanced intelligence. Our grandparents – and the generations that came before them – built and achieved great things. They contributed to the scaffolding of human progress that we all benefit from. AI will give people tools to solve hard problems and help us add new struts to that scaffolding that we couldn't have figured out on our own. The story of progress will continue, and our children will be able to do things we can't.

It won't happen all at once, but we'll soon be able to work with AI that helps us accomplish much more than we ever could without AI; eventually we can each have a personal AI team, full of virtual experts in different areas, working together to create almost anything we can imagine. Our children will have virtual tutors who can provide personalized instruction in any subject, in any language, and at whatever pace they need. We can imagine similar ideas for better healthcare, the ability to create any kind of software someone can imagine, and much more.

With these new abilities, we can have shared prosperity to a degree that seems unimaginable today; in the future, everyone's lives can be better than anyone's life is now. Prosperity alone doesn't necessarily make people happy – there are plenty of miserable rich people – but it would meaningfully improve the lives of people around the world.

Here is one narrow way to look at human history: after thousands of years of compounding scientific discovery and technological progress, we have figured out how to melt sand, add some impurities, arrange it with astonishing precision at extraordinarily tiny scale into computer chips, run energy through it, and end up with systems capable of creating increasingly capable artificial intelligence.

This may turn out to be the most consequential fact about all of history so far. It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I'm confident we'll get there.

How did we get to the doorstep of the next leap in prosperity?

In three words: deep learning worked.

In 15 words: deep learning worked, got predictably better with scale, and we dedicated increasing resources to it.

That's really it; humanity discovered an algorithm that could really, truly learn any distribution of data (or really, the underlying "rules" that produce any distribution of data). To a shocking degree of precision, the more compute and data available, the better it gets at helping people solve hard problems. I find that no matter how much time I spend thinking about this, I can never really internalize how consequential it is.

There are a lot of details we still have to figure out, but it's a mistake to get distracted by any particular challenge. Deep learning works, and we will solve the remaining problems.
We can say a lot of things about what may happen next, but the main one is that AI is going to get better with scale, and that will lead to meaningful improvements to the lives of people around the world.

AI models will soon serve as autonomous personal assistants who carry out specific tasks on our behalf, like coordinating medical care. At some point further down the road, AI systems are going to get so good that they help us make better next-generation systems and make scientific progress across the board.

Technology brought us from the Stone Age to the Agricultural Age and then to the Industrial Age. From here, the path to the Intelligence Age is paved with compute, energy, and human will.

If we want to put AI into the hands of as many people as possible, we need to drive down the cost of compute and make it abundant (which requires lots of energy and chips). If we don't build enough infrastructure, AI will be a very limited resource that wars get fought over and that becomes mostly a tool for rich people.

We need to act wisely but with conviction. The dawn of the Intelligence Age is a momentous development with very complex and extremely high-stakes challenges. It will not be an entirely positive story, but the upside is so tremendous that we owe it to ourselves, and the future, to figure out how to navigate the risks in front of us.

I believe the future is going to be so bright that no one can do it justice by trying to write about it now; a defining characteristic of the Intelligence Age will be massive prosperity.

Although it will happen incrementally, astounding triumphs – fixing the climate, establishing a space colony, and the discovery of all of physics – will eventually become commonplace. With nearly-limitless intelligence and abundant energy – the ability to generate great ideas, and the ability to make them happen – we can do quite a lot.

As we have seen with other technologies, there will also be downsides, and we need to start working now to maximize AI's benefits while minimizing its harms. As one example, we expect that this technology can cause a significant change in labor markets (good and bad) in the coming years, but most jobs will change more slowly than most people think, and I have no fear that we'll run out of things to do (even if they don't look like "real jobs" to us today). People have an innate desire to create and to be useful to each other, and AI will allow us to amplify our own abilities like never before. As a society, we will be back in an expanding world, and we can again focus on playing positive-sum games.

Many of the jobs we do today would have looked like trifling wastes of time to people a few hundred years ago, but nobody is looking back at the past, wishing they were a lamplighter. If a lamplighter could see the world today, he would think the prosperity all around him was unimaginable. And if we could fast-forward a hundred years from today, the prosperity all around us would feel just as unimaginable.
Regards.
Published in Nature

"Worrying": Artificial intelligence is becoming less reliable, according to an international study

In the field of artificial intelligence (AI), bigger does not mean better. Language models – the deep-learning systems behind applications such as ChatGPT – are trained on ever larger volumes of data. Yet their reliability has been getting worse, according to a new study by the Universitat Politècnica de València, the University of Cambridge and ValgrAI, published this Wednesday in the prestigious scientific journal Nature.

AI models are trained on huge volumes of data drawn from the Internet so that they can generate text, images, audio or video. The process works probabilistically: the machine composes sentences based on what it sees most frequently on the web. Impressive as it is, this conversational ability also produces errors, since a plausible-sounding explanation can hide a falsehood. The big technology companies shaping these generative-AI chatbots – OpenAI, Microsoft and Google, among others – keep updating and refining their models with ever more training data. That approach, however, appears not to be infallible.

The research finds that even the most advanced models still produce wrong answers, including on tasks considered easy, a phenomenon it calls "difficulty discordance". "The models can solve certain complex tasks in line with human abilities, yet at the same time fail on simple tasks in the same domain. For example, they can solve several PhD-level mathematics problems but get a simple addition wrong," explains José Hernández Orallo, a researcher at the Valencian Research Institute for Artificial Intelligence (VRAIN) at the UPV and at ValgrAI.

This tendency to make mistakes on tasks humans consider easy "means there is no 'safe zone' in which the models can be trusted to work perfectly," adds Yael Moros Daval, a researcher at VRAIN.

A "worrying" trend

Another problem is that these models always answer users' questions, even when they have no clear answer. "This overconfident behaviour, in which they give answers even when those answers are incorrect, can be considered a worrying trend that undermines users' trust," adds Andreas Kaltenbrunner, lead researcher of the AI and Data for Society group at the UOC, in an assessment also collected by SMC España. For this reason, the study stresses the importance of developing AI models that recognise their limitations and decline to answer when they cannot do so accurately.

"Although larger, fine-tuned models tend to be more stable and to give more correct answers, they are also more prone to serious errors that go unnoticed, precisely because they avoid declining to answer," summarises Pablo Haya Coll, a researcher at the Computational Linguistics Laboratory of the Universidad Autónoma de Madrid (UAM), in an opinion collected by SMC España.

An outdated study

The study also has significant limitations: it only analyses models released before the summer of 2023, so it is already out of date. It compares systems such as OpenAI's GPT-3 and GPT-4, but does not evaluate newer versions such as OpenAI's GPT-4o or o1 (known as "strawberry"), or Meta's Llama 3.
In the case of o1, released two weeks ago, it "may well be able to improve on some of the problems mentioned in the article," Kaltenbrunner notes.

A narrative that benefits Big Tech

This is not the first study to question the quality of AI systems and to cast doubt on the kind of tests used to measure their performance.

A report published last Saturday – not yet peer-reviewed – rebuts the industry paradigm that AI performance improves only by scaling up. According to its authors, the computer scientists Gaël Varoquaux, Sasha Luccioni and Meredith Whittaker (president of Signal), the obsession with size as the yardstick of progress in AI helps drive up the budget needed to develop these systems, a factor that benefits the big corporations and condemns university labs to "depend ever more on close ties with industry".

That narrative of training AI on ever more data, they add, creates other, less visible problems. Betting on bigger models not only fails to improve their performance; it also drives up the computing power they require, their energy consumption and, with it, their climate impact.