Misconceptions about ‘Open Source’ AI

Open AI Models: Unveiling the Power and Pitfalls of Openness

This article explores the evolving landscape of open AI models, focusing on Meta’s recently released Llama and Llama 2 and how they compare to closed systems such as OpenAI’s ChatGPT. We will delve into the potential benefits and drawbacks of making AI more open and accessible to a broader audience.

The rise of artificial intelligence has been accompanied by an increasing demand for transparency and open access to AI models. ChatGPT, the world-famous chatbot developed by OpenAI, has captured the imagination of many with its ability to simulate human-like conversation. However, the intricacies of how ChatGPT functions remain shrouded in secrecy.

In recent months, efforts to make AI more “open” have gained momentum. Meta’s (formerly Facebook’s) release of the Llama model in February was a pivotal moment, as it gave outsiders access to the underlying code and the weights that determine the model’s behavior. Meta followed this up by offering Llama 2, an even more powerful model, for downloading, modification, and reuse; a minimal example of what that looks like in practice is sketched below. Consequently, Meta’s models have become the foundation for numerous companies, researchers, and hobbyists building tools and applications with ChatGPT-like capabilities.
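To make “downloading, modification, and reuse” concrete, here is a minimal sketch of loading Llama 2 through the Hugging Face transformers library. It assumes you have accepted Meta’s license on huggingface.co and logged in with an access token; the checkpoint choice and generation settings are illustrative, not prescribed by Meta.

```python
# Minimal sketch: loading and querying Llama 2 locally via Hugging Face
# transformers. Assumes Meta's license has been accepted on huggingface.co
# and `huggingface-cli login` has been run; device_map="auto" additionally
# requires the accelerate package.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # smallest chat variant

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Because the weights sit on local disk, they can be inspected, fine-tuned,
# or modified -- the property that separates Llama 2 from ChatGPT's
# closed, API-only access.
prompt = "Explain in one sentence what 'open weights' means."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```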

Meta stated, “We have a broad range of supporters around the world who believe in our open approach to today’s AI… researchers committed to doing research with the model, and people across tech, academia, and policy who see the benefits of Llama and an open platform as we do.” Meta’s commitment to openness is evident in the recent release of Code Llama, a fine-tuned model designed specifically for coding tasks, further expanding the possibilities of open AI.
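As a rough illustration, a code-specialized model is used through the same interface as the chat model above; the checkpoint name below is the Code Llama model published on Hugging Face, and the prompt is a made-up example.

```python
# Sketch: code completion with Code Llama via the transformers pipeline.
# The checkpoint and prompt are illustrative; the same license acceptance
# as for Llama 2 applies.
from transformers import pipeline

generator = pipeline("text-generation", model="codellama/CodeLlama-7b-hf")
completion = generator(
    'def fibonacci(n):\n    """Return the n-th Fibonacci number."""\n',
    max_new_tokens=48,
)
print(completion[0]["generated_text"])
```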

The open-source approach, which has revolutionized software development by democratizing access, ensuring transparency, and enhancing security, appears poised to have a similar impact on AI. However, researchers from Carnegie Mellon University, the AI Now Institute, and the Signal Foundation caution that not all models labeled as “open” necessarily adhere to the principles of openness.

While Llama 2 is free to download, modify, and deploy, it is not covered by a conventional open-source license. Meta’s license prohibits using Llama 2 to train other language models and requires a special license for deployment in apps or services with more than 700 million monthly active users. This level of control grants Meta significant technical and strategic advantages; for instance, the company can fold useful modifications made by outside developers back into its own products.

In contrast, the researchers argue that models released under standard open-source licenses, such as EleutherAI’s GPT-Neo, embody true openness. Yet these projects face several challenges. First, the data required to train advanced models is often kept secret. Second, the dominant software frameworks for building such models, TensorFlow (maintained by Google) and PyTorch (maintained by Meta), are controlled by large corporations. Third, the cost of training a large model, which can reach tens or hundreds of millions of dollars per run, puts such training out of reach for most developers; a rough estimate is sketched below. Finally, the human expertise needed to refine and improve these models is concentrated in well-funded corporations.
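To see why training runs are priced in the tens of millions, consider a back-of-envelope estimate; every number below is an assumption chosen for illustration, not a figure from the study.

```python
# Back-of-envelope estimate of the compute bill for one large training run.
# All numbers are illustrative assumptions.
gpu_count = 4096            # assumed size of the training cluster
hours = 24 * 90             # assumed ~90 days of continuous training
usd_per_gpu_hour = 2.50     # assumed cloud rate for a high-end GPU

compute_cost = gpu_count * hours * usd_per_gpu_hour
print(f"Compute alone: ${compute_cost:,.0f}")  # ~$22 million

# Staff, data acquisition, and failed experiments multiply the total,
# which is why only well-funded organizations can afford frontier runs.
```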

This trajectory suggests that the immense potential of AI, one of the most important technologies in recent times, might end up benefiting only a handful of companies such as OpenAI, Microsoft, Meta, and Google. To fully realize the positive impact of AI, it must become more widely accessible.

Meredith Whittaker, president of Signal and one of the researchers involved in the study, argues that openness does not necessarily “democratize” AI. She emphasizes the need for meaningful alternatives to technology controlled by large, monopolistic corporations, especially since AI is increasingly integrated into sensitive domains like healthcare, finance, education, and the workplace. Whittaker suggests that creating conditions conducive to such alternatives is a project that can align with regulatory movements such as antitrust reforms.

Furthermore, opening AI models to the world’s scientists is pivotal for understanding their capabilities and mitigating the risks associated with their deployment and further advancement. Just as obscurity does not guarantee the security of code, hiding the inner workings of powerful AI models may not be the wisest approach.

In conclusion, while Meta’s open approach with Llama and Llama 2 has stimulated the conversation around openness in AI, researchers caution that the label of openness does not always translate into democratization and accessibility. By advocating for genuine openness, supporting alternative AI projects, and implementing comprehensive regulation, we can unlock the true potential of AI and ensure that its benefits reach a wide range of stakeholders.