Today, IBM (NYSE: IBM) announced plans to host Meta’s Llama 2-chat 70-billion-parameter model in the watsonx.ai studio, with early access now available to select clients and partners.
IBM’s expanding AI portfolio reflects its ongoing collaboration with Meta, including work on open-source projects such as the PyTorch machine learning framework and the Presto query engine used in watsonx.data.
The move reflects IBM’s strategy of integrating third-party models into its AI platform:
This will also support IBM’s strategy of offering both its own AI models and those from third parties. Currently in watsonx.ai, AI builders can leverage models from IBM and the Hugging Face community that are pre-trained to support a range of natural language processing (NLP) tasks, including question answering, content generation and summarization, and text classification and extraction.
IBM will continue to apply its trust and security principles to the generative AI models it offers:
For instance, when users run the Llama 2 model through the prompt lab in watsonx.ai, they can toggle on the AI guardrails function to help automatically remove harmful language from both the input prompt text and the output generated by the model. Meta also documents the fine-tuning methodology used in its large language models.
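For developers who prefer to script this workflow rather than use the prompt lab interface, a minimal sketch of how prompting the hosted Llama 2-chat model might look with the ibm-watson-machine-learning Python SDK is shown below. The model identifier, endpoint URL, credential fields, and generation parameters are illustrative assumptions rather than confirmed details of IBM’s hosted offering, and the AI guardrails toggle described above is a watsonx.ai feature, not something implemented by this snippet.

```python
# Minimal sketch (assumptions: the ibm-watson-machine-learning SDK is installed and
# the account has early access to the hosted Llama 2-chat 70B model in watsonx.ai;
# the model ID, region URL, and parameter values are illustrative, not confirmed).
from ibm_watson_machine_learning.foundation_models import Model
from ibm_watson_machine_learning.metanames import GenTextParamsMetaNames as GenParams

credentials = {
    "url": "https://us-south.ml.cloud.ibm.com",   # assumed region endpoint
    "apikey": "YOUR_IBM_CLOUD_API_KEY",
}

# Generation parameters: cap response length and keep sampling conservative.
params = {
    GenParams.MAX_NEW_TOKENS: 200,
    GenParams.TEMPERATURE: 0.2,
}

# "meta-llama/llama-2-70b-chat" is an assumed identifier for the hosted model.
model = Model(
    model_id="meta-llama/llama-2-70b-chat",
    params=params,
    credentials=credentials,
    project_id="YOUR_WATSONX_PROJECT_ID",
)

# Note: the AI guardrails (harmful-language filtering) described above is toggled
# in the watsonx.ai prompt lab UI; this snippet only sends a plain generation request.
prompt = "Summarize the key benefits of hosting open models in an AI studio in three bullet points."
print(model.generate_text(prompt=prompt))
```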
Additionally, IBM Consulting has 21,000 data, AI and automation consultants, and its Center of Excellence for Generative AI comprises more than 1,000 consultants with specialized generative AI expertise. Together, they will help businesses address their specific AI requirements.