Stability AI became famous in recent months with Stable Diffusion, an artificial intelligence image generation model. Now it is moving into text: the company has announced an open source alternative to ChatGPT called StableLM.

StableLM generates text by predicting the next token, as word fragments are called, in a sequence that starts from a prompt supplied by a human.
In that respect it works much like GPT-4, the large language model (LLM) that powers the most capable version of ChatGPT.
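To make the next-token loop concrete, here is a minimal sketch using the Hugging Face transformers library; the checkpoint name is an assumption based on Stability AI's published alpha models, and greedy decoding is used only for simplicity.

```python
# Minimal sketch of next-token prediction (checkpoint name is an assumption).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/stablelm-base-alpha-3b"  # assumed alpha checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Language models will form the backbone of"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# The model repeatedly predicts the most likely next token
# and appends it to the sequence before predicting again.
for _ in range(40):
    with torch.no_grad():
        logits = model(input_ids).logits
    next_token = logits[:, -1, :].argmax(dim=-1, keepdim=True)
    input_ids = torch.cat([input_ids, next_token], dim=-1)

print(tokenizer.decode(input_ids[0], skip_special_tokens=True))
```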
“Language models will form the backbone of our digital economy, and we want everyone to have a say in these designs,” says Stability AI in the blog post announcing the news. “Models like StableLM demonstrate our commitment to transparent, accessible and supportive artificial intelligence technologies.”
StableLM promises to be more efficient
For now, StableLM is in an alpha phase. It has been made available on GitHub in 3-billion and 7-billion-parameter sizes. Stability AI promises that 15-billion and 65-billion-parameter models will be released soon.
Parameters are the variables a model learns from its training data. Smaller counts make models more efficient, allowing them to run locally on laptops or smartphones.
On the other hand, smaller models demand more careful design and training to deliver good results with fewer resources.
StableLM is yet another large language model promising performance close to OpenAI’s GPT-3 while using far fewer parameters; GPT-3 has 175 billion.
Others include Meta’s LLaMA, Stanford’s Alpaca, Databricks’ Dolly 2.0, and Cerebras-GPT.
The models are made available under the Creative Commons BY-SA-4.0 license. This means derivative projects must credit the original author and be shared under the same license.
For now, it is possible to test a 7-billion-parameter version of the model, fine-tuned for chatbot use, on Hugging Face.
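The same chat-tuned checkpoint can also be run locally with transformers. The sketch below is illustrative only: the model ID and the <|SYSTEM|>/<|USER|>/<|ASSISTANT|> prompt markers are assumptions drawn from the alpha model card, so check the Hugging Face page for the current format.

```python
# Sketch of querying the chat-tuned 7B alpha checkpoint
# (model ID and prompt format are assumptions).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/stablelm-tuned-alpha-7b"  # assumed chat-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Assumed chat format: system, user, and assistant turns marked with special tokens.
prompt = "<|SYSTEM|>You are a helpful assistant.<|USER|>What is StableLM?<|ASSISTANT|>"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```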
With information: Stability AI, Ars Technica