
There is a growing number of instruction-tuned text generators billing themselves as 'open source'. How open are they really?

Features assessed per project: Availability (open code, LLM data, LLM weights, RL data, RL weights, license), Documentation (code, architecture, preprint, paper, modelcard, datasheet), and Access (package, API).

Project | Maker | LLM base | RL base | Openness judgements (✔︎ open, ~ partial; closed features not shown)
OLMo 7B Instruct | Ai2 | OLMo 7B | OpenInstruct | ✔︎✔︎✔︎✔︎✔︎✔︎✔︎✔︎✔︎✔︎✔︎✔︎~
BLOOMZ | bigscience-workshop | BLOOMZ, mT0 | xP3 | ✔︎✔︎✔︎✔︎~~✔︎✔︎✔︎✔︎✔︎✔︎✔︎
AmberChat | LLM360 | Amber | ShareGPT + Evol-Instruct (synthetic) | ✔︎✔︎✔︎✔︎✔︎✔︎~~✔︎~~✔︎
Open Assistant | LAION-AI | Pythia 12B | OpenAssistant Conversations | ✔︎✔︎✔︎✔︎✔︎✔︎✔︎~✔︎✔︎
OpenChat 3.5 7B | Tsinghua University | Mistral 7B | ShareGPT with C-RLFT | ✔︎✔︎✔︎✔︎~✔︎✔︎✔︎~✔︎~
Pythia-Chat-Base-7B-v0.16 | togethercomputer | EleutherAI pythia | OIG | ✔︎✔︎✔︎✔︎✔︎✔︎✔︎~~~✔︎
Cerebras GPT 111M Instruction | Cerebras + Schramm | Cerebras | Alpaca (synthetic) | ~✔︎✔︎✔︎✔︎~✔︎~✔︎✔︎
RedPajama-INCITE-Instruct-7B | TogetherComputer | RedPajama-INCITE-7B-Base | various (GPT-JT recipe) | ~✔︎✔︎✔︎✔︎~~~✔︎✔︎~
dolly | databricks | EleutherAI pythia | databricks-dolly-15k | ✔︎✔︎✔︎✔︎✔︎✔︎✔︎~✔︎
Tulu V2 DPO 70B | AllenAI | Llama2 | Tulu SFT, Ultrafeedback | ✔︎~✔︎✔︎~~~✔︎~~✔︎
MPT-30B Instruct | MosaicML | MosaicML | dolly, anthropic | ✔︎~✔︎~✔︎✔︎~~✔︎~
MPT-7B Instruct | MosaicML | MosaicML | dolly, anthropic | ✔︎~✔︎~✔︎✔︎~✔︎✔︎
trlx | carperai | various (pythia, flan, OPT) | various | ✔︎✔︎✔︎~✔︎✔︎~~✔︎
NeuralChat 7B | Intel | Mistral 7B | Orca | ~✔︎✔︎✔︎✔︎~~~~~
Vicuna 13B v 1.3 | LMSYS | LLaMA | ShareGPT | ✔︎~✔︎~✔︎✔︎~✔︎~
minChatGPT | ethanyanjiali | GPT2 | anthropic | ✔︎✔︎✔︎~✔︎✔︎~✔︎
ChatRWKV | BlinkDL/RWKV | RWKV-LM | alpaca, shareGPT (synthetic) | ✔︎~✔︎✔︎~~~✔︎~
BELLE | KE Technologies | LLaMA & BLOOMZ | alpaca, shareGPT, Belle (synthetic) | ✔︎~~~~~✔︎✔︎~
Geitje Ultra 7B | Bram van Roy | Mistral 7B | Ultrafeedback Dutch (synthetic) | ~✔︎✔︎✔︎~~~~~
Phi 3 Instruct | Microsoft | Phi3 | Unspecified | ✔︎✔︎✔︎~✔︎~✔︎
WizardLM 13B v1.2 | Microsoft & Peking University | LLaMA2-13B | Evol-Instruct (synthetic) | ~~✔︎✔︎~~✔︎✔︎
Airoboros L2 70B GPT4 | Jon Durbin | Llama2 | Airoboros (synthetic) | ~~✔︎✔︎~~~~~
ChatGLM-6B | THUDM | GLM (own) | Unspecified | ~~✔︎✔︎~~~✔︎
Mistral 7B-Instruct | Mistral AI | unclear | unspecified | ~✔︎~✔︎~~~✔︎
WizardLM-7B | Microsoft & Peking University | LLaMA-7B | Evol-Instruct (synthetic) | ~~✔︎~~~✔︎✔︎
Mistral NeMo Instruct | Mistral AI | Mistral NeMo | unspecified | ~✔︎~✔︎~~✔︎
Qwen 1.5 | Alibaba Cloud | QwenLM | Unspecified | ~✔︎✔︎~~~✔︎
StableVicuna-13B | CarperAI | LLaMA | OASST1 (human), GPT4All (human), Alpaca (synthetic) | ~~~~~~~~~~
Falcon-40B-instruct | Technology Innovation Institute | Falcon 40B | Baize (synthetic) | ~✔︎~✔︎~~~
UltraLM | OpenBMB | LLaMA2 | UltraFeedback (part synthetic) | ~✔︎~~✔︎~~
Yi 34B Chat | 01.AI | Yi 34B | unspecified | ~✔︎✔︎~✔︎~
Koala 13B | BAIR | LLaMA 13B | HC3, ShareGPT, alpaca (synthetic) | ✔︎~~~~~~
Llama 3.1 | Facebook Research | Meta Llama 3 | Meta, undocumented | ~~~~~✔︎~
Mixtral 8x7B Instruct | Mistral AI | Mistral | Unspecified | ✔︎~✔︎~~~
Stable Beluga 2 | Stability AI | LLaMA2 | Orca-style (synthetic) | ~✔︎~~~~~
Stanford Alpaca | Stanford University CRFM | LLaMA | Self-Instruct (synthetic) | ✔︎~~~~✔︎
Falcon-180B-chat | Technology Innovation Institute | Falcon 180B | OpenPlatypus, Ultrachat, Airoboros (synthetic) | ~~~~~~~
Gemma 7B Instruct | Google DeepMind | Gemma | Unspecified | ~~~~~✔︎
Orca 2 | Microsoft Research | LLaMA2 | FLAN, Math, undisclosed (synthetic) | ~✔︎~~~~
Command R+ | Cohere AI |  | Aya Collection | ✔︎✔︎~~
LLaMA2 Chat | Facebook Research | LLaMA2 | Meta, StackExchange, Anthropic | ~~~~~~
Nanbeige2-Chat | Nanbeige LLM lab | Unknown | Unknown | ✔︎✔︎~~
Llama 3 Instruct | Facebook Research | Meta Llama 3 | Meta, undocumented | ~~~~~
Solar 70B | Upstage AI | LLaMA2 | Orca-style, Alpaca-style | ~~~~
Xwin-LM | Xwin-LM | LLaMA2 | unknown | ~~
ChatGPT | OpenAI | GPT 3.5 | Instruct-GPT | ~

How to use this table. Each feature receives a three-level openness judgement: ✔︎ (open), ~ (partial), or closed. The table is sorted by cumulative openness, where ✔︎ counts 1 point, ~ counts 0.5 points, and a closed feature counts 0 points. Note that RL may refer to RLHF or other forms of fine-tuning aimed at fostering instruction-following behaviour.
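The scoring behind the ranking is simple enough to reproduce. The sketch below (Python) is not the project's own tooling: the feature names mirror the table columns above, the example judgements are hypothetical, and only the scoring rule (✔︎ = 1, ~ = 0.5, closed = 0) is taken from the description above.

```python
# Minimal sketch of the cumulative-openness scoring described above.
# The scoring rule (open = 1, partial = 0.5, closed = 0) comes from the text;
# the feature names mirror the table columns; the example data is hypothetical.

SCORES = {"open": 1.0, "partial": 0.5, "closed": 0.0}

FEATURES = [
    # Availability
    "open code", "LLM data", "LLM weights", "RL data", "RL weights", "license",
    # Documentation
    "code", "architecture", "preprint", "paper", "modelcard", "datasheet",
    # Access
    "package", "API",
]

def cumulative_openness(judgements: dict) -> float:
    """Sum per-feature scores; features without a judgement count as closed."""
    return sum(SCORES[judgements.get(feature, "closed")] for feature in FEATURES)

# Hypothetical project that is open on most availability features:
example = {
    "open code": "open",
    "LLM data": "open",
    "LLM weights": "open",
    "RL data": "partial",
    "license": "open",
    "preprint": "open",
    "API": "partial",
}

print(cumulative_openness(example))  # 6.0 (out of a maximum of 14)
```

Sorting projects by this cumulative score reproduces the ordering of the table above.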

Why is openness important?

Open research is the lifeblood of cumulative progress in science and engineering. Openness is key for fundamental research, for fostering critical computational literacy, and for making informed choices for or against deployment of instruction-tuned LLM architectures. The closed & proprietary nature of ChatGPT and kin makes them fundamentally unfit for responsible use in research and education.

Open alternatives provide ways to build reproducible workflows, chart resource costs, and lessen reliance on corporate whims. One aim of our work here is to provide tools to track openness, transparency and accountability in the fast-evolving landscape of instruction-tuned text generators. Read more in the paper (PDF) or contribute to the repo.

If you know a model that should be listed here, or a data point that needs updating, please see the guidelines for contributors and feel free to open an issue or pull request in our repository.

TL;DR

Our paper surveys the openness of instruction-tuned text generators, identifies recurrent patterns in how openness is claimed and documented, and concludes as follows:

Openness is not the full solution to the scientific and ethical challenges of conversational text generators. Open data will not mitigate the harmful consequences of thoughtless deployment of large language models, nor the questionable copyright implications of scraping all publicly available data from the internet. However, openness does make original research possible, including efforts to build reproducible workflows and understand the fundamentals of instruction-tuned LLM architectures. Openness also enables checks and balances, fostering a culture of accountability for data and its curation, and for models and their deployment. We hope that our work provides a small step in this direction.

Papers

Liesenfeld, Andreas, Alianda Lopez, and Mark Dingemanse. 2023. “Opening up ChatGPT: Tracking Openness, Transparency, and Accountability in Instruction-Tuned Text Generators.” In CUI '23: Proceedings of the 5th International Conference on Conversational User Interfaces. July 19-21, Eindhoven. doi: 10.1145/3571884.3604316 (PDF).

Liesenfeld, Andreas, and Mark Dingemanse. 2024. “Rethinking Open Source Generative AI: Open Washing and the EU AI Act.” In FAccT '24: Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 1774–1787. New York: Association for Computing Machinery. doi: 10.1145/3630106.3659005.