KL3M

The first LLM to receive the Fairly Trained certification.

A New Chapter for KL3M


The ALEA Institute is proud to steward KL3M, the first Fairly Trained large language model family, for public benefit.

Originally developed by our team under 273 Ventures LLC in Q4 2023 and Q1 2024, KL3M was donated to ALEA in August 2024.

The ALEA Institute is committed to maintaining and further developing KL3M, with the goal of promoting its adoption and use as the gold standard for legal and ethical AI model development.

Under its new stewardship, KL3M is an open data, open source, open weights model, and we invite you to learn from it or build on it for yourself.
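
As a minimal sketch of what "build on it for yourself" can look like in practice, the snippet below loads an open-weights causal language model with the Hugging Face transformers library and generates a short continuation. The model identifier is a placeholder assumption, not a confirmed KL3M release name; it also assumes the weights are published on the Hugging Face Hub or available as a local path.

    # Minimal sketch: loading an open-weights causal language model with the
    # Hugging Face transformers library. The model identifier is a placeholder,
    # not a confirmed KL3M release name; substitute the actual repository name
    # or a local path to downloaded weights.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "alea-institute/kl3m-placeholder"  # hypothetical identifier

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    prompt = "This Agreement is entered into by and between"
    inputs = tokenizer(prompt, return_tensors="pt")

    # Generate a short continuation; a small model can run on consumer-grade
    # CPUs or GPUs.
    output = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))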

Information about KL3M


KL3M is notable for several key reasons:

  • Clean training data: KL3M was trained on a high-quality, curated dataset that is free from copyright issues, toxic sources, and synthetic data generated by other models.
  • Low perplexity: KL3M achieves state-of-the-art perplexity scores across evaluations, including on legal-domain text (a sketch of how perplexity is measured follows this list).
  • Low toxicity: KL3M has been designed to minimize toxicity, with a significantly lower rate of “bad” words and slurs compared to other models.
  • Efficient and accessible: KL3M is built to run efficiently on consumer-grade hardware, making it accessible to organizations of all sizes.
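
For readers unfamiliar with the metric, the sketch below shows how perplexity is typically computed for a causal language model: exponentiate the mean next-token cross-entropy over a sample of text. Lower values mean the model assigns higher probability to the observed text. The model identifier is again a placeholder assumption rather than a confirmed release name.

    # Sketch: measuring perplexity for a causal language model on a text sample.
    # Perplexity is exp(mean per-token cross-entropy); lower is better.
    # The model identifier is a placeholder, not a confirmed release name.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "alea-institute/kl3m-placeholder"  # hypothetical identifier
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    model.eval()

    text = "The parties agree that this Agreement is governed by Delaware law."
    input_ids = tokenizer(text, return_tensors="pt").input_ids

    with torch.no_grad():
        # With labels=input_ids, the model returns the mean next-token
        # cross-entropy loss over the sequence.
        loss = model(input_ids, labels=input_ids).loss

    print(f"perplexity = {torch.exp(loss).item():.2f}")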

More information is available at the KL3M homepage.

Contact us

Want to talk or collaborate?

Don't be shy. We'd love to hear from you.
