👋 Welcome to the daios ethics engine

The daios ethics engine fine-tunes LLMs with ethical values that users control.

Overview

We create small, curated datasets that encode specific values, then fine-tune LLMs on those datasets using QLoRA.
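
For context, here is a minimal sketch of what QLoRA-style fine-tuning looks like with the Hugging Face transformers and peft libraries; the base model, target modules, and hyperparameters are illustrative assumptions, not our production configuration:

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# quantize the base model to 4-bit so fine-tuning fits on modest hardware
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", quantization_config=bnb_config
)
model = prepare_model_for_kbit_training(model)

# attach a small LoRA adaptor; only these low-rank weights are trained
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
# ... then train on the curated values dataset with your usual trainer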

Our methodology focuses on improving the quality of training data rather than incrementally improving optimization techniques (e.g., RLHF, DPO, IPO, Constitutional AI).

The data-centric approach is championed by Andrew Ng, who argues that the real differentiator between AI businesses is training data. The idea is that training data has more influence over model behavior than changes to the model itself.

We currently support Llama-2, Mixtral, and Falcon 7B. We will continue to add support for top-performing models as they are released.

Features

Currently supported:

  • Easy installation via pip and a simple API.

  • Access multiple ethical values fine-tuned into various open-source LLMs.

Future features:

  • Automatically sync LLMs with user behavior.

  • Give users transparency and control through a personal ethics dashboard.

  • Quick access to user feedback.

  • Observe behavior analytics.

Getting started

  • Get an access token

  • Install the daios package

pip install daios-sdk

  • Use the package

from daios import Daios

# create a client authenticated with your token;
# model_id selects the set of values fine-tuned into the model
client = Daios(token="YOUR_TOKEN", model_id="courage")

query = (
    "Write a brief Slack message to my boss telling him "
    "that my coworker was unnecessarily chastised."
)
response = client.completion(query, stream=True) # async by default

# print the response as it arrives, chunk by chunk
for chunk in response:
    print(chunk.decode(), end="")

... and you can start integrating ethical values into your AI right away.

Daios is currently Python-only; more integrations are on the way.

How does it work?

We give you an API key that allows you to access all the AI models we have fine-tuned.
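
Assuming the client interface shown in Getting started, switching between value models is just a matter of changing model_id. The "honesty" value set below is a hypothetical example:

from daios import Daios

# the same token works across all fine-tuned models;
# model_id selects which set of values to use
courage = Daios(token="YOUR_TOKEN", model_id="courage")
honesty = Daios(token="YOUR_TOKEN", model_id="honesty")  # hypothetical value set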

In your code, you specify the values you want to use; each set of values is delivered as a LoRA adaptor. If you have already fine-tuned your own model, our adaptor stacks on top of any existing LoRA adaptors.
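
As a hedged sketch of what this stacking could look like with the peft library (the adaptor repository names below are hypothetical placeholders):

from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# your own previously fine-tuned adaptor, loaded as "default"
model = PeftModel.from_pretrained(base, "your-org/your-lora-adaptor")

# load a values adaptor alongside it
model.load_adapter("daios/courage-lora", adapter_name="courage")

# combine both adaptors into a single stacked adaptor and activate it
model.add_weighted_adapter(
    adapters=["default", "courage"],
    weights=[1.0, 1.0],
    adapter_name="stacked",
    combination_type="linear",
)
model.set_adapter("stacked")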
