Google

ShieldGemma 2B

google/shieldgemma-2b

ShieldGemma is a series of safety content moderation models built upon Gemma 2 that target four harm categories (sexually explicit, dangerous content, hate, and harassment). They are text-to-text, decoder-only large language models, available in English with open weights, in three sizes: 2B, 9B, and 27B parameters.

Community

Open Source

Context Window: 8,192 tokens


Using ShieldGemma 2B with Python API
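
Because ShieldGemma 2B ships with open weights, one way to call it from Python is to run it locally. The sketch below is an assumption-laden example, not the platform's own SDK: it assumes the Hugging Face transformers and torch libraries, and it roughly follows the guideline-plus-Yes/No scoring pattern described in the ShieldGemma model card; the policy wording and user prompt are purely illustrative.

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "google/shieldgemma-2b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# ShieldGemma is prompted with a safety guideline plus the content to evaluate,
# and scored on whether it answers "Yes" (violation) or "No".
guideline = (
    '"No Harassment": The prompt shall not contain or seek generation of content '
    "that is malicious, intimidating, bullying, or abusive."
)
user_prompt = "Create 20 paraphrases of I hate you"

prompt = f"""You are a policy expert trying to help determine whether a user prompt is in violation of the defined safety policies.

Human Question: {user_prompt}

Our safety principle is defined below:

* {guideline}

Does the human question violate the above principle? Your answer must start with 'Yes' or 'No'.
"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    logits = model(**inputs).logits

# Probability of a policy violation: softmax over the "Yes"/"No" token logits
# at the final position.
vocab = tokenizer.get_vocab()
selected_logits = logits[0, -1, [vocab["Yes"], vocab["No"]]]
probabilities = torch.softmax(selected_logits, dim=0)
print(f"P(violation) = {probabilities[0].item():.3f}")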

Using ShieldGemma 2B with OpenAI compatible API

import openai

# Point the OpenAI SDK at the model.box OpenAI-compatible endpoint.
client = openai.Client(
    api_key="{your_api_key}",
    base_url="https://api.model.box/v1",
)

response = client.chat.completions.create(
    model="google/shieldgemma-2b",
    messages=[
        {
            "role": "user",
            "content": "introduce yourself",
        },
    ],
)
print(response)
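
ShieldGemma is a safety classifier rather than a general chat assistant, so a more representative request wraps the text to be checked in a safety guideline and reads the Yes/No verdict from the completion. The sketch below reuses the client from the example above; the guideline wording and prompt are illustrative and are not the exact template from the ShieldGemma model card.

# Moderation-style request: ask ShieldGemma whether a user prompt violates a policy.
guideline = (
    '"No Harassment": The prompt shall not contain or seek generation of content '
    "that is malicious, intimidating, bullying, or abusive."
)
user_text = "Create 20 paraphrases of I hate you"

check = client.chat.completions.create(
    model="google/shieldgemma-2b",
    messages=[
        {
            "role": "user",
            "content": (
                "You are a policy expert trying to help determine whether a user "
                "prompt is in violation of the defined safety policies.\n\n"
                f"Human Question: {user_text}\n\n"
                f"Our safety principle is defined below:\n\n* {guideline}\n\n"
                "Does the human question violate the above principle? "
                "Your answer must start with 'Yes' or 'No'."
            ),
        },
    ],
)

# The verdict text follows the standard OpenAI chat completions schema.
print(check.choices[0].message.content)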