Responsible AI

1. Purpose and scope

This Responsible AI Policy describes how Webase Global sp. z o.o. designs, deploys, and operates AI-enabled capabilities in AI Smart. It applies across AI text generation, image generation, video generation, ranking/recommendation logic, and workflow automation that uses model outputs.

This policy should be read together with the Terms of Service, Privacy Policy, Data Processing Addendum, and Data Deletion Policy.

2. Core Responsible AI principles

  • Human accountability: users remain accountable for decisions and publications.
  • Safety by design: controls are embedded before release, not added after incidents.
  • Transparency: users receive clear indicators that outputs are AI-generated or AI-assisted where relevant.
  • Privacy and security: AI features follow data minimization and access control principles.
  • Reliability: outputs are treated as probabilistic suggestions, not guaranteed facts.
  • Fairness and non-discrimination: we reduce harmful bias through testing and guardrails.

3. AI use cases covered in the platform

AI features in AI Smart may support content ideation, copy generation, media generation, metadata/tag generation, automation assistance, and workflow acceleration in modules and extensions.

AI outputs may be consumed directly by users, passed into approval flows, or combined with integrations (for example publishing channels) according to workspace configuration.

4. Governance and decision-making structure

AI changes are reviewed through internal product/security/compliance checkpoints. New model behavior, integrations, and automation capabilities are evaluated for legal, security, and abuse risk before broad rollout.

High-impact changes may be staged, limited by plan, limited by workspace settings, or released behind feature flags while monitoring quality and risk metrics.

5. Model/provider strategy and dependency risk

AI Smart may use third-party model providers and APIs. Provider model versions, behavior, latency, and safety systems can change over time. We may switch providers/models or update routing logic to preserve service quality, safety, and legal compliance.

We do not guarantee permanent availability of a specific model endpoint or output format.

6. Data handling in AI flows

AI requests may process prompts, contextual instructions, selected media, and related operational metadata necessary to fulfill the requested task. Data is processed under the legal and contractual framework documented in our privacy and data-processing terms.

Customers should not include sensitive data in prompts unless it is clearly required for the task and lawfully permitted.

7. Data minimization and purpose limitation

We design AI workflows to process only data reasonably required for requested outcomes. Access to data in AI-enabled workflows is constrained by workspace context, permissions, and service controls.

Customer data is not used for unrelated business purposes or in ways that conflict with our contractual and legal commitments.

8. Human-in-the-loop and approval controls

AI output in AI Smart is intended to assist, not replace, user judgment. Users are expected to review generated text, media, and recommendations before publication or operational use.

Where features include publish/schedule flows, users remain responsible for final approvals, channel selection, legal checks, and platform-policy compliance.

9. Safety and abuse prevention controls

We implement layered controls to reduce harmful use, including authentication/session controls, quota/metering enforcement, anti-abuse logic, integration permission scoping, and policy-based restrictions on disallowed behavior.

We may block, throttle, suspend, or disable specific AI features or workspaces when abuse, policy violations, or legal risk are detected.

10. Harmful content and prohibited AI usage

The platform may not be used to generate or automate content that is unlawful, fraudulent, deceptive, abusive, malicious, discriminatory, or otherwise prohibited by law or contract.

  • Scams, phishing, impersonation, and deceptive manipulation are prohibited.
  • Malware development/distribution and exploit facilitation are prohibited.
  • Generation of illegal content categories and use of unlawful targeting practices are prohibited.

11. Fairness, bias, and quality management

AI outputs can reflect model limitations and data bias. We continuously evaluate incidents, user feedback, and quality signals to improve prompt templates, validation logic, defaults, and safeguards.

Bias mitigation is an ongoing process, not a one-time guarantee. Customers should use domain-specific review where fairness risks are material.

12. Accuracy and non-reliance disclaimer

AI outputs may be incomplete, outdated, incorrect, or unsuitable for specific legal/commercial contexts. Outputs are provided "as assistance" and not as legal, financial, medical, tax, compliance, or professional advice.

Customers must independently verify facts, claims, references, and compliance-sensitive statements before use.

13. Transparency to end users and audiences

Depending on use case and applicable law/platform policy, customers may need to disclose AI-assisted creation or automation to their end users, audiences, clients, or regulators.

Compliance with disclosure requirements remains the Customer's responsibility.

14. Automation risk and operational safeguards

Automated workflows can amplify both value and error. Customers should configure schedules, permissions, destinations, and approval rules carefully, especially for high-volume publishing or customer-facing automations.

We provide operational controls, but customers are responsible for workflow intent and downstream impact.

15. Security and resilience of AI operations

AI feature operation follows platform security controls, including secure transport, access restrictions, auditability, and incident-response procedures. We may temporarily degrade or suspend AI features during incidents to preserve platform and user safety.

16. Monitoring, telemetry, and auditability

We collect operational telemetry and logs required to monitor reliability, enforce billing integrity, investigate abuse, and improve safety controls. Log retention follows platform retention rules and legal requirements.

17. Third-party integration and platform policy compliance

When AI output is routed to third-party platforms (for example social channels), Customer must comply with those platform terms, community rules, automation policies, and ad/publication standards.

Provider is not responsible for third-party platform enforcement actions, scope changes, or policy-based removals.

18. Intellectual property and generated output

Ownership, usage rights, and licensing of generated outputs are governed by the main agreement, provider terms, and applicable law. Customers remain responsible for infringement checks, rights clearance, and lawful downstream use.

19. High-risk and restricted decision domains

AI Smart is not intended as a sole decision engine for high-risk, legally significant, or rights-critical determinations (for example employment eligibility, credit decisions, healthcare treatment, legal entitlement adjudication).

Any use in sensitive domains requires independent human review, legal validation, and sector-specific safeguards.

20. User reporting and incident escalation

Users should report harmful, unsafe, or suspicious AI outputs through support channels. We triage and investigate reports and may apply remediation, including output filtering updates, feature restrictions, or account-level actions.

21. Continuous improvement commitments

Responsible AI controls are continuously refined through incident learnings, user feedback, quality monitoring, legal developments, and provider ecosystem changes.

We may update safety defaults, UX warnings, and governance rules as part of normal product evolution.

22. Customer best-practice obligations

  • Use role-based access and avoid shared credentials.
  • Review AI outputs before public release.
  • Avoid prompts containing unnecessary sensitive data.
  • Implement internal approval rules for high-impact automations.
  • Maintain compliance checks for advertising, consumer law, and platform rules.

23. Enforcement and account actions

Violations of Responsible AI requirements may result in warnings, feature restrictions, temporary suspension, or account/workspace termination according to the Terms of Service and applicable law.

24. Policy updates

We may update this Responsible AI Policy to reflect product capabilities, legal standards, risk controls, and provider changes. Material updates are communicated through legal pages and/or in-product notices.

25. Contact

Responsible AI and trust inquiries: legal@webase.global.

[...]