Safety Standards for Children
Last Updated: August 11, 2025
USEE Limited ("USEE", "we", "us", or "our") places great importance on protecting children's safety and is committed to preventing minors from accessing inappropriate content and services through our platform. This "Safety Standards for Children" document outlines the specific measures we have developed and implemented to safeguard children, in compliance with relevant laws, regulations, and industry standards, ensuring our services align with requirements for protecting minors' physical and mental health.
By using our Services, you acknowledge and agree to the implementation of the child safety protection measures specified in this document.
1. Age Verification and Access Restriction
Our Services are clearly labeled as "18+" on Google Play. To strictly prevent children under the age of 13 from using them, we have established a multi-layered age verification mechanism during the user registration process:
- Mandatory Date-of-Birth Collection: When users register for an account, they are required to provide their real date of birth. We do not allow users to skip this step or submit false age information.
- Real-Time Age Judgment and Blocking: Our system immediately calculates the user's age based on the provided date of birth (see the illustrative sketch after this list). If the user is determined to be under 13 years old, a clear prompt stating "Registration is prohibited for children under 13" will appear on the interface, and the user will be fully blocked from proceeding with the registration process; no further steps (such as setting a password or completing a profile) will be accessible.
- Post-Registration Age Re-Check: For existing users, if we receive reports or detect indicators that a user may be a minor (e.g., inconsistent age-related information in their profile), we will re-verify the user's age by requesting additional valid identification documents (such as a government-issued ID card). If verification confirms the user is under 13, we will promptly terminate the user's account and delete relevant personal data in accordance with the "Children's Privacy" provisions of our Privacy Policy.
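For illustration, the under-13 registration block described above reduces to a simple date comparison. The sketch below shows one way such a check could be implemented; the constant, the function names, and the flow are illustrative assumptions, not our production code:

```python
from datetime import date

MIN_AGE = 13  # registration cutoff described in this section


def age_on(birth_date: date, today: date) -> int:
    """Return the age in whole years on the given date."""
    years = today.year - birth_date.year
    # Subtract one year if this year's birthday has not yet occurred.
    if (today.month, today.day) < (birth_date.month, birth_date.day):
        years -= 1
    return years


def may_register(birth_date: date) -> bool:
    """Block registration for users under MIN_AGE; the real flow would
    surface the prompt and disable all further registration steps."""
    if age_on(birth_date, date.today()) < MIN_AGE:
        print("Registration is prohibited for children under 13")
        return False
    return True
```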
2. Multi-Layered Content Moderation System for Preventing Child-Related Inappropriate Content
To eliminate inappropriate material and prevent child-related content from appearing on our platform, especially in video content, we have built a comprehensive, multi-layered content moderation system that combines automated technology with manual supervision:
2.1 Automated Real-Time Scanning for All Content Types
Our system proactively scans all user-generated content in real time, covering not only common formats such as comments, profile photos, and uploaded images but also video footage (including live-streaming content and pre-recorded video uploads). The automated scanning system includes the following specific functions for child safety protection:
- In addition to filtering out content involving violence, pornography, or hate speech (which violates basic platform rules), the system is specifically trained on extensive child-related image and video datasets to accurately detect and flag video clips containing child-related visuals. This includes, but is not limited to: videos with minors appearing in frame, videos featuring children in inappropriate contexts (e.g., children in adult-themed scenes), and videos that may exploit or harm children.
- For flagged video content, the system immediately marks it as "high-risk content" and temporarily blocks its display to other users to prevent further spread before manual review is completed.
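For illustration, the flag-and-block step described above can be sketched as a small data structure plus one rule: content whose automated risk score crosses a threshold is hidden from other users pending manual review. The threshold, class, and status names below are illustrative assumptions, not our production implementation:

```python
from dataclasses import dataclass, field

RISK_THRESHOLD = 0.8  # illustrative; real thresholds are tuned per model


@dataclass
class ContentItem:
    content_id: str
    kind: str                  # e.g. "comment", "image", "video", "live"
    status: str = "visible"    # "visible" | "blocked_pending_review" | "removed"
    flags: list = field(default_factory=list)


def apply_scan_result(item: ContentItem, child_risk_score: float) -> None:
    """Mark high-risk content and temporarily block its display to
    other users until manual review is completed."""
    if child_risk_score >= RISK_THRESHOLD:
        item.flags.append("high-risk: possible child-related visuals")
        item.status = "blocked_pending_review"  # a temporary block, not removal
```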
2.2 Dedicated Supervision and Review Protocol for Flagged Video Content
Once the automated system identifies child-related visuals in videos, it triggers an urgent alert in real time to our platform’s dedicated child safety supervision team (composed of professionally trained staff with knowledge of relevant laws and child protection). The supervision team strictly follows this review protocol:
- Timely Review Requirement: The team must initiate a comprehensive review of the flagged video content within 4 hours of receiving the alert. The review scope includes:
  - Confirming whether the video contains child-related visuals and assessing the age range of any children in the video.
  - Evaluating the context of the child visuals (e.g., whether the content is suitable for our 18+ user base, and whether there is any behavior that may harm children's physical and mental health, such as inducement, abuse, or inappropriate exposure).
  - Cross-checking the video uploader's age information (from registration records) to confirm whether the uploader is a minor or has violated our age access policies.
- Post-Review Handling Measures:
  - If the review confirms that the video contains inappropriate child-related content (e.g., exploiting children or exposing children to adult themes) or that the uploader is a minor, the video will be permanently removed from the platform immediately and the uploader's account will be temporarily suspended.
  - Our team will further investigate the suspended account (e.g., checking login history, communication records, and other uploaded content) to determine whether it is linked to multiple underage users or has a history of violating child safety rules. If serious violations are found, the account will be permanently suspended, and relevant information will be reported to law enforcement authorities where necessary.
  - If the review confirms that the video does not involve inappropriate child-related content (e.g., the person in the video is an adult who appears young, or the content shows a normal family scene with children and no violations), the temporary block on the video will be lifted and the video restored to normal display.
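For illustration, the handling measures above follow mechanically from the reviewer's findings, so the protocol can be condensed into a short decision function. The field names, action labels, and deadline constant below are illustrative assumptions, not our production code:

```python
from datetime import datetime, timedelta

REVIEW_DEADLINE = timedelta(hours=4)  # review must begin within this window


def review_started_on_time(alert_at: datetime, review_at: datetime) -> bool:
    """Check the 4-hour review-initiation requirement."""
    return review_at - alert_at <= REVIEW_DEADLINE


def review_decision(contains_child_visuals: bool,
                    content_is_inappropriate: bool,
                    uploader_is_minor: bool) -> str:
    """Map reviewer findings to the handling measures in section 2.2."""
    if (contains_child_visuals and content_is_inappropriate) or uploader_is_minor:
        # Permanent removal plus temporary suspension of the uploader,
        # followed by the deeper account investigation described above.
        return "remove_video_and_suspend_account"
    # No violation found: lift the temporary block.
    return "restore_video"
```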
2.3 Supplementary Manual Checks and Reporting Mechanisms
To complement the automated scanning system and ensure no child-related safety risks are overlooked, we have also established the following supplementary measures:
- Regular Manual Reviews of High-Risk Content: Our supervision team conducts regular manual spot checks on high-risk content categories, with particular focus on user-uploaded videos and live-streaming content. These checks occur at least once every 24 hours, and each check samples at least 10% of newly added content in the high-risk categories.
- Dedicated Reporting Channel for Child Safety Issues: We have set up a prominent, easily accessible reporting channel within the app (both on the main interface and video playback interface) specifically for reporting underage users or child-related inappropriate content. Users (including streamers and audiences) can submit reports with one click, and each report is automatically marked as "high-priority" in our system.
- Timely Handling of Reports: Our supervision team is required to review all child safety-related reports within 24 hours of receipt. For reports verified as valid (e.g., confirming the existence of underage users or inappropriate child-related content), we will immediately take enforcement measures (such as account suspension or content removal); for invalid reports, we will document the reason for rejection and retain the record for future reference.
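For illustration, the effect of auto-marking child safety reports as "high-priority" is that they are always dequeued ahead of ordinary reports, with submission time breaking ties. The sketch below uses Python's heapq for that ordering; the priority values, field names, and deadline constant are illustrative assumptions:

```python
import heapq
from datetime import datetime, timedelta

REPORT_SLA = timedelta(hours=24)  # review deadline for child safety reports


class ReportQueue:
    """Order reports so child safety reports are reviewed first, then
    by submission time within each priority level."""

    def __init__(self) -> None:
        self._heap: list = []

    def submit(self, report_id: str, submitted_at: datetime,
               child_safety: bool) -> None:
        priority = 0 if child_safety else 1  # 0 = high-priority
        heapq.heappush(self._heap, (priority, submitted_at, report_id))

    def next_report(self):
        """Pop the most urgent pending report, or None if the queue is empty."""
        return heapq.heappop(self._heap) if self._heap else None


def overdue(submitted_at: datetime, now: datetime) -> bool:
    """True if a child safety report has breached the 24-hour deadline."""
    return now - submitted_at > REPORT_SLA
```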
3. Advancement of Age Verification Technology
To further enhance the accuracy and effectiveness of age verification and child safety protection, we are continuously advancing our age-verification technology through big data training and model optimization:
- Optimization of Facial Recognition Models: We are developing and optimizing facial recognition models specifically for age estimation. These models are trained using large, diverse datasets of facial images (covering different ages, ethnicities, and genders) to enable accurate age estimation based on facial features (e.g., from profile photos or video frames).
- Future Application of Real-Time Age Assessment: Once this technology is mature and deployed, it will be integrated into our existing age verification system to enable real-time, accurate age assessment of users:
- For new users during registration: In addition to date-of-birth verification, the system will request the user to take a real-time facial photo (to prevent the use of fake photos), and the facial recognition model will estimate the user’s age. If there is a significant discrepancy between the estimated age and the provided date of birth (suggesting potential false age information), the user will be required to provide additional identification documents to verify their age.
- For existing users: If the system detects a mismatch between the user’s registered age and their facial characteristics (e.g., the facial recognition model estimates the user to be under 13, but the registered age is 18 or above), the user’s account will be temporarily restricted (e.g., disabling video upload and live-streaming functions) until the user provides valid identification documents to re-verify their identity and age. If re-verification confirms the user is under 13, the account will be terminated immediately.
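For illustration, both checks above rest on the same comparison between the facial model's age estimate and the age derived from the registered date of birth. The tolerance value and action labels in the sketch below are illustrative assumptions; in practice the tolerance would depend on the model's measured error:

```python
MIN_AGE = 13
AGE_TOLERANCE = 10  # years; illustrative, tuned to the model's error in practice


def registration_check(declared_age: int, estimated_age: int) -> str:
    """Escalate registrations where the facial estimate contradicts the
    declared date of birth."""
    if abs(declared_age - estimated_age) > AGE_TOLERANCE:
        return "request_identification_documents"
    return "proceed"


def existing_account_check(registered_age: int, estimated_age: int) -> str:
    """Restrict accounts whose facial estimate suggests the user may be a
    child despite an adult registered age."""
    if estimated_age < MIN_AGE and registered_age >= 18:
        # Disable video upload and live-streaming until identity and age
        # are re-verified with valid identification documents.
        return "restrict_account_pending_reverification"
    return "no_action"
```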
4. Compliance with Children's Privacy Protection
All child safety protection measures outlined in this document are implemented in conjunction with the "Children's Privacy" provisions of our Privacy Policy. We strictly comply with relevant laws and regulations (such as the Children's Online Privacy Protection Act) and will not knowingly collect, use, or disclose personal information from children under 13. If we become aware that we have collected personal information from a child under 13 without parental consent (e.g., through age verification or user reports), we will take steps to remove that information from our servers within 72 hours and terminate the child's account.
5. Changes to This "Safety Standards for Children"
We may update this "Safety Standards for Children" from time to time to adapt to changes in laws, regulations, industry standards, and technological developments. When we make updates, we will notify you of the changes by:
- Publishing the new version of "Safety Standards for Children" on our official website (https://www.ole.chat/) with an updated "Last Updated" date at the top of the document.
- Sending a push notification to all users through the app (at least 7 days before the new version takes effect) to remind users to review the updated content.
We encourage you to review this "Safety Standards for Children" periodically for any changes. Your continued use of our Services after the new version of this document takes effect will constitute your acceptance of the updated content.
6. Contact Us
If you have any questions, concerns, or suggestions about this "Safety Standards for Children" or our child safety protection practices, please contact us through the following channels:
- Company Address: USEE Limited, Room 602, 6/F, Kai Yue Commercial Building, No. 2C, Argyle Street, Mongkok, Kowloon, Hong Kong
- Email: [email protected]
- Response Time: We will respond to your inquiries within 3 business days of receipt and provide a detailed explanation of the issues raised.