Pixelogic Media · Arabic Linguist Team

Arabic Linguist Hub

Your single source of truth for everything Arabic localization at Pixelogic.
Style rules, QC guidelines, terminology, dispute procedures, performance benchmarks, and industry resources — all in one place, built for you and maintained by your Training Manager.

⚙️ Manage My Workspace Links
📋
Style Guide — Client Call-Outs
Client-specific rules that deviate from the Pixelogic default — covering Netflix, Amazon, Disney, Crunchyroll, Lionsgate, Paramount, and Sony.
📰
Linguistic Newsletter
Monthly decisions, resolved disputes, and agreed linguistic practices. A living reference — not a rulebook.
📚
Linguistic Instructions
General guidance, terminology decisions, and language rules that apply across all clients and productions.
📊
KPI & Thresholds
Your 2026 performance baselines — quality scores, on-time delivery targets, and competency definitions with practical examples.
⚖️
Linguistic Dispute Process
A transparent, step-by-step process for raising and resolving linguistic disagreements — with the form ready to download.
🎬
Spot QC Calculator
Generate Netflix Spot QC windows automatically from your program start timecode, runtime, and frame rate.
🔗
Resources
Curated links for dictionaries, conjugation tools, platform style guides, Arabic linguistics research, and industry awareness.
This hub is maintained by your Training Manager and updated regularly. If you notice something missing, outdated, or unclear — raise it and email omar.aamer@pixelogicmedia.com. The goal is simple: take the guesswork out of the routine, so we can pour our energy into the craft.

Style Guide — Client Call-Outs

Select a client to see rules that differ from the Pixelogic default. Select Pixelogic Default to see all baseline rules.

Linguistic Newsletter

We are introducing a monthly linguistic newsletter shared at the beginning of each month. Its purpose is to document agreed linguistic practices, recurring discussions, and resolved points that emerge through day-to-day production work. This is not a static rulebook — it is a living reference that captures consensus reached through discussion and real examples. Click any month to expand it.

Linguistic Instructions

General guidance, terminology decisions, and language rules for the Arabic Linguist Team.

Linguistic Dispute Process

Formalized guidelines for raising, reviewing, and resolving linguistic disputes — for both linguists and QCers.

📥 Download blank form

Download the blank Excel file, fill it offline, and send by email.

Linguistic disagreement is expected and normal, particularly in Arabic. This process exists to handle disputes transparently, efficiently, and with clear decision ownership — without delaying delivery.

Workflow

The four phases

1. Raise the dispute (Linguist)
  1. If you agree with the content change but disagree with the error category, submit the assignment first, then raise a dispute. While this may not immediately remove the error weight, it ensures internal alignment and helps prevent similar issues in future titles.
  2. If you disagree with the content change, raise the dispute in a new email thread — subject: [Original Title_Episode Number_Linguistic Dispute].
  3. Copy the assigned QCer and the Training Manager from the start.
2. Initial review (QCer)
  1. Review the dispute and respond as swiftly as possible.
3. Outcome — two paths (QCer)
QCer agrees
  • QCer returns the dispute form with agreement.
  • Linguist overturns or adjusts the flag accordingly.
  • Training Manager aligns the outcome internally for future consistency, if needed.
QCer disagrees
  • QCer returns the form with disagreement.
  • Training Manager reviews the case privately with the QCer for further discussion.
  • A unified decision is reached.
  • Training Manager communicates the final decision to the Linguist.
  • Linguist proceeds accordingly.
4. Final decision (Training Manager)
  1. The decision communicated by the Training Manager is final for delivery purposes.
!
Critical policy

Now that overturned-flag weight is lifted from metrics and overturned flags are subject to review, any flag not raised through this process will count toward finalized linguist metrics. Objective flags should not be overturned without formally going through the dispute process.

Operational rules

Edge cases & deadlines

  • Further discussion may take place at a later stage for learning purposes — it should not block submission.
  • Linguists are encouraged to send disputes once the changelog is received from RR.
  • 6 PM deadline — disputes submitted after 6 PM are handled by the Training Manager directly; the QCer is considered unavailable.
  • Same-day urgency — if delivery is urgent and standard process is not feasible, create a Teams group chat with Territory, Training Manager, and the QCer.

Dispute handling rubric

Role | Step | Action | What & why
Translator (Disputing)
  1. Identify decision: Clearly state whether you believe the QC flag is valid or invalid. Getting this wrong undermines the entire dispute.
  2. Provide evidence: Provide rationale with linguistic, stylistic, or contextual evidence. Reference style guides or client instructions where relevant; references make disputes and replies stronger.
  3. Write professionally: Write your dispute in a clear, professional, and respectful tone. Avoid emotional or vague language; both sides must avoid casual or argumentative phrasing.
  4. Ensure consistency: Ensure your reasoning is consistent and persuasive, anticipating possible counterarguments. Repeated rubric application builds fairness over time.
QCer (Replying)
  1. Identify decision: Confirm the translator's claim as valid or invalid based on evidence.
  2. Provide evidence: If invalid, provide a clear rationale with supporting evidence (guidelines, rules, examples).
  3. Write professionally: Write the reply in a professional, neutral, and concise manner.
  4. Ensure consistency: Ensure reasoning is consistent with previous QC decisions and persuasive enough to withstand scrutiny.
Raising a dispute may not always be feasible due to timing or workload — that's understood. The purpose of this process is not to add friction, but to improve alignment, mutual understanding, and clarity of judgment. A dispute should be a constructive linguistic conversation, not a point of conflict.

KPI & Baseline Thresholds

Internal performance metrics for the Arabic Linguist Team — 2026 baselines.

Baseline KPI Metrics 2026

Performance Level | On-time Delivery (OTD) | Quality % (Creation) | Quality % (QC)
Below Expectations | Below 98% | Below 92% | Below 98%
Meets Expectations | 98.1% – 99.9% | 92.1% – 96.9% | 98.1% – 99.4%
Exceeds Expectations | 100% | 97% and above | 99.5% – 100%
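As a worked example of reading the table, here is a minimal sketch that maps a creation-quality percentage onto its 2026 band. The band edges follow the table; how a score landing exactly in a gap between bands (e.g. 92.0%) should be treated is not specified above, so this sketch folds such scores into the lower band as an assumption.

```javascript
// Illustrative sketch only: maps a Quality % (Creation) value to its band.
// Scores exactly on a gap boundary (e.g. 92.0) fall into the lower band here,
// which is an assumption, not a documented rule.
function creationQualityLevel(pct) {
  if (pct >= 97) return "Exceeds Expectations";   // 97% and above
  if (pct > 92) return "Meets Expectations";      // 92.1% – 96.9%
  return "Below Expectations";                    // below 92%
}
```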

Score Legend

Score | Level
1 | Beginner
2 | Intermediate
3 | Advanced
4 | Expert

Competency Bank — Definitions & Examples

Competency | How-To | Examples
Stress Management: Staying calm, composed, and professional under pressure, especially when juggling tight deadlines, overlapping projects, or client rejections.
  • Delivering a file on time even when several projects overlap.
  • Receiving multiple rejections or revisions without defensive reactions.
  • Prioritizing tasks logically instead of panicking.
Focus on Quality: Ensuring accuracy, natural flow, and compliance with style guides, not just meeting the quota.
  • Double-checking terminology, punctuation, and project instructions before delivery.
  • Refusing to rush through when you know quality might drop.
  • Going the extra mile to make sure everything is in full compliance with all client specifications, glossaries, and style guides.
Customer Orientation: Understanding what the client and viewer need, and tailoring output accordingly.
  • Adjusting tone and phrasing to fit the target audience.
  • Accepting valid client notes gracefully and applying them consistently.
  • Asking clarifying questions to avoid assumptions.
Self Development: Continuously improving language, tools, and industry awareness.
  • Taking initiative to learn new client trends.
  • Reviewing feedback reports and implementing them in future files.
  • Joining internal sessions or asking for feedback proactively.
Integrity: Being honest, transparent, and ethical in all work aspects.
  • Not sharing files or glossaries externally.
  • Admitting errors and correcting them promptly.
Accountability: Taking ownership of your work, decisions, and outcomes.
  • Acknowledging a mistake instead of shifting blame.
  • Owning your assigned tasks through to the end, as the owner of your product.
Cooperation: Working effectively with peers, QCers, and managers to achieve shared goals.
  • Communicating politely and constructively in flag comments.
  • Helping teammates when they face issues.
  • Respecting cross-feedback.
Initiative: Acting without waiting for instructions when you can add value or solve a problem.
  • Flagging inconsistencies or client guide updates proactively.
  • Suggesting improvements in workflow or templates.
  • Volunteering to test new features or handle complex content.
Result Orientation: Aiming for both quality and delivery efficiency, focusing on outcomes, not excuses.
  • Meeting deadlines while maintaining the quality threshold.
  • Tracking your KPIs and aiming higher each month.
  • Focusing on solutions instead of delays.
Adaptability: Staying flexible with changing client specs, tools, or team setups.
  • Adjusting smoothly to new platforms and updates.
  • Remaining composed when workflows or assignments change suddenly.
Know-how: Demonstrating solid command of tools, techniques, and linguistic judgment relevant to your role.
  • Using shortcuts, templates, and tags effectively.
  • Seeking solutions beyond what is provided to you to resolve irregular or unexpected situations.

Netflix Spot QC Calculator

Calculate Netflix minimum and Pixelogic extended spot timecodes from program start TC, TRT, and frame rate.
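The window placement itself is handled by the tool, but the arithmetic underneath is plain frame math. A minimal sketch, assuming non-drop-frame timecode and integer frame rates (e.g. 24, 25); function names are illustrative, not the tool's actual API:

```javascript
// Convert HH:MM:SS:FF to an absolute frame count (non-drop-frame assumed).
function tcToFrames(tc, fps) {
  const m = /^(\d{2}):(\d{2}):(\d{2}):(\d{2})$/.exec(tc);
  if (!m) throw new Error("Invalid timecode. Use HH:MM:SS:FF (e.g. 01:00:00:00).");
  const [, hh, mm, ss, ff] = m.map(Number);
  if (ff >= fps) throw new Error("FF must be below the frame rate.");
  return ((hh * 60 + mm) * 60 + ss) * fps + ff;
}

// Convert an absolute frame count back to HH:MM:SS:FF.
function framesToTc(frames, fps) {
  const pad = (n) => String(n).padStart(2, "0");
  const ff = frames % fps;
  const totalSec = Math.floor(frames / fps);
  return `${pad(Math.floor(totalSec / 3600))}:${pad(Math.floor(totalSec / 60) % 60)}:${pad(totalSec % 60)}:${pad(ff)}`;
}
```

A spot window is then just the start frame count plus the window length in frames, converted back with `framesToTc`.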

Invalid timecode. Use HH:MM:SS:FF (e.g. 01:00:00:00).

QCer Quality Monitor

Internal tools for the QC team — spot generators, checklists, and monthly QA workflows.

Internal Linguistic QA Framework
Arabic Localization · Pixelogic Media

This framework defines how the Arabic Linguistic Team conducts internal quality assurance on its QCers' work each month. Rather than manually and subjectively selecting files for review, this system uses a structured, data-driven approach to identify the files most likely to reveal quality risks — ensuring fair, consistent, and meaningful QA across the team.

The framework operates on a three-month cycle:

  • April: 1 Full QA + 30% Spot QA per QCer
  • May: 1 Full QA + 20% Spot QA per QCer
  • June: 1 Full QA + 10% Spot QA per QCer

Full QA means a complete linear quality check of the entire file — every subtitle reviewed against the source. Spot QA means a targeted review of selected time segments within the file, focusing on areas most likely to contain errors or missed flags.

Files are selected using a risk matrix that scores each file across four factors: PXL-SQM Score, Difficulty, Dialogue Density, and Content Type. The goal is to prioritize files where a QCer is statistically most likely to have bypassed errors or flagged inaccurately.
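To illustrate the risk-matrix idea, here is a hedged sketch. The real factor weights and scales are internal to the framework; the equal weighting, the 1–3 scales, the SQM and density cut-offs, and the content-type mapping below are all illustrative assumptions.

```javascript
// Illustrative risk score: higher = more likely to reveal bypassed errors.
// All thresholds and weights here are assumptions, not the real matrix.
const DIFFICULTY = { Low: 1, Medium: 2, High: 3 };

function riskScore(file) {
  // Lower PXL-SQM score suggests higher risk, so invert onto a 1–3 scale (assumed cut-offs).
  const sqmRisk = file.sqmScore >= 99 ? 1 : file.sqmScore >= 97 ? 2 : 3;
  // Denser dialogue = more subtitles per minute = more chances to miss a flag (assumed cut-offs).
  const densityRisk = file.dialogueDensity >= 12 ? 3 : file.dialogueDensity >= 8 ? 2 : 1;
  // Hypothetical content-type mapping.
  const typeRisk = file.contentType === "Feature" ? 2 : 1;
  return sqmRisk + DIFFICULTY[file.difficulty] + densityRisk + typeRisk;
}
```

Files would then be ranked per QCer by descending score, with the top file taken for Full QA and the required percentage for Spot QA.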

1. 📤 Export Ce5 & Ce6 Quality Metric Files
At the end of each month, export the two quality metric reports from Ce5 and Ce6 — the two QC platforms used by the team. These files contain all QC activity for the month, including file details, QCer names, scores, and runtime data. No manual preparation is needed at this stage.
2. 🔗 Upload & Unify the Export Files
Use the File Unifier tool below to upload both Ce5 and Ce6 exports. The tool automatically merges them into a single unified file: it filters to in-house QCers only, calculates Dialogue Density (subtitles per minute) for each title, and formats the output consistently. Download the unified file when ready.
3. ✏️ Manually Add Difficulty & Exclude List
Open the downloaded unified file and fill in two columns manually before proceeding:
  • Difficulty — Rate each file as Low, Medium, or High based on linguistic complexity:
    • Low: straightforward dialogue, minimal technical or cultural content, familiar genre (e.g. simple reality TV, children's content)
    • Medium: moderate complexity, some cultural references or technical terminology, standard drama or comedy
    • High: dense dialogue, heavy cultural references, specialized terminology, complex narrative structure (e.g. legal/medical dramas, period pieces, high-density episodic productions)
  • Exclude List — Type Yes for any file to exclude from QA selection entirely (known technical issues, files reviewed outside this framework, or QCer acting in a different capacity). Leave blank if eligible.
4. 🎯 Upload Filled File & Get QA Selection
Use the File Selector tool below to upload the completed unified file. The tool will apply the risk matrix to score every eligible file, then automatically select one file per QCer for Full QA and the required percentage of files for Spot QA. The output is a formatted Excel report ready to use as your monthly QA plan.
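The Dialogue Density figure computed in step 2 is simply subtitle events per minute of runtime. A one-line sketch (the field names are illustrative; the unifier tool's actual columns may differ):

```javascript
// Dialogue Density = subtitle events per minute of runtime.
function dialogueDensity(subtitleCount, runtimeMinutes) {
  if (runtimeMinutes <= 0) throw new Error("Runtime must be positive.");
  return subtitleCount / runtimeMinutes;
}
```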

Once you have your QA Selection file, use this tool to generate randomized spot check windows for each Spot QA file. Enter the program start timecode and TRT, select the frame rate, and hit Generate Spots — the tool will produce a different set of randomized QA windows each time, based on the program runtime and dialogue structure. Re-run as many times as needed until you have a distribution that works for your session.
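The randomization described above can be sketched as follows. The window count and length are assumptions chosen to total 15 minutes of coverage, and this sketch ignores dialogue structure, which the real tool also weighs; splitting the runtime into equal segments and placing one window per segment guarantees the windows never overlap.

```javascript
// Hedged sketch: non-overlapping random spot windows (in minutes) covering
// spotCount * spotLenMin total. Defaults are assumptions, not the tool's rules.
function generateSpots(runtimeMin, spotCount = 5, spotLenMin = 3) {
  if (runtimeMin < 16) {
    throw new Error("Runtime under 16 minutes: run Full QA instead of spots.");
  }
  const segment = runtimeMin / spotCount;
  return Array.from({ length: spotCount }, (_, i) => {
    // Place the window randomly inside its own segment so windows cannot overlap.
    const start = i * segment + Math.random() * (segment - spotLenMin);
    return { tcIn: start, tcOut: start + spotLenMin };
  });
}
```

Re-running yields a fresh distribution each time, matching the "Generate Spots" behavior described above.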

The runtime of this file is under 16 minutes. It should undergo Full QA — no spot check needed.
Program Start
TRT
Runtime
Frame Rate
Spots
Spot QA Generator — 15 min total coverage
Spot # | TC In | TC Out | Duration
Spots are randomized. Re-click Generate Spots for a new distribution.
📊
Upload CE5 File
.xlsx — "Query result" sheet
📊
Upload CE6 File
.xlsx — "Query result" sheet
Summary
Total from CE5
Total from CE6
After Filter
Breakdown by QCer
🎯
Upload Unified QA File
.xlsx — output from File Unifier

Resources

Curated reference links for the Arabic Linguist Team.

My Dashboard

Your personal QC performance dashboard. Sign in with your QCer code to view your metrics.

Admin Panel

Manage newsletter entries, linguistic instructions, and resources. After adding content, copy the generated data block and paste it into the HTML file, then re-upload to Netlify.

Write your newsletter entry using the toolbar below. Use H for section headings, 1. List for numbered points, and 📊 Insert Arabic Table to add the standard Linguistic Dispute table. Hit 👁 Preview to check how it looks before saving.
Preview
Updated newsletterData — copy & paste into the HTML file:
Write a new linguistic instruction entry using the same rich editor. Use headings for sections, numbered lists for step-by-step rules, and the Arabic table if needed.
Preview
Updated lingInstrData — copy & paste into the HTML file:
Add a new resource link. After submitting, copy the generated code and paste it into the resourcesData array in the HTML file.
Updated resourcesData — copy & paste into the HTML file:
To change passwords, open the HTML file in a text editor, find the TEAM_PASSWORD and ADMIN_PASSWORD constants near the top of the <script> section, update the values, save, and re-upload to Netlify.

Current team password:
Current admin password:
Diagnostic panel for the QA Bridge (Google Apps Script connection). Use the buttons below to verify the bridge is reachable and returning correct data.

Tracker

Daily work log. Add tasks and hours per day. Monthly totals are used for payroll reporting.