
Education Quarterly Reviews

ISSN 2621-5799


Published: 13 February 2026

Safe, Effective Use of AI‑Powered Instruments in Optometry Education (Philippines, 2025): A Policy/Practice Analysis Aligned with Philippine Privacy and Medical Device Software Regulation

Sherwin William B. Suarez

Centro Escolar University


DOI: 10.31014/aior.1993.09.01.621

Pages: 83-98

Keywords: Artificial Intelligence, Medical Device Software, Optometry Education, Philippines, Post‑Market Surveillance, Governance, OSCE, KPI Thresholds

Abstract

Background: Artificial intelligence (AI)-powered instruments are entering optometry teaching clinics faster than local governance frameworks can keep up. In the Philippines, recent issuances such as the National Privacy Commission (NPC) Advisory 2024-04 and the draft Food and Drug Administration (FDA) circular on medical device software (MDSW) create new obligations for educators who deploy AI tools in student-facing clinical settings. However, there is little guidance on how to translate these regulatory signals into concrete procurement terms, classroom controls, and assessment frameworks. Methods: We conducted a targeted policy synthesis (1 January–26 October 2025, Asia/Manila) focused on (1) Philippine primary instruments (NPC Advisory 2024-04, draft FDA-PH MDSW circular, DTI NAISR 2.0, NEDA AI policy note); (2) professional guidance from the Royal College of Ophthalmologists and the College of Optometrists; (3) global AI governance frameworks (WHO guidance on large multimodal models, FUTURE-AI consensus); and (4) peer-reviewed Philippine evidence on diabetic retinopathy (DR) AI and tele-ophthalmology. We used site-restricted searches for government and professional domains, PubMed/Scopus database searches, two-stage screening, and a simple 0–2 quality appraisal rubric. We mapped legal and regulatory requirements (lawful basis, DPIA, post-market monitoring, change control) to operational classroom controls, procurement clauses, and key performance indicators (KPIs) for termly validation. Findings: The synthesis yielded a hierarchy of obligations with Philippine law and regulation at the apex, supplemented by professional and global frameworks. We developed an educator-led governance model comprising: (1) contract language for AI-powered instruments; (2) a KPI set covering safety, performance stability, subgroup fairness, human-in-the-loop overrides, and data governance; and (3) OSCE-style assessment stations for AI literacy and safe use. We illustrate application through a worked change-control case for an updated AI-assisted retinal imaging device. Conclusions: AI-enabled instruments can be safely integrated into optometry education when educators assert explicit control over procurement, validation, and ongoing monitoring. This framework offers a practical, regulator-aligned blueprint for Philippine optometry schools and may be adapted to other health-profession programs facing similar pressures to adopt AI tools.

1. Introduction

 

At the Centro Escolar University (CEU) School of Optometry, the integration of artificial intelligence (AI)–enabled instruments has moved from concept to clinic. Over the past academic terms, our teaching clinics began using AI‑assisted tools in routine eye tests and in screening for ocular abnormalities. As a faculty member and clinical instructor, I have seen—at the level of the exam lane and OSCE station—how these systems can accelerate workflows, standardize image quality, and surface decision cues that would otherwise demand specialist time. When used with appropriate governance, AI does not replace clinical judgment; it sharpens it.

 

This policy‑practice paper is therefore written from the vantage point of an educator responsible for patient safety, learner competency, and service efficiency. In my teaching practice, AI‑generated outputs—whether an automated image‑quality flag on a fundus photograph or a suggested classification on an OCT scan—have been most valuable when they produce results that are (1) accurate, (2) fast, and (3) reliable across diverse patients and devices. The promise is clear: shorter capture times, fewer repeat tests, earlier detection, and richer feedback for students. The responsibility is equally clear: we must evidence these benefits locally, monitor for drift and subgroup gaps, and keep faculty in control of clinical decisions.

 

The Philippine regulatory environment is evolving quickly—anchored by the National Privacy Commission’s Advisory 2024‑04 on generative AI and the Food and Drug Administration–Philippines’ draft circular on medical device software—while global health guidance (e.g., WHO on large multimodal models) and professional bodies provide additional guardrails. Against this backdrop, optometry schools need operational guidance that translates policy into classroom and clinic controls. What follows is a targeted policy synthesis and implementation framework tailored to CEU’s teaching context but generalizable to similar programs, emphasizing lawful deployment, performance validation, equity, and change control.

 

Specifically, this article contributes: (a) a transparent, reproducible methodology prioritizing Philippine primary sources; (b) a comparative regulatory snapshot (PH vs regional/global anchors) to justify procurement and update requirements; (c) an expanded evidence base across AI‑instrument classes with subgroup metrics for equity checks; and (d) an evaluation framework with key performance indicators (KPIs), thresholds, and OSCE rubrics to embed AI‑literacy behaviors into training. The goal is straightforward: to help faculty deliver patient‑safe, educator‑led AI adoption that measurably improves learning outcomes and clinic performance in Philippine optometry education.

 

2. Methodology

 

Design & window: targeted policy synthesis (1 Jan–26 Oct 2025, Asia/Manila).

Sources: (a) Philippine primary documents—NPC Advisory 2024‑04; FDA‑PH draft MDSW circular; DTI NAISR 2.0; NEDA AI policy note; (b) professional guidance—Royal College of Ophthalmologists; College of Optometrists; (c) frameworks—WHO guidance on large multimodal models; FUTURE‑AI consensus; (d) peer‑reviewed Philippine evidence on diabetic retinopathy (DR) AI and tele‑ophthalmology.

Search & selection: site‑restricted queries (e.g., site:privacy.gov.ph, site:fda.gov.ph) and PubMed/Scopus keywords (e.g., “diabetic retinopathy AND Philippines AND artificial intelligence”). Inclusion: official PH documents and peer‑reviewed items on AI in health/ophthalmology/education. Exclusion: opinion pieces without citations; non‑PH press.

Extraction & synthesis: we abstracted legal/regulatory requirements (lawful basis, DPIA, post‑market surveillance, change control) and mapped them to operational classroom controls and procurement clauses; conflicts were resolved in favor of Philippine law/regulation.

Limitations: not a systematic review; the FDA‑PH circular is still in draft and evolving; mitigated by prioritizing primary documents and date‑stamping searches.


 

3. Eligibility criteria

 

Inclusion: (a) primary Philippine legal/regulatory/government artifacts (advisory, circular/guideline, strategy note) on AI/automated decision systems, medical device software, health data, or educational/clinical use; (b) professional guidance from recognized authorities (RCOphth; College of Optometrists); (c) peer‑reviewed empirical studies conducted in the Philippines (preferred) or ASEAN when PH data absent; (d) main window 1 Jan 2025–26 Oct 2025 with seminal pre‑2025 documents retained if in force.

Exclusion: non‑documented opinions, news/blog posts, unreferenced commentary; vendor marketing without independent evaluation; non‑PH government documents unless used for explicitly labeled comparative policy benchmarking.

 

4. Screening & selection

 

Level 1 (titles/headers): one reviewer screened all hits. Level 2 (full text): two reviewers assessed eligibility; disagreements resolved by consensus, applying a “Philippine law/regulator primacy” rule. A selection log captured full‑text exclusion reasons.

 

5. Data extraction

 

Regulatory/policy: authority, legal force (law/advisory/draft), scope, obligations (lawful basis, DPIA, consent, post‑market surveillance, change control), enforcement/remedy, and currency.

 

Empirical studies: setting, instrument/task, dataset provenance (local vs external), reference standard, sample size, primary outcomes with CIs, subgroup performance, regulatory status, post‑deployment monitoring.

Operational mapping: each requirement was mapped to classroom/clinic controls and procurement clauses (configuration logging, override audit, acceptance testing, termly validation).

 

Table 1: Quality appraisal rubric (0–2 scale: No/Partial/Yes; critical items ★)

Regulatory/government: Authority & legal force; Currency (in force; draft status disclosed); Clarity & operational specificity; Scope alignment (health/device/education); Enforcement/oversight.

Professional guidance: Issuing body credentials; Evidence basis & citations; Applicability to PH context; Implementation detail (workflows/audit).

Empirical studies: Risk of bias (QUADAS‑2 adapted); Dataset provenance & spectrum; Performance reporting (AUC/Sn/Sp with CIs); Deployment realism (prospective/quality controls); Post‑market/monitoring (drift/incidents).

Global frameworks: Alignment with safety/ethics pillars; Translational guidance; Consistency with PH obligations.

Each item receives a 0–2 score (No/Partial/Yes); critical items are flagged with ★.

6. Synthesis approach

 

Directed content analysis: obligations/safeguards from primary sources were coded to a taxonomy (lawful basis, DPIA, consent/assent, validation, update control, post‑market surveillance, logging/auditability, RBAC, pedagogy/assessment). Codes were mapped to operational controls and procurement clauses. Conflicts favored Philippine requirements; gaps were bridged with WHO LMM and FUTURE‑AI principles as international best practice.


7. Limitations


This study has several limitations. First, it is a targeted policy synthesis rather than a full systematic review, and we may have missed relevant international or regional documents outside our predefined domains. Second, the KPI thresholds and governance processes, while informed by existing evidence and local operational experience, are still partly normative and require further empirical validation. Third, the worked change-control case is drawn from a single teaching clinic context and may not fully reflect the constraints of under-resourced or differently structured institutions. Finally, Philippine regulatory instruments cited here, particularly the draft FDA-PH circular on medical device software, are subject to change; institutions adopting this framework will need to monitor regulatory updates and adjust accordingly.

 

8. Conclusions


AI-powered instruments are no longer optional novelties but emerging infrastructure in optometry education and practice. In the Philippine context, educators cannot outsource governance of these tools to vendors or generic institutional policies alone. By aligning with national law and regulation, professional guidance, and global frameworks, and by embedding clear KPIs, change-control processes, and OSCE-based assessment into routine operations, optometry teaching clinics can integrate AI in ways that are safe, transparent, and educationally meaningful. The framework presented here offers a practical starting point that can be adapted, stress-tested, and progressively strengthened as the regulatory and technological landscape evolves.


9. Findings: Issue Overview

 

Governance in teaching clinics is under‑specified: classroom use requires explicit role definitions (AI assistive only; faculty accountable), AI‑specific DPIA, and logging (NPC, 2024). Regulatory expectations for MDSW are evolving, creating procurement risk if tools are not regulatory‑ready (FDA‑PH, 2025). Philippine studies demonstrate feasibility of handheld/point‑of‑care imaging and tele‑ophthalmology but underscore the need for local validation and performance monitoring (Salongcay et al., 2024; Arcena et al., 2024; Azarcon et al., 2021; Daza et al., 2022).

 

10. Policy Recommendations

 

10.1. Governance and Accountability

 

1) Faculty‑in‑control rule: AI outputs (quality flags, structured observation prompts) are suggestive only; supervising faculty make and communicate all clinical judgments. Document faculty sign‑off in the learning record (RCOphth, 2024; College of Optometrists, 2025).

2) DPIA + transparency: Complete an AI‑specific DPIA and publish a patient/learner‑facing notice describing tools, data flows, oversight, and rights; apply data minimization and PETs (NPC, 2024).

3) Configuration control & logging: Maintain a configuration register (features on/off, model version, faculty) and log AI–human disagreements/overrides; export logs monthly for QA (WHO, 2025; FUTURE‑AI, 2025). A minimal logging sketch is shown after this list.

4) Bias & performance monitoring: Run a mini local validation each term (image‑quality pass rate, failure modes, subgroup review) and document corrective actions (FUTURE‑AI, 2025).

5) Assessment integrity: For non‑AI OSCEs, lock diagnostic suggestions; for AI‑literacy OSCEs, evaluate safe‑use behaviors (recognizing drift, appropriate override, privacy compliance).
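To make the configuration‑control item concrete, the following Python sketch shows one way a teaching clinic might keep a configuration register and an override log that can be exported monthly for QA. The schema and field names are illustrative assumptions, not formats prescribed by NPC, FDA‑PH, or any vendor.

```python
"""Minimal sketch of a configuration register and override log.
Field names and file paths are hypothetical, for illustration only."""
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from pathlib import Path
import csv

@dataclass
class ConfigEntry:
    device_id: str           # e.g., fundus camera or OCT unit identifier
    model_version: str       # vendor-reported model/firmware version
    features_enabled: str    # semicolon-separated list of AI features turned on
    responsible_faculty: str # supervising faculty accountable for this configuration
    recorded_at: str

@dataclass
class OverrideEvent:
    encounter_id: str        # de-identified encounter reference
    ai_output: str           # e.g., "ungradable image" or "referable DR suggested"
    faculty_decision: str    # the decision actually communicated to the patient
    rationale: str           # structured reason for agreeing with or overriding the AI
    recorded_at: str

def log_rows(path: str, rows: list) -> None:
    """Append audit rows to a CSV file for the monthly QA export."""
    dicts = [asdict(r) for r in rows]
    write_header = not Path(path).exists()
    with open(path, "a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=list(dicts[0].keys()))
        if write_header:
            writer.writeheader()
        writer.writerows(dicts)

now = datetime.now(timezone.utc).isoformat()
log_rows("config_register.csv", [ConfigEntry(
    "OCT-01", "v2.3.1", "image-quality-flag;layer-segmentation", "Supervising Faculty A", now)])
log_rows("override_log.csv", [OverrideEvent(
    "ENC-0042", "referable DR suggested", "no referable DR; routine recall",
    "artifact mimicking hemorrhage", now)])
```

In practice the same two tables can live in the clinic's existing QA system; the point is that model version, enabled features, and every documented override are queryable at term's end.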

 

10.2. Procurement and Regulatory Readiness

 

1) Evidence dossier (bid requirement): intended use (education/assistive), regulatory roadmap for FDA‑PH MDSW, validation summaries with subgroup metrics, post‑market plan, security/privacy whitepaper, and change‑management policy.

2) Contract clauses: (a) Regulatory‑ready warranty—vendor to comply with FDA‑PH MDSW; (b) Model update control—advance notice, change log, deferred update and rollback, local re‑verification; (c) Post‑market—incident portal, 72‑hour safety notice, patch SLAs; (d) Data protection—PH residency where feasible, de‑identification for teaching, no secondary use without consent; (e) Audit & exit—export and verified deletion.

3) Acceptance testing: Sandbox dry‑run (lockouts, logging, audit trails) and pilot with a local sample before classroom scale‑up.

4) Security & privacy controls: SSO with RBAC, per‑user audit trails, encryption, anonymization pipeline, retention timer (NPC, 2024); a simple retention‑timer check is sketched after this list.

5) Costing & sustainability: include training, DPIA, validation, log storage, and post‑market support in total cost of ownership; negotiate education pricing and an exit ramp.
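As a companion to the security and privacy item, the sketch below illustrates a retention‑timer check that flags de‑identified teaching files older than the configured retention period. The retention period, folder name, and handling of flagged files are assumptions for illustration; actual values and deletion workflows must follow the institution's approved DPIA and NPC‑aligned SOPs.

```python
"""Illustrative retention-timer check for de-identified teaching images.
RETENTION_DAYS and the folder layout are assumptions, not prescribed values."""
from pathlib import Path
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 365                                  # set per the approved DPIA
TEACHING_IMAGE_DIR = Path("teaching_images_deidentified")

def files_past_retention(folder: Path, retention_days: int) -> list[Path]:
    """Return de-identified teaching files whose age exceeds the retention period."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=retention_days)
    expired = []
    for path in folder.glob("**/*"):
        if path.is_file():
            modified = datetime.fromtimestamp(path.stat().st_mtime, tz=timezone.utc)
            if modified < cutoff:
                expired.append(path)
    return expired

if __name__ == "__main__":
    for path in files_past_retention(TEACHING_IMAGE_DIR, RETENTION_DAYS):
        # Deletion itself should be logged and verified per the audit-and-exit clause.
        print(f"Past retention, schedule for verified deletion: {path}")
```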


10.3. Comparative Regulation Snapshot: Philippines vs. Regional/Global Anchors

 

Purpose: to justify procurement clauses, update/change-control requirements, and post‑market monitoring by benchmarking the Philippines against at least two mature jurisdictions. Where conflicts exist, institutional policy defaults to Philippine law/regulator requirements. Entries below reference primary, canonical publications (list provided after the table).

 

Table 2: Regulation Snapshot of Philippines vs. Singapore, Malaysia & EU
(Jurisdictions compared: Philippines, FDA‑PH / NPC, status: MDSW circular draft, NPC Advisory 2024‑04; Singapore, HSA; Malaysia, MDA; European Union, EU MDR / GDPR.)

Legal basis & scope. Philippines: Medical Device Act plus FDA‑PH draft circular for medical device software (MDSW/SaMD); privacy governed by the Data Privacy Act (DPA) and NPC advisories. Singapore: Health Sciences Authority regulates SaMD under medical device regulations; PDPA governs personal data; sectoral notices for health data. Malaysia: Medical Device Authority regulates SaMD under Malaysian medical device regulations; PDPA 2010 governs personal data; health data guidance via MOH/MDA circulars. EU: EU MDR 2017/745 classifies medical device software; GDPR governs personal data, including special‑category health data.

Software classification. Philippines: draft circular aligns with risk‑based classification; clinical purpose determines class; accessories and standalone software covered. Singapore: risk‑based classification aligned to IMDRF; standalone software covered; intended use drives class. Malaysia: risk‑based classification aligned to IMDRF; standalone software covered; intended use drives class. EU: MDR classification rules (esp. Rule 11) for software; many diagnostic/decision‑support apps fall into higher risk classes.

Pre‑market route. Philippines: conformity to essential principles; registration/notification route per risk class (details to be finalized in the final circular). Singapore: conformity assessment per risk class; documentation includes clinical/performance evidence and cybersecurity/usability files. Malaysia: conformity assessment per risk class; technical documentation and clinical/performance evidence required. EU: CE marking via conformity assessment with notified bodies for higher classes; clinical evaluation and post‑market plans required.

Post‑market surveillance (PMS) & vigilance. Philippines: PMS, incident reporting, and field safety corrective actions expected; specifics to be finalized; NPC requires breach notification under the DPA. Singapore: mandatory PMS and vigilance reporting; cybersecurity incident handling expected; PDPA data breach notification requirements apply. Malaysia: mandatory PMS and vigilance reporting; PDPA 2010 breach handling requirements; MOH guidance may specify timelines. EU: PMS and vigilance per MDR/IVDR; periodic safety update reports (PSUR) for certain classes; GDPR breach notification timelines apply.

Change control & model updates (AI/ML). Philippines: draft circular anticipates change‑management obligations; institutions should require vendor change logs, versioning, and re‑validation; DPIA updates per NPC 2024‑04. Singapore: HSA recognizes algorithm change control consistent with IMDRF; significant changes may require prior assessment; institutions should maintain update/rollback plans. Malaysia: MDA follows IMDRF principles; significant software changes may trigger re‑assessment; institutional acceptance testing recommended. EU: EU MDR plus MDCG guidance; significant software changes can alter conformity; PCCP‑like approaches emerging; re‑assessment and documentation required; DPIA per GDPR for high‑risk processing.

Real‑world performance / drift monitoring. Philippines: termly (or defined‑interval) validation recommended; incident and override logs; data minimization and role‑based access per DPA/NPC. Singapore: post‑market performance follow‑up recommended; capture quality metrics; maintain audit trails. Malaysia: post‑market performance follow‑up recommended; maintain audit trails and incident logs. EU: post‑market clinical follow‑up (as applicable); PSUR; field performance metrics; logging and auditability emphasized.

Data protection & cross‑border transfer. Philippines: DPA lawful basis plus DPIA for high‑risk processing; cross‑border transfer subject to adequate safeguards and contracts; student data treated as sensitive. Singapore: PDPA lawful purpose/consent exceptions; cross‑border transfer allowed with comparable protection measures/contractual clauses. Malaysia: PDPA 2010 governs processing; cross‑border transfer principles apply; contractual safeguards required. EU: GDPR legal bases; special‑category data rules; cross‑border transfers require adequacy/appropriate safeguards (SCCs, etc.).

Education/teaching‑clinic use. Philippines: explicitly align deployments with DPA/NPC; designate faculty‑in‑control; restrict automated decisions; privacy notices to students/patients. Singapore: institutional governance expected; align with PDPA and HSA guidance; document educational use and supervision. Malaysia: institutional governance expected; align with PDPA 2010 and MDA guidance; document supervision and scope. EU: institutional governance expected; GDPR transparency; ensure MDR compliance for clinical use even in training settings.

* Notes: The Philippine MDSW circular is currently a DRAFT; final text will supersede placeholders here. Singapore HSA and Malaysia MDA align closely with IMDRF SaMD principles. EU MDR Rule 11 commonly elevates the class of diagnostic/decision‑support software. Institutions should default to the most stringent applicable requirement when procuring multi‑site or cross‑border systems.


10.4. Evaluation Framework: KPIs, Thresholds, and OSCE Rubrics

 

This framework specifies institution‑level key performance indicators (KPIs) with explicit thresholds, monitoring cadence, and ownership; and a competency‑based OSCE rubric to evaluate AI‑literacy behaviors in teaching clinics. KPIs align with the Methodology’s operational mapping (validation, update control, post‑market surveillance, privacy compliance, equity).

 

 

Table 3: KPI Catalog (institutional monitoring)

Image‑quality pass rate (Safety & quality): % of encounters passing automated/standard QC on first attempt. Target: ≥ 90% pass; alert if < 85% for 2 consecutive weeks. Frequency: weekly dashboard; termly review. Owner: Clinic Lead; Imaging Supervisor. Data source: device logs; QC exports; random audit of 5% of cases. Trigger & corrective action: targeted re‑training for operators; adjust capture protocols; vendor ticket if systemic.

Override rate (Safety & quality): % of AI outputs overruled by a clinician with documented rationale. Target: 2–10% expected; alert if < 1% (over‑reliance) or > 15% (poor model fit). Frequency: weekly; termly trend. Owner: Service Head; QA Committee. Data source: EHR decision log; AI middleware audit logs. Trigger & corrective action: case review; threshold tuning; local re‑validation.

Incident rate (Safety & quality): AI‑related near misses/adverse events per 1,000 encounters. Target: < 1 / 1,000; zero high‑severity events without immediate containment. Frequency: immediate notification; monthly roll‑up. Owner: Risk Manager; DPO (for privacy incidents). Data source: incident system; root‑cause analysis forms. Trigger & corrective action: CAPA within 14 days; report to regulator if required.

Local AUC/Sn/Sp (or MAE for biometry) vs. baseline (Performance validation). Target: within 2 pp (AUC/Sn/Sp) of baseline; MAE ≤ baseline + 0.05 D. Frequency: termly (or post‑update). Owner: Model Steward; Faculty‑in‑control. Data source: validation set; stratified by device/vendor/site. Trigger & corrective action: if breached, freeze updates; roll back; re‑tune or collect local data.

Subgroup gap, max Δ vs. overall (Equity & generalizability). Target: gap < 10 pp (Sn/Sp/AUC) or < 0.10 D (MAE). Frequency: termly; post‑update. Owner: Equity Lead; QA Committee. Data source: subgroup table; confidence intervals. Trigger & corrective action: mitigate via data enrichment; per‑subgroup thresholds; vendor escalation.

DPIA currency and control execution, % (Privacy & compliance). Target: 100% of required controls executed; DPIA updated per major change. Frequency: quarterly; on change. Owner: Data Protection Officer (DPO). Data source: DPIA register; change‑control log. Trigger & corrective action: block deployment until DPIA updated; retrain staff.

Update acknowledgement latency (Change control): days from vendor release to institution sign‑off or deferral. Target: ≤ 7 days for security patches; ≤ 30 days for functional updates. Frequency: per release. Owner: IT/Clinical Engineering; Model Steward. Data source: vendor change log; ticketing system. Trigger & corrective action: escalate to Steering Committee; risk acceptance record.

OSCE pass rate on AI‑literacy stations, % (Operations & training). Target: ≥ 85% overall; no critical fail on any station. Frequency: per OSCE cycle. Owner: Course Director; Clinical Preceptors. Data source: OSCE sheets; inter‑rater reliability (κ). Trigger & corrective action: remediation plan for candidates; calibrate raters if κ < 0.7.
Threshold logic: KPI breaches trigger documented corrective actions (CAPA). Equity and performance breaches require immediate local re‑validation; privacy/compliance breaches halt deployment until resolved. All termly validations are archived with version hashes of models/configs.
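The threshold logic for the performance‑validation and equity KPIs can be automated at each termly validation. The sketch below applies the Table 3 thresholds (2 pp performance tolerance; 10 pp subgroup gap) to illustrative local metrics; the baseline values, local results, and subgroup labels are invented for demonstration and are not measurements from any deployed device.

```python
"""Sketch of a termly KPI threshold check (thresholds from Table 3;
metric values and subgroup labels are illustrative assumptions)."""

BASELINE = {"auc": 0.92, "sensitivity": 0.90, "specificity": 0.88}  # example baseline

def performance_breached(local: dict, baseline: dict, tolerance_pp: float = 2.0) -> bool:
    """True if any local metric falls more than tolerance_pp points below baseline."""
    return any(
        (baseline[m] - local[m]) * 100 > tolerance_pp
        for m in ("auc", "sensitivity", "specificity")
    )

def subgroup_gap_breached(overall: float, subgroups: dict, max_gap_pp: float = 10.0) -> bool:
    """True if the worst subgroup deviates from the overall metric by >= max_gap_pp points."""
    worst_gap = max(abs(overall - value) for value in subgroups.values())
    return worst_gap * 100 >= max_gap_pp

# Illustrative termly results.
local = {"auc": 0.91, "sensitivity": 0.87, "specificity": 0.89}
subgroup_sensitivity = {"age<40": 0.88, "age>=60": 0.79, "media opacity": 0.74}

if performance_breached(local, BASELINE):
    print("Performance KPI breached: freeze updates, roll back, re-validate locally.")
if subgroup_gap_breached(local["sensitivity"], subgroup_sensitivity):
    print("Equity KPI breached: document CAPA; consider data enrichment or subgroup thresholds.")
```

A check like this can run against the archived validation set and write its verdict, plus the model/config version hash, into the same termly record described above.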

 

Table 4: OSCE Rubrics for AI‑literacy Behaviors

Scoring scale: 1–5 (1 = Unsafe/Absent, 3 = Competent, 5 = Exemplary). Candidates must score ≥3 on all critical items (★) and ≥85% aggregate. Stations simulate real clinic workflows with AI‑assisted instruments. Inter‑rater reliability target κ ≥ 0.7.

Station 1 — Image Quality Triage & Capture (critical: QC; privacy)

Applies QC protocol ★. 1–2 (Below safe): skips QC; proceeds with poor signal/noise. 3–4 (Competent): runs QC; repeats capture until pass; documents failures. 5 (Exemplary): anticipates artifacts; coaches patient/operator to optimize first‑pass success.

Handles privacy/consent ★. 1–2: no consent or generic statements. 3–4: explains AI‑assist; obtains consent/assent; anonymizes per SOP. 5: tailors consent to the scenario; verifies minimal data capture; logs any deviations.

Logs capture context. 1–2: no logs. 3–4: enters device, camera type, field protocol. 5: adds vendor/firmware; flags atypical conditions for validation.

Station 2 — AI Output Appraisal & Override (critical: clinical reasoning; override rationale)

Interprets AI output ★. 1–2: accepts output at face value. 3–4: cross‑checks with clinical signs; considers pretest probability. 5: explains limitations/calibration; integrates uncertainty and context.

Override decision ★. 1–2: overrides without rationale or never overrides. 3–4: overrides when discordant; documents structured rationale. 5: anticipates failure modes; proposes follow‑up testing.

Communicates to patient. 1–2: jargon; no shared decision. 3–4: plain‑language explanation; shares next steps. 5: uses teach‑back; provides written after‑care notes.

Station 3 — Change Control & Validation Review (critical: update risk; documentation)

Reads vendor change log ★. 1–2: ignores or skim‑reads; misses a significant change. 3–4: identifies change scope; checks for required re‑validation. 5: maps change to local risk profile; proposes rollback plan.

Validates post‑update ★. 1–2: uses old validation; no stratification. 3–4: runs termly/local validation; reviews subgroup table. 5: expands validation to new edge cases; coordinates cross‑site comparison.

Records decisions. 1–2: no record. 3–4: signs off or defers with justification. 5: links decision to KPI dashboard; schedules follow‑up audit.

Station 4 — Incident Reporting & CAPA (critical: safety; timeliness)

Identifies incident severity ★. 1–2: misclassifies; delays containment. 3–4: classifies severity; contains; informs lead. 5: preempts escalation; initiates interim safeguards.

Completes report ★. 1–2: incomplete/inaccurate. 3–4: complete with timestamps and context. 5: includes preliminary root cause; proposes CAPA.

Implements CAPA. 1–2: no follow‑through. 3–4: executes assigned CAPA within SLA. 5: verifies effectiveness; updates SOPs/training.

Station 5 — Privacy, DPIA & Data Governance (critical: DPA/NPC compliance)

Identifies lawful basis ★. 1–2: incorrect or none. 3–4: correctly identifies basis; links to notice/consent. 5: addresses special cases (minors/teaching); ensures minimal data.

Executes DPIA controls ★. 1–2: misses required controls. 3–4: checks controls executed; logs residual risk. 5: proposes control enhancements; aligns with updates/changes.

Manages cross‑border transfer. 1–2: unprotected transfer. 3–4: uses approved clauses; documents purpose. 5: adds encryption at rest/in transit; verifies vendor adequacy.

Passing criteria: aggregate ≥ 85% AND no critical (★) item below 3 on any station. Rater calibration: conduct a 10‑case calibration; compute κ; if κ < 0.7, retrain and re‑assess before summative OSCE. Archive OSCE sheets and link to the KPI dashboard.
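Rater calibration can be checked with a short script rather than by hand. The sketch below computes unweighted Cohen's κ for two raters over a 10‑case calibration set; the scores are illustrative, and any standard statistics package's κ implementation may be used instead.

```python
"""Minimal Cohen's kappa computation for the 10-case rater calibration
(scores are illustrative; use a validated statistics package in practice)."""
from collections import Counter

def cohens_kappa(rater_a: list, rater_b: list) -> float:
    """Unweighted Cohen's kappa for two raters scoring the same cases."""
    assert len(rater_a) == len(rater_b) and rater_a, "need paired, non-empty score lists"
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    expected = sum(count_a[c] * count_b[c] for c in categories) / (n * n)
    # Guard against the degenerate case where both raters use a single category.
    return 1.0 if expected == 1 else (observed - expected) / (1 - expected)

# Example: two raters scoring the same 10 calibration cases on the 1-5 scale.
rater_1 = [3, 4, 5, 3, 2, 4, 4, 3, 5, 3]
rater_2 = [3, 4, 4, 3, 2, 4, 5, 3, 5, 3]
kappa = cohens_kappa(rater_1, rater_2)
if kappa < 0.7:
    print(f"kappa = {kappa:.2f}; retrain and re-assess raters before the summative OSCE")
else:
    print(f"kappa = {kappa:.2f}; calibration acceptable")
```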

 

Abbreviations

1.     AI – Artificial Intelligence

2.     DPIA – (Data) Privacy Impact Assessment

3.     DTI – Department of Trade and Industry

4.     FDA‑PH – Food and Drug Administration Philippines

5.     FUTURE‑AI – International consensus guideline for trustworthy medical AI

6.     LMM – Large Multimodal Model

7.     MDSW – Medical Device Software

8.     NEDA – National Economic and Development Authority

9.     NPC – National Privacy Commission

10.   OSCE – Objective Structured Clinical Examination

11.   PETs – Privacy‑Enhancing Technologies

12.   REACH‑DR – Remote Retinal Evaluation Collaboration in Health – Diabetic Retinopathy

13.   WHO – World Health Organization

 

 

Funding: None declared.

 

Conflicts of Interest: The author declares no competing interests.

 

Ethics: This synthesis used publicly available documents and published studies and did not involve individual-level data collection. Institutional research ethics approval was therefore not required. For the worked change-control example, all operational details were de-identified and presented as an illustrative case.

 

Declaration of Generative AI and AI-assisted Technologies: This study has not used any generative AI tools or technologies in the preparation of this manuscript.

 

 

 

References

  1. National Privacy Commission. (2024, December 19). Advisory No. 2024‑04: Guidelines on the application of the Data Privacy Act of 2012 to artificial intelligence systems processing personal data. https://privacy.gov.ph/wp-content/uploads/2025/02/Advisory-2024.12.19-Guidelines-on-Artificial-Intelligence-w-SGD.pdf

  2. Food and Drug Administration Philippines. (2025, May). Draft circular: Guidelines on the regulation of medical device software (MDSW). https://www.fda.gov.ph/wp-content/uploads/2025/05/Draft-FDA-Circular-FDA-Medical-Device-Software.pdf

  3. Department of Trade and Industry. (2024, July 3). National AI Strategy Roadmap 2.0 (NAISR 2.0). https://digitalpolicyalert.org/event/23330-adopted-dti-national-ai-strategy-roadmap-20

  4. National Economic and Development Authority. (2025, February). Policy note on artificial intelligence. https://depdev.gov.ph/wp-content/uploads/2025/02/Policy-Note-on-Artificial-Intelligence.pdf

  5. World Health Organization. (2025, March 25). Ethics and governance of artificial intelligence for health: Large multi‑modal models—WHO guidance. https://www.who.int/publications/i/item/9789240084759

  6. Royal College of Ophthalmologists. (2024, May). Position statement: Artificial intelligence in ophthalmology. https://www.rcophth.ac.uk/wp-content/uploads/2024/05/240521-Position-statement-artificial-intelligence.pdf

  7. The College of Optometrists. (2025, June 9). Interim position on the use of artificial intelligence in eye care. https://www.college-optometrists.org/policy-and-influencing/position-statements/interim-position-on-the-use-of-ai

  8. Lekadir, K., et al. (2025). FUTURE‑AI: International consensus guideline for trustworthy AI in health care. BMJ, 388, e081554. https://www.bmj.com/content/388/bmj-2024-081554

  9. Salongcay, R. P., et al. (2024). Accuracy of integrated artificial intelligence grading using handheld retinal imaging in a community diabetic eye screening program. Ophthalmology Science, 4(3), 100341. https://www.ophthalmologyscience.org/article/S2666-9145(23)00189-6/fulltext

  10. Arcena, L. V. L., et al. (2024). Automated machine learning for referable diabetic retinopathy using handheld retinal images in a community‑based screening program. Philippine Journal of Ophthalmology, 49(2), 139–147. https://paojournal.com/index.php/pjo/article/view/523

  11. Azarcon, C. P., Ranche, F. K. T., & Santiago, D. E. (2021). Tele‑ophthalmology practices and attitudes in the Philippines in light of the COVID‑19 pandemic: A survey. Clinical Ophthalmology, 15, 1239–1247. https://www.dovepress.com/article/download/63316

  12. Vega, A. A. C., et al. (2021). Knowledge, attitudes, and practices of telemedicine in ophthalmology in a Philippine tertiary hospital. Philippine Journal of Ophthalmology, 46(1), 21–27. https://paojournal.com/index.php/pjo/article/view/70/59

  13. Daza, J., Sy, J., Rondaris, M. V., & Uy, J. P. (2022). Telemedicine screening of the prevalence of diabetic retinopathy among type 2 diabetic Filipinos in the community. Journal of Medicine, University of Santo Tomas, 6(2), 814–824. https://doi.org/10.35460/2546-1621.2022-0024
