Prologue
In 2025, the evolution and societal implementation of Generative AI (GenAI) are bringing new horizons to the utilization of information while demanding a different level of data security responsibility from companies. From the perspective of international certification holders such as CISSP, CCSP, and CISM in the AI utilization field, the thorough implementation of "transparency," "accountability," "ethics," and "compliance with regulations"—in other words, comprehensive security enhancement supporting a data-driven society—can be considered the core of true competitiveness.
From 2024 to 2025, the volume of data transmitted via AI applications worldwide has increased 30-fold year-on-year, complicating new risks such as confidential information leaks, shadow IT, AI model theft, and copyright and privacy violations. Additionally, the architecture of Generative AI is premised on a dual structure of "external API/cloud-based + learning-based large-scale models." It is practically difficult to fully grasp where and how user input business data, personal information, and intellectual property for AI services is stored, learned, and reused. Therefore, the risk of immediate leakage of user input data due to erroneous input is becoming prominent. When using external services, it is necessary to scrutinize in advance which stage of data may remain on the cloud or in API communications and whether they may be reused as learning data. If these cannot be controlled by contracts, it is necessary to reconsider usage or introduce guardrails.
This article first outlines the unique risks associated with the Generative AI era. It then provides a clear and structured explanation of the latest global regulations and governance frameworks, corresponding data security measures, utilization of advanced infrastructure, organizational governance, talent development and education, and future trends. Each chapter emphasizes how they are connected and can be practically implemented in the field, providing a comprehensive overview and effective approach to information security.
- Prologue
- 1. The Value and Fundamental Risks of Data in the Age of Generative AI
- 2. Legal Regulations, AI Governance, and Accountability
- 3. Comprehensive Framework for Data Security Measures
- 4. Utilization of the Latest Data Governance Platforms (e.g., Microsoft Purview)
- 5. Security Operations, Education, Organizational Governance, and Continuous Evolution
- 6. Future Trends and the Age of Offensive and Defensive AI—Challenges and Winning Strategies
- Epilogue
1. The Value and Fundamental Risks of Data in the Age of Generative AI
1-1. The Essential Value of Data and Generative AI
The core of Generative AI is the "collection, learning, generation, and reuse of diverse data." This not only improves automation and information processing capabilities but also accelerates business decision-making, enhances creative quality, and differentiates services. In fact, its use is advancing across a wide range of industries, including text, images, audio, business documents, customer data, and IoT data, rapidly raising the level of data governance and security as an information infrastructure.
1-2. Deep Risks and Incidents Unique to Generative AI
The spread of Generative AI has brought about qualitatively different risks compared to traditional IT. The main ones are as follows:
- Confidential information leaks (due to external AI tools, API, cloud misconfigurations, etc.)
- Prompt injection, Poisoned RAG (information theft and tampering due to malicious input)
- Expansion of governance-free zones due to shadow AI/unapproved AI use
- Insufficient access rights and classification in AI utilization sites, inappropriate data distribution management
- LLM vulnerabilities, backend API tampering, supply chain attacks
- AI account hijacking, information-spreading malware
- Deepfakes, automation of fake information distribution and malware generation
- Security deficiencies in information retrieval using RAG/vector DB
In 2023, there were numerous complex incidents, such as the large-scale personal information leak of ChatGPT and database/API misconfiguration incidents at domestic IT companies.
2. Legal Regulations, AI Governance, and Accountability
2-1. Progress of Global AI Regulations
New AI-specific rules are being rapidly implemented worldwide, such as the EU AI Act, US Presidential Executive Orders, and China's unique regulations. The main requirements are explainability, traceability (auditability), accountability for AI outputs, and governance systems for managing learning data.
In Canada and the EU, AI behavior audits and output traceability have been institutionalized, and failure to implement advanced logging and accountability can result in regulatory violations and loss of trust from business partners and consumers.
2-2. Japan's Copyright and Privacy Legislation
Under domestic personal information protection laws and GDPR (General Data Protection Regulation), anonymization, masking, compliance with privacy policies, and log audits before and after learning are mandatory. Generative AI easily mixes personal information in the learning, output, and generation processes, making it difficult to detect and prevent incidents.
Regarding copyright, although there is an exception for data use for AI learning purposes under Article 30-4 of the Copyright Act, there are many restrictions in practical use. Especially for " enjoyment-type AI services," multi-layered measures such as explicit permissions, contract management, internal AI guidelines, legal reviews, and human reviews are indispensable. In fact, there are many disputes over intellectual property, such as character copyright lawsuits in the US and China.
3. Comprehensive Framework for Data Security Measures
3-1. The Essence of Data Governance
To advance governance effectively, real-time visualization and automatic control of "which data is where, who is accessing/using it, when, and how" and standardization of countermeasure operations are essential. Especially in the era of cloud/on-premises/hybrid environments and the expansion of shadow IT, cross-functional integrated management such as data classification, access history monitoring, and automatic labeling is key to establishing trust.
3-2. Basic Principles and Specific Measures
3-2-1. General Data Security Measures (Defensive Layer)
- Classification and labeling according to data confidentiality (active use of AI/Microsoft Purview for automatic determination)
- Thorough access control and the principle of least privilege (role-based/privilege management/separation of administrator privileges)
- Protection of stored and transmitted data through strong encryption (including end-to-end principles)
- Multi-layered defense with DLP (Data Loss Prevention) and audit logs
- Prevention of misdelivery, automation in eDiscovery linkage with audit trails
- Regular data inventory and asset visualization
- Thorough implementation of zero-trust networks (defense model that does not trust any system and constantly verifies)
- Introduction of multi-factor authentication, EDR, and MDR, and backup systems for the entire system
- Development of internal guidelines and introduction of employee education programs
3-2-2. Unique Security Measures for Generative AI
- Audit of input prompts, including detection and response to prompt injection
- Strict access control and anonymization measures for vector databases
- Guardrail design and data flow governance when referencing external data in scenarios such as Retrieval-Augmented Generation (RAG)
- Continuous visibility and governance of third-party AI usage, along with security due diligence during contract processes
- Monitoring and auditing of AI-generated content and its secondary use
- Active utilization of “excluded from training” settings via APIs and services like Azure OpenAI
- Implementation of history minimization and automatic deletion
- Audit logging and periodic re-editing of AI-generated outputs
To address these AI-specific risks, consistent governance is required across the entire lifecycle—from data input, through AI processing, to output and distribution.
4. Utilization of the Latest Data Governance Platforms (e.g., Microsoft Purview)
Microsoft Purview is an integrated platform that centrally manages essential components for AI utilization, including data classification, visualization, DLP (Data Loss Prevention), encryption, and access control design. Its key features include:
- Prompt log visibility and auditing for AI usage, such as with Microsoft 365 Copilot→ Supports detection of prompt injection and inappropriate inputs
- Access control and data masking for vector databases through integration with Microsoft Fabric and Azure→ Enables anonymization and access management of sensitive data in AI platforms
- Automatic application of DLP, labeling, and conditional access when integrating external data→ Establishes guardrails for data flow in scenarios like Retrieval-Augmented Generation (RAG)
- Visualization of AI-related service usage and risk assessment via Purview Data Map→ Supports governance of third-party AI and security due diligence during contract evaluation
- Enhanced auditing of AI-generated outputs through automatic labeling, DLP, and eDiscovery integration→ Enables visibility and control over secondary use and leakage risks of AI outputs
- Policy management for “excluded from training” settings and automatic enforcement via API integration→ Ensures exclusion from training when integrated with services like Azure OpenAI
- Automatic deletion and minimization of history based on data retention policies→ Prevents unnecessary storage of AI usage history
- Output change history and re-edit logs through integration with Purview Audit and Microsoft 365→ Supports traceability and revalidation of AI-generated content
These capabilities enable efficient and scalable governance and insider threat detection, even in large-scale global environments and multi-cloud architectures. Deep integration with Microsoft 365, OneDrive, SharePoint, and other services facilitates smooth transition from PoC to full-scale production deployment.
Leading use cases include:
Personal DLP in retail and distribution
Strict governance and auditing in the financial sector
Automatic labeling and misdelivery prevention in education
Centralized management of IoT data in manufacturing
5. Security Operations, Education, Organizational Governance, and Continuous Evolution
In the AI era, in addition to tool introduction and technical measures, it is essential to establish a "mechanism" that integrates education, awareness-raising, and governance establishment (visualization, automation, education, governance "quartet") from management to field practitioners.
The principles of Responsible AI—"transparency," "privacy protection," "fairness," and "safety"—need to be connected to the visualization of data sources, AI activity history, risk, strict rule enforcement, and employee education. Differences in human literacy and governance maturity create significant differences in risk tolerance, and it is necessary to build a company-wide foundation with the "mechanism + education" dual approach, including the use of specialized partners.
6. Future Trends and the Age of Offensive and Defensive AI—Challenges and Winning Strategies
In 2024, AI/ML transactions increased 36-fold year-on-year, and 86% of all organizations experienced AI incidents, with the number increasing by 56% year-on-year. The attackers are also rapidly increasing distributed attacks using AI (autonomous agent collaboration, nowhere ransomware, phishing auto-generation, etc.), and the defenders need to establish automatic monitoring with Generative AI, zero-trust networks, end-to-end encryption, and real-time audit systems.
Epilogue
To make the evolution of Generative AI a true business competitiveness, it is essential to have flexible responsiveness to constant risk and regulatory changes, company-wide and multifunctional data governance, and both tool operation and education and human resource development. Evidence-based technology and operational know-how and continuous governance enhancement, and multi-layered security systems will continue to support a "resilient organization" and "trusted data strategy" suitable for the AI era.
JTP has been offering a comprehensive support service for Microsoft Purview, assisting clients throughout the entire process—from design and implementation to adoption and compliance with legal and regulatory requirements.
▼ support service for Microsoft Purview https://www.jtp.co.jp/services/security/microsoft-purview-support/