Your employees are already using AI at work. The question is whether you know about it.
According to a Salesforce survey of 14,000 workers, more than half of employees using generative AI at work do so without formal employer approval. Gartner reported that over half of all enterprise generative AI usage qualifies as shadow AI — tools adopted outside of IT oversight, without security review, and without anyone tracking where company data ends up.
For businesses in Dallas-Fort Worth — one of the fastest-growing metros in the country and the fourth-largest metro economy in the US — this is not a hypothetical risk. It is happening right now, across every industry.
What Is Shadow AI?
Shadow AI is the use of AI tools by employees without the knowledge or approval of their IT team or leadership. Think of an engineer pasting proprietary code into ChatGPT to debug it. A paralegal feeding client case details into an AI tool to draft a summary. An HR coordinator running employee data through an AI platform to generate performance reviews.
None of these employees are acting maliciously. They are trying to be more productive. But every one of these actions sends company data to a third-party system with no security controls, no audit trail, and no guarantee about how that data will be stored or used.
The Varonis 2025 State of Data Security Report found that 98 percent of employees use unsanctioned applications, spanning both shadow AI and shadow IT. And the Auvik 2025 IT Trends Report revealed that 34 percent of IT professionals say their organization lacks a formal AI policy, meaning even the people responsible for technology governance often have no framework to manage this.
Why DFW Businesses Are Especially Exposed
Dallas-Fort Worth is home to 22 Fortune 500 companies and a GDP that would rank among the top 25 economies in the world if it were a country. The metro’s economy is driven by exactly the industries where shadow AI creates the most risk: healthcare, finance, energy, manufacturing, legal services, and professional services.
Several factors make DFW businesses particularly vulnerable:
Rapid growth means governance gaps. DFW is one of the fastest-growing metros in the US. Companies scaling quickly often prioritize speed over policy. New employees bring their own AI habits from previous employers, and there is rarely an onboarding process that addresses acceptable AI use.
A knowledge-worker economy. With average hourly earnings of $37.71 — above the national average — DFW’s workforce skews heavily toward knowledge workers. These are exactly the employees most likely to adopt AI tools independently.
Workforce churn. The Dallas Federal Reserve reported that professional and business services saw some of the largest job losses in late 2025, while DFW’s office vacancy rate hit 27.4 percent. Periods of organizational change and restructuring correlate with less policy enforcement and more employees looking for productivity edges.
Texas’s regulatory landscape is shifting. The Texas Data Privacy and Security Act (TDPSA) took effect on July 1, 2024, imposing requirements around personal data processing — including data protection assessments for activities that present heightened risk. When employees feed customer data into unauthorized AI tools, your company may be violating the TDPSA without even knowing it. Penalties run up to $7,500 per violation, enforced by the Texas Attorney General.
Industry-by-Industry: Where Shadow AI Hits Hardest in DFW
Healthcare
DFW is home to massive healthcare systems — Baylor Scott & White, UT Southwestern, Texas Health Resources, Parkland, and Children’s Health among them. The shadow AI risk here is severe.
The Salesforce survey found that 87 percent of healthcare workers say their company lacks clear AI policies. Meanwhile, clinical staff are using AI to summarize patient records, draft notes, and analyze lab results. Every instance where protected health information enters an unauthorized AI tool is a potential HIPAA violation — with penalties ranging from $100 to $50,000 per violation and up to $1.5 million annually per category.
Beyond compliance, there is a clinical safety concern. AI hallucinations — confident but fabricated outputs — in a medical context could contribute to misdiagnosis or treatment errors. Revenue cycle staff using AI to process insurance claims create additional exposure by routing patient financial data through unsecured channels.
Finance
Charles Schwab’s headquarters in Westlake, multiple private equity firms, Comerica, and significant regional banking operations make DFW a major financial hub. Shadow AI in finance creates overlapping regulatory exposure.
Employees using AI to generate financial analyses without compliance review risk SEC and FINRA violations. AI-assisted financial reporting outside of documented internal controls creates SOX compliance gaps — the exact kind of gap that auditors are trained to find. Customer financial data processed through unauthorized AI tools violates data handling requirements, and material non-public information pasted into AI tools creates insider trading risk through potential data leakage.
Anti-money laundering analysis performed through unauthorized AI lacks the required audit trails that regulators demand. The consequences are not just fines — they are enforcement actions that can shut down business lines.
Manufacturing
DFW’s manufacturing sector includes defense giants like Lockheed Martin’s F-35 operations in Fort Worth, Bell Textron’s helicopter manufacturing, and significant aerospace, electronics, and food manufacturing operations including Frito-Lay’s headquarters in Plano.
The Samsung incident is the cautionary tale here. Within 20 days of allowing ChatGPT access, Samsung semiconductor employees leaked proprietary source code in three separate incidents — including chip optimization code and a confidential meeting recording pasted in to generate minutes. Samsung subsequently banned all generative AI tools company-wide.
For DFW defense manufacturers, the stakes are even higher. Technical data controlled under the International Traffic in Arms Regulations (ITAR) that enters an AI tool hosted on foreign servers constitutes an unauthorized export. Criminal penalties reach up to $1 million per violation, up to 20 years’ imprisonment, or both. Engineers pasting proprietary designs, process specifications, or supply chain data into AI tools expose trade secrets and competitive intelligence that may lose legal protection under the Texas Uniform Trade Secrets Act if “reasonable measures” to maintain secrecy were not in place.
Legal
DFW is one of the largest legal markets in the US, with firms handling high-value matters across energy, healthcare, corporate law, and litigation. Shadow AI in legal practice creates uniquely dangerous risks.
The most visible example came in the Mata v. Avianca case, where New York attorneys were sanctioned and fined for submitting a brief containing six entirely fictitious cases generated by ChatGPT. Some Texas courts, including federal judges in the Northern District of Texas, have since adopted standing orders requiring attorneys to disclose or certify their use of generative AI in filings.
But the deeper risk is privilege waiver. Client communications pasted into AI tools can waive attorney-client privilege — a catastrophic outcome in active litigation. The Texas Disciplinary Rules of Professional Conduct require competent representation under Rule 1.01 and confidentiality under Rule 1.05, both of which are implicated when lawyers use unauthorized AI to process client data. Case details, settlement amounts, and litigation strategies fed into AI tools create exposure that no malpractice policy is designed to cover.
Oil and Gas
DFW serves as a corporate hub for Permian Basin operations. Irving-based Pioneer Natural Resources (acquired by ExxonMobil in 2024) built its business here, and Dallas-headquartered companies like Energy Transfer, Hunt Oil, and Matador Resources manage billions of dollars in exploration, production, and midstream operations from DFW offices.
Geologic surveys, seismic data, well logs, and drilling techniques represent some of the most valuable proprietary data in any industry. An employee pasting exploration data into an AI tool to run quick analysis is handing competitive intelligence worth potentially billions to a third-party system. Proprietary fracking formulations, refining processes, and pipeline operational data all carry similar risk.
Environmental compliance adds another layer. AI-generated environmental reports that bypass proper verification could lead to EPA or TCEQ violations. And operational data about pipeline locations and facility capacities shared with AI tools creates physical infrastructure security concerns.
Architecture and Engineering
DFW’s ongoing construction boom — reflected in 8.1 million square feet of industrial real estate net absorption in Q4 2025 alone — drives significant architecture and engineering activity. Major firms including Jacobs and AECOM are headquartered in Dallas.
Architects and engineers pasting project designs, structural calculations, or site plans into AI tools expose client intellectual property with no confidentiality protections. AI-generated calculations or design elements that bypass peer review create professional liability risk. Licensed professionals — those with PE or architect stamps — remain personally liable for AI-assisted work product regardless of how it was generated.
Bid data is another vulnerability. Cost estimates, project bids, and sourcing strategies pasted into AI tools could leak competitive pricing through training data. And AI may generate designs that fail to meet Texas-specific building codes, particularly wind resistance and storm protection requirements critical in the DFW area.
The Pattern Is Always the Same
Across every industry, the shadow AI pattern follows the same arc:
- An employee discovers an AI tool that makes their job easier
- They start using it — with good intentions — without telling IT
- They paste company data, client information, or proprietary content into the tool
- That data now lives on a third-party server with no security controls
- The company has no visibility into what was shared, no audit trail, and no way to get the data back
The Salesforce data confirms this: 64 percent of employees have passed off AI-generated work as their own, and nearly 7 in 10 have never received training on safe AI use. In low-enablement environments — companies without clear AI policies — 70 percent of employees use AI without their manager knowing.
What You Can Do About It
Banning AI is not the answer. Samsung tried that, and the reality is that employees will find workarounds. The businesses that will come out ahead are the ones that channel AI adoption rather than fight it.
Start with visibility. You cannot govern what you cannot see. Audit your network traffic and application usage to understand which AI tools employees are already using and what data they are sharing.
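As a rough sketch of what that first visibility pass can look like, the snippet below counts traffic to AI-tool domains in simplified log entries. The log format and the domain list are illustrative assumptions, not a standard — a real audit would export records from your web proxy, DNS resolver, or firewall.

```python
from collections import Counter

# Hypothetical list of AI-tool domains to flag; extend with your own.
AI_DOMAINS = {
    "chatgpt.com", "chat.openai.com", "claude.ai",
    "gemini.google.com", "perplexity.ai", "copilot.microsoft.com",
}

def flag_ai_traffic(log_rows):
    """Count requests per (user, host) to known AI-tool domains.

    Each row is a simplified stand-in for a proxy/DNS log entry,
    e.g. {"user": "alice", "host": "chatgpt.com"}.
    """
    hits = Counter()
    for row in log_rows:
        host = row["host"].lower().strip()
        # Match the domain itself or any subdomain of it.
        if host in AI_DOMAINS or any(host.endswith("." + d) for d in AI_DOMAINS):
            hits[(row["user"], host)] += 1
    return hits

# Example: three simplified log entries
sample = [
    {"user": "alice", "host": "chatgpt.com"},
    {"user": "alice", "host": "chatgpt.com"},
    {"user": "bob", "host": "intranet.example.com"},
]
print(flag_ai_traffic(sample))  # alice's AI traffic surfaces; bob's does not
```

Even a crude tally like this usually surprises leadership teams, and it gives the policy conversation a factual starting point.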
Create a clear AI acceptable use policy. Define which tools are approved, what data can and cannot be entered, and what the consequences are for violations. This does not need to be 50 pages — even a one-page policy dramatically reduces risk.
Build an approved tool list. Evaluate AI tools for security, compliance, and data handling before employees adopt them. Offer sanctioned alternatives so employees do not feel forced to go rogue.
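One lightweight way to make an approved list operational is to pair each sanctioned tool with the data classifications it is cleared to handle, so every request can be checked the same way. The registry and classification labels below are purely illustrative assumptions:

```python
# Hypothetical approved-tool registry: tool name -> data classifications
# the tool has been cleared to handle after security review.
APPROVED_TOOLS = {
    "enterprise-copilot": {"public", "internal"},
    "internal-llm": {"public", "internal", "confidential"},
}

def check_usage(tool, data_class):
    """Return (allowed, reason) for a tool / data-classification pair."""
    cleared = APPROVED_TOOLS.get(tool)
    if cleared is None:
        return False, f"{tool} is not on the approved list"
    if data_class not in cleared:
        return False, f"{tool} is not cleared for {data_class} data"
    return True, "allowed"

# An unapproved pairing is rejected with a specific reason
print(check_usage("enterprise-copilot", "confidential"))
```

The point is less the code than the discipline: every tool gets reviewed once, and every use case gets a clear yes or no instead of an ad hoc judgment call.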
Train your team. The data is clear — nearly 70 percent of workers have never received AI safety training. A single training session covering what not to paste into AI tools can prevent your next data breach.
Implement technical controls. Data loss prevention tools, endpoint monitoring, and network-level controls can detect and block unauthorized AI usage before sensitive data leaves your environment.
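Dedicated DLP products do this with far more sophistication, but a minimal illustration of the core idea — pattern-matching outbound text for sensitive identifiers before it leaves your environment — might look like this. The patterns are simplified examples, not production-grade detection:

```python
import re

# Illustrative patterns only; real DLP engines use far richer detection,
# validation (e.g. checksum tests), and contextual scoring.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}

def scan_outbound(text):
    """Return the names of sensitive patterns found in outbound text."""
    return sorted(name for name, pat in PATTERNS.items() if pat.search(text))

print(scan_outbound("Summarize this: SSN 123-45-6789, account notes..."))
# → ['ssn']
```

A check like this, wired into a proxy or browser extension, can block or log a paste before the data reaches a third-party model rather than after.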
Establish ongoing governance. AI is evolving fast. Quarterly reviews of your AI policy, approved tool list, and usage patterns keep your governance framework current as new tools and risks emerge.
Parlay Technology Can Help
Shadow AI is not a problem you solve once — it is an ongoing governance challenge that requires the right strategy, policies, and technical controls. At Parlay Technology, we help Dallas-Fort Worth businesses build AI governance frameworks that protect their data without killing productivity.
From AI readiness assessments and acceptable use policies to secure tool integration and employee training, we work with businesses across healthcare, finance, manufacturing, legal, oil and gas, and architecture to bring AI adoption under control. We understand the regulatory landscape — TDPSA, HIPAA, SOX, ITAR — and we build governance programs that satisfy compliance requirements while letting your team use AI effectively.
Your employees are already using AI. The only question is whether your business is ready.
Ready to get ahead of shadow AI? Contact Parlay Technology for a free consultation and find out where your business stands.