Artificial Intelligence · 16 April 2026

Your Team Is Already Using AI. The Question Is Whether You're Managing It.

There's a conversation happening in every creative business right now, and it's not happening in leadership meetings. It's happening at desks, on laptops, in browser tabs your team hasn't told you about. A junior designer is using ChatGPT to write alt text and social copy. A motion artist is generating storyboard frames in Midjourney before a pitch. An architect is feeding early massing models into Veras to produce concept renders in minutes instead of days. A project manager is running meeting notes through Claude to pull out action items.

None of this is unusual. Nearly half of architecture professionals are already using AI tools in their workflows, and adoption across creative agencies is accelerating just as fast. The tools are genuinely useful, and the people using them are doing so because they make the work better and faster.

The problem isn't AI. The problem is that most creative businesses have no visibility into which tools their team is using, no policy governing how they're used, and no idea whether client work is being fed into platforms that train on the data they receive.

What Creatives and Architects Are Actually Doing with AI

This isn't theoretical. These are the tools and workflows that are already part of day-to-day life in the creative businesses we support.

Design & Branding

Adobe Firefly for generative fill, background expansion, and texture generation inside Photoshop. Midjourney and DALL-E for mood boards, concept imagery, and pitch visuals. ChatGPT and Claude for first-draft copy, taglines, and content strategy.

Architecture

Veras for AI-rendered concept visuals directly from Revit, SketchUp, and Rhino models. Autodesk Forma for early-stage site analysis and massing studies. Midjourney for client-facing concept imagery. LLMs for planning reports, specification drafting, and code compliance summaries.

Motion & Production

Runway for video generation and rotoscoping. AI-assisted editing in DaVinci Resolve and Premiere Pro. AI transcription and subtitle tools for delivery and accessibility. Storyboard generation from text prompts for pre-production.

Studio Operations

Claude and ChatGPT for proposal writing, tender responses, and client communications. AI summarisation of meeting transcripts. Automated scheduling and resource planning. Code generation for internal tools and scripts.

This is real work being done by real people in studios and practices across London and the South East. It's making teams faster, more productive, and more competitive. The issue isn't the work. It's the gap between what's happening and what leadership knows about.

The Risks Nobody's Talking About

When a designer uploads client campaign artwork to a free-tier image generator, that file may be used to train the model. When an architect pastes a planning brief into ChatGPT on a personal account, that text sits on OpenAI's servers with no enterprise data agreement in place. When a project manager feeds a client contract into an AI summarisation tool, the contents may be processed by a third party with no obligation to protect them.

This isn't scaremongering. It's what happens when people adopt tools faster than policies can keep up, and in creative businesses, where the work itself is the product, the stakes are higher than most.

Client IP Exposure: Unreleased campaign assets, brand strategies, and confidential briefs being uploaded to consumer-grade AI tools with broad licensing terms. If a client finds out their work trained someone else's model, the relationship doesn't recover.

Shadow AI: Staff using personal accounts and free tools that sit entirely outside your security perimeter. No audit trail, no data processing agreement, no visibility. The same risk as shadow IT, but moving faster and touching more sensitive data.

No AI Policy: Most creative businesses have no documented position on AI use: which tools are approved, what data can and can't be shared, and where AI-generated content sits in client deliverables. When a client asks, you need a clear answer.

The question isn't whether your team is using AI. They are. The question is whether they're using it safely, with the right tools, and with your knowledge.

Getting ahead of this isn't about restriction. It's about giving people the right way to use AI so they don't have to improvise.

What "Being AI-Ready" Actually Looks Like

You don't need to become an AI company. You need a clear, practical position on how your business uses AI, and the infrastructure to back it up. For most creative businesses, that comes down to five things.

1. An AI Acceptable Use Policy. A clear, plain-English document that sets out which AI tools your team can use, what data can be shared with them, and where the boundaries are. Not a ban, a framework. Your people will respect a policy that helps them work smarter. They'll ignore one that just says no.

2. Approved tools with enterprise-grade data protection. There's a difference between a designer using Midjourney on a personal Discord and your team using an enterprise platform with data processing agreements, access controls, and audit trails. Give people better tools and they'll stop using the risky ones.

3. Visibility into what's being used. You can't manage what you can't see. SaaS discovery tools show you which AI applications are in use across your environment, whether they're sanctioned or not. This is shadow AI governance, and it matters.

4. A position on AI in client work. Can AI-generated content appear in deliverables? Does the client need to know? What's your stance on using AI for concepting versus final output? These are questions your clients will ask. Better to have the answer ready.

5. Security that covers the AI layer. Your endpoint protection, web filtering, and cloud security need to account for AI tools the same way they account for any other SaaS application. DNS filtering, data loss prevention, and access controls all play a role.
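To make point 3 above concrete, here's a minimal sketch of the idea behind SaaS discovery: scanning DNS query logs for traffic to known AI-tool domains. The domain list and log format here are illustrative assumptions, not any real product's configuration; commercial discovery tools use far richer signals than this.

```python
# Minimal sketch of shadow-AI discovery from DNS query logs.
# AI_DOMAINS and the log format are illustrative assumptions.
from collections import Counter

AI_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "claude.ai": "Claude",
    "midjourney.com": "Midjourney",
    "runwayml.com": "Runway",
}

def discover_ai_usage(dns_log_lines):
    """Count queries to known AI-tool domains, keyed by tool name."""
    usage = Counter()
    for line in dns_log_lines:
        parts = line.split()
        if not parts:
            continue
        # Assume each log line ends with the queried domain.
        domain = parts[-1].lower()
        for known, tool in AI_DOMAINS.items():
            if domain == known or domain.endswith("." + known):
                usage[tool] += 1
    return usage

log = [
    "2026-04-16T09:01 laptop-07 chat.openai.com",
    "2026-04-16T09:02 laptop-12 cdn.midjourney.com",
    "2026-04-16T09:03 laptop-07 chat.openai.com",
]
print(discover_ai_usage(log))  # Counter({'ChatGPT': 2, 'Midjourney': 1})
```

The point of the sketch is the principle, not the implementation: once you know which AI services your team's devices are actually talking to, you can decide what to sanction, what to replace with an enterprise-grade equivalent, and what to block.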

Egnyte: AI That Stays Inside Your Walls

One of the tools we've worked with for over seven years is Egnyte (which many of our clients pronounce "Ignite"). It's our primary cloud storage platform for creative businesses, and it now includes some of the most practical AI features available to teams of your size.

Egnyte lets you create AI-powered Knowledge Bases from your own files and folders. Your team can ask questions, surface insights, and summarise documents, all from your company data, governed by your existing permissions. The AI only sees what each user is authorised to access. Nothing leaves your environment.

You can choose which AI model powers the experience (Gemini, ChatGPT, or Claude), and the platform includes controls that let you allow AI to search the internet for context while preventing any company data from being uploaded or shared externally. It's the difference between giving your team a powerful AI assistant and giving them an uncontrolled risk.

Egnyte also detects when data is being shared with unapproved external AI tools, giving you visibility into shadow AI before it becomes a problem. For creative businesses handling sensitive client work, that combination of productivity and protection is hard to beat.

Where Rubicon Fits In

We're not an AI consultancy. We're the people who look after your technology so you can focus on the creative work. But AI is now part of your technology environment whether you planned for it or not, and that means it's part of what we help you manage.

We'll review what your team is using today, help you draft a practical AI acceptable use policy, and identify the gaps between where you are and where you should be. Plain English, no jargon, ready to share with your team and your clients.

For Business-PRO clients, we're already in a position to help. SaaS discovery, web filtering, cloud security monitoring, and Egnyte's AI features: the building blocks of AI governance are already part of your managed environment. We'll help you switch them on.

AI is making creative businesses faster, more capable, and more competitive. That's a good thing. But it needs to happen deliberately, with the right tools, the right policies, and the right support around it. The studios that get this right won't just protect themselves from risk. They'll be the ones their clients trust most.

Get Clarity on Your IT & Security

We'll review your current setup, identify risks and quick wins, and outline clear next steps.

Book a Call