The decision no one is making
In most family offices, nobody has decided how AI should be used. That silence is itself a decision, and it carries consequences.
There is no AI ban. There is also no AI policy. There is no stated position on which tools are appropriate, which data can be shared with external models, or how AI-generated outputs should be reviewed. There is simply silence.
Into that silence, individuals make their own decisions. An analyst uses a language model to summarise a capital call document. An assistant drafts correspondence through a generative AI tool. Someone on the investment team feeds a portfolio summary into an external model to pressure-test a thesis. None of this is sanctioned. None of it is prohibited. It happens because no one has said otherwise.
This pattern has a name in larger organisations: shadow AI. Cross-industry research suggests that over half of knowledge workers now use AI tools their employers have not provided or approved. A KPMG study of 48,000 professionals across 47 countries found that more than three-quarters use AI in a professional context, with over half doing so contrary to their company’s guidelines. These are capable people responding to workload pressure with the most effective tools available to them.
In family offices, the dynamic is even more pronounced. Many offices operate with five or fewer employees. The people handling sensitive data, including family structures, holdings, legal arrangements, and counterparty details, are often the same people drafting memos, preparing reports, and managing communications. PwC has described the current state of AI use in family offices as “citizen-led”: staff adopting readily available tools for immediate productivity, ahead of any formal organisational position. In plainer terms, that is adoption without governance, and it is a gap the consulting houses will be quick to fill.
What forms in the absence
The regulatory dimension is real and expanding. New legislation in the EU and several US states is beginning to assign legal responsibility to users of AI, not only developers. Family offices, which sit outside most formal supervisory regimes, may find themselves within scope of these provisions without realising it.
Yet compliance alone does not capture the deeper issue. The more immediate consequence of silence is organisational. When there is no stated position on AI, three things happen by default. Data exposure is determined by individual judgment rather than institutional policy. Skill development around AI becomes uneven and invisible. And the office’s culture around technology, what is encouraged, what is questioned, what is shared, forms without deliberate input from leadership.
In a five-person office, the principal, CEO or CIO often sets the conditions for how work gets done. They shape what is valued, what is scrutinised, and where the organisation invests its attention. When capable people begin solving problems with tools that leadership has neither endorsed nor examined, something important has taken place. The most consequential technology decision of the decade has been delegated to whoever moved first.
The absence of a position is itself a position. It transfers the decision to the people closest to the work, without giving them a framework for making it well. And in an environment built on trust, discretion, and institutional continuity, that transfer is a particular kind of vulnerability. It compounds unnoticed, because the people filling the gap are often the most competent members of the team, and the work they produce with these tools may be perfectly good. The risk is not poor output or workslop. The risk is that the organisation has no visibility into how work was produced, no record of what data was shared externally, no framework for evaluating that exposure, and no foundation for building on what has been learned.
A stated starting point
What is needed is a leadership posture on what the office encourages, what it permits, what it considers off-limits for now, and what it wants to learn over time. This might take the form of a short internal document. It might begin as a conversation at the next team meeting. It might be a single page of principles. However brief, it establishes a starting point.
The content matters less than the act of making it explicit. An earlier edition of this publication argued that AI should be onboarded with the same care we apply to people, and that context comes before capability. The same instinct applies here. You would not hire someone and leave them to determine the organisation’s culture on their own. You would not expect a new team member to figure out which data is sensitive, which relationships require discretion, or how decisions are documented. These things are communicated. Leaving them to chance introduces risk that a small team cannot easily absorb.
The offices navigating this moment well are the ones where someone decided to decide. The decision itself, which tools, which guardrails, which boundaries, is less important than the act of making it. Because once a position exists, even an imperfect one, it creates the conditions for learning. It gives people something to work within and push against. It makes skill development visible. It turns scattered individual experimentation into shared organisational knowledge.
Silence can feel like an abundance of caution. In practice, it is the opposite.