
Q1. Are employees already using the standard chat interfaces for ChatGPT, Claude, Copilot, or Gemini for work tasks?

Select one

There's no wrong answer here. Each starting point has a different set of risks and a different set of actions. The assessment is calibrated to give you the most relevant output for your situation.

Q2. What is your primary industry?

Select one

Choosing D or E doesn't mean your team can't use these tools. It means extra rules apply to what they type in. A healthcare worker pasting patient notes into Claude has transmitted protected health information to a third-party system.

Q3. Where does your business operate and serve customers?

Select all that apply

'We're based in the US' is not a legal defence when EU customer data is involved. GDPR applies to the data, not the postcode of the business. Even a handful of EU clients brings GDPR into scope.

Q4. How many employees would use — or are currently using — these chat interfaces for work?

Select one

With more than 10 employees on personal or free accounts, you have essentially no visibility into what data is being entered. You can't audit it, control it, or demonstrate you took reasonable steps if something goes wrong.

Q5. What tasks are these chat interfaces used for — or what are you planning to use them for?

Select all that apply

If you're not using these tools yet, select what you're planning to use them for. If use is informal or uncontrolled, select what you think is most likely happening. Options B, C, and D are where risk concentrates — each involves content leaving your systems.

Q6. What personal data does your business hold about customers, clients, or staff?

Select all that apply

The key question is: does your business hold information about identifiable people — your customers, clients, patients, candidates, or employees? If so, those people have legal rights over how their data is processed. Using an AI chat interface to process it is a data processing event that triggers obligations.

Q7. What confidential or legally protected business information does your company work with?

Select all that apply

This is about your business information, not your customers' data. A company holding no personal data at all can still have serious exposure here — if proprietary source code, a board presentation, or a client legal file gets pasted into a free ChatGPT account, the legal consequences have nothing to do with GDPR.

Q8. Has your business enabled, or does it plan to enable, any integrations connecting these chat tools to other systems?

Select all that apply

This applies to built-in integrations available to any user, not custom software development. Examples include: connecting ChatGPT to your Google or Microsoft calendar via a plugin, enabling Claude to read emails in Gmail or Outlook, or giving Gemini access to Drive files. If nobody has set these up, select A. If you are not sure, select F — unknown integrations carry the same risk as known ones.

Q9. Which account tier is being used — or which are you planning to use?

Select one

ChatGPT's free and Plus plans use conversation data to train OpenAI's models by default. Anything typed — customer details, financial figures, internal strategy — may become training data. Enterprise plans exclude conversations from training, but you should verify the setting after setup rather than assume it. If you're not sure what accounts people are using, select E.

Q10. Does your business have an Acceptable Use Policy covering these tools?

Select one

The most important clause is a clear list of what cannot be typed in. Most employees don't think of pasting a customer email into Claude or Copilot Chat as a compliance event. One sentence prevents most real-world incidents. If you haven't deployed yet, drafting the policy first is the right order.

Q11. Have employees been trained on what not to type into these tools?

Select one

Minimum viable training: a 20-minute session covering what these tools are, what cannot be typed, and how to report a concern. Keep an attendance record — that record is your evidence of reasonable steps if a data incident occurs later. If you haven't deployed yet, run training before you grant access.