Addressing fears, risks and concerns
While AI can help, it also brings worries. People told us they feel unsure or anxious about how it works and what it might mean for them.
Here are some common concerns:
- Loss of human contact – People fear AI might replace real carers or reduce face-to-face support.
- Privacy – Worries about how data is collected, stored, and used.
- Bias and unfairness – AI systems can be biased or make mistakes, especially if they are trained on data that does not represent everyone fairly.
- Feeling overwhelmed – Some people find the tech complicated or hard to understand.
- No choice – People fear AI being used against them rather than with them.
These fears are real, and we believe they should be taken seriously. That’s why this playbook includes:
- Tips on how to stay in control
- Stories from people who’ve used AI in their own way
- Messages to designers and professionals about what matters to people
- A clear commitment that AI should be used with consent, compassion and collaboration – never as a shortcut to reduce human care.
Our principles for accessible, ethical and human-centred AI
We believe AI should be used in ways that are ethical, inclusive, and rooted in care.
Here are our 10 shared principles for how AI should work for people drawing on care and support:
1. People First – AI should support human connection, not replace it.
2. Co-designed and Co-produced – People with lived experience must shape AI tools and services.
3. Inclusive by Design – AI should work for everyone, including people with different needs and disabilities.
4. Understandable and Transparent – People should know how AI is being used and how it affects them.
5. Easy to Use – AI tools should be as simple and accessible as possible.
6. Support Independence – AI should help people live the life they choose.
7. Safe and Secure – People’s information must be protected.
8. Fair and Unbiased – AI must not discriminate or reinforce inequality.
9. Flexible and Personalised – People should be able to choose and adapt AI tools.
10. Open to Feedback and Improvement – AI should keep learning from real experiences.
These principles guide everything in this playbook. They are not just ideals – they’re a call to action.
Staying Safe and In Control
AI tools should work for you, not the other way around. Here are some tips for staying in control:
- Ask questions – if you’re not sure how a tool works, ask someone you trust or look it up online
- Try before you rely – test out new tools and see what works for you
- Don’t share too much – protect your personal information, especially with free apps or online tools
- Check what’s saved – apps and devices often store data, so explore the settings
- Set limits – just because you can use AI, doesn’t mean you have to
There’s no one-size-fits-all answer. You get to decide which tools help you – and which ones don’t.