What does the future of care look like?
Will it soon be possible to outsource our caring responsibilities for ourselves, our children and our parents to robots? Catherine Smith and Helen Dickinson ask what the human rights, privacy, equity and practical implications for care would be in a tech-dominated future.
This is the era of the so-called ‘sandwich generation’, with busy professionals caring for both children and ageing parents. Imagine being able to manage both sets of care relationships more effectively via a series of new technologies - and to better look after yourself in the process. That’s the future being promoted by a number of startup tech firms at a recent showcase.
Here we saw tech that allows you to monitor your children via smart devices. Through this you can check where they are, how they are performing in school, and how much screen time they are consuming (and remotely cut it off if you think it is too much). The next big consumer boom in the med tech space is predicted to be genomic testing, so you will know exactly what to feed your children given their predispositions to certain conditions and intolerances. Your smart kitchen ensures that you are always fully stocked on necessities by automatically ordering products as you run out of them.
When you have a few minutes in your day, you check in with your robot life coach to view your vitals and see how you are tracking against a number of your life goals. Maybe you even do this while moving around in your autonomous vehicle, which is safer than driving yourself and frees you up to work on the move. Your home personal assistants even monitor your speech patterns to check for symptoms of depression or Parkinson’s.
All of this you can do safe in the knowledge that your parents are well and being constantly monitored via wearables or in-home robots. These will tell you if they suffer a fall, or if pulse, blood oxygen or other readings indicate something of concern. If anything causes worry, you can be immediately connected to a healthcare professional who can also access your parents’ personal data and advise on courses of action - all supported by artificial intelligence.
Sounds pretty cool, right? There is a huge number of companies emerging that are keen to help you “manage” your personal and collective caring responsibilities more effectively. But at what cost does this come, and are there aspects of it we should be concerned about?
These potential applications raise a number of important questions, many of which involve ethical and moral dilemmas. How safe is the data being shared, and who owns it? Blockchain is widely promoted as a way of ensuring that data is stored and transmitted safely, but is it infallible? If your DNA is being profiled, who would you be happy to have access to it? Perhaps your GP - but what about your insurance company? What about researchers? With a large enough dataset we might be able to make new breakthroughs in the health arena. So should we all consent to sharing our anonymised and aggregated data? The recent response to the My Health Record scheme suggests that many of us are wary of this.
Would knowledge of genomic predispositions make us behave more “rationally”? If you knew you were more likely to develop heart disease, would you eat more healthily? Conversely, if this wasn’t a worry for you, would you then engage in more “risky” behaviours? If we know one thing for sure, it is that people don’t always behave in ways that are predictable or considered the most ‘rational’ option.
Some of the companies we spoke with talked about offering incentives to individuals to share different aspects of their data in return for vouchers or discounts on other products and services. While this avoids a situation where individuals go unrewarded for the use of their data, it raises potential equity issues: those most likely to respond to such incentives tend to be from lower socio-economic groups. Indeed, the Australian Human Rights Commission has recently raised a number of concerns about the potential of these technologies to exacerbate inequities.
Although the discourse around many of these technological developments is that they should make us “safer”, there is already evidence that gives reason for concern. Addiction to technology is a real worry for many, particularly in relation to younger children. China has gone as far as legislating on this issue after concerns about wellbeing and intense play of the online game Honour of Kings. The developer, Tencent, responded by limiting the amount of time users can spend on the game and the times at which they can play. Although welcomed by some, this is a blunt approach and does little to address our obsession with electronic devices more broadly.
A feeling of safety in many of the examples we came across typically also involved significant surveillance, whether via cameras, data collection or a combination of the two. Although this might leave some of us feeling more secure, for others it raises human rights concerns. A recent story in the New York Times highlights the issues associated with the expansion of facial recognition software and surveillance in China, and some significant concerns about this within the context of an authoritarian regime.
It is clear that there are some exciting developments to come in the technology space that will have a profound impact on our everyday lives. But these developments also bring with them a series of potential negative impacts and associated ethical and moral concerns. Although some of these developments are still some way off, many already exist in our everyday lives. Yet one of the issues we are encountering in our research into the use of robots in care services is that governments are not yet having systematic, widespread conversations about what these technologies mean for the way we design and deliver public services, or about the challenges they raise. A failure to consider these issues now will likely mean that we only confront them when some sort of incident arises - by which time it will be too late.
While there is much to be excited about in the future of care services, some developments should give us pause for thought. It is a future we need to prepare for if we are to get the type of care services we want and need.
This post contributed by Catherine Smith, Melbourne Graduate School of Education, University of Melbourne and Helen Dickinson, Public Service Research Group, University of New South Wales, Canberra.