Cashless Debit Card extended despite limited evidence of effectiveness
Today’s post from Janet Hunt questions the extension of the Cashless Debit Card to more people receiving income support in Australia in light of its problematic evaluation. Associate Professor Hunt is a former president of ANTaR, a member of ANTaR ACT, and a researcher at the Centre for Aboriginal Economic Policy Research. Her research over many years has focussed on Indigenous governance, government engagement with Aboriginal and Torres Strait Islander communities, and community development. This post was originally published on the ANTaR website.
When the Government introduced legislation late in 2020 to extend the Cashless Debit Card (CDC) in all existing sites and to provide for the transition of income management participants across the Northern Territory and Cape York region to the CDC, it became known that an evaluation of the program had been completed but was not being made public.
Since earlier evaluations had been proudly trumpeted by Ministers who claimed the great success of the trials, it was strange indeed that this time, the evaluation was kept behind closed doors as decisions were being made about the future of the program.
The legislation nearly failed to pass, as Senator Patrick said he had not seen evidence that the card was effective, but a last-minute amendment extended the trials for another two years and allowed people on the Basics Card in the NT the choice to shift to the CDC.
On 17 February 2021 this evaluation was rather quietly released on the DSS website. Published in three parts, it is a massive tome, and its findings give a much more nuanced picture than the original ORIMA evaluation, which the Government has consistently trumpeted to justify continuation and extension of the program. This 2021 Evaluation Report came about following extensive criticism of the ORIMA evaluation by participants in the various Parliamentary Hearings that have characterised the process of extending the CDC program. The Government clearly hoped that this report would be definitive about whether the “trials” in the three original sites (Ceduna, East Kimberley and the Goldfields) are working. But it presents a very mixed picture indeed, as Henrique Gomes reported.
What emerges from this evaluation is a picture of a small number of participants who find the CDC useful and would like to remain on the card to help them manage their money (approximately 11-20%, depending on the site). But around half of these (7-10%, depending on the site) would like less of their payment on the card and more in cash. However, an overwhelming 70-76% want to get off the card. Participants were much more likely to think that the trial should end, while ‘stakeholders’ were more likely to want it to continue.
A large proportion of participants on the card do not drink alcohol, use drugs or gamble; they resent being on it and find it makes their lives worse, and it is difficult to find an argument for keeping them on it. Interestingly, since the card is at heart a money management tool, the researchers found that “those who need the card most and/or can handle its complexity best… report that it makes things easier for them financially. In contrast, those who need it least and/or can least handle its complexity report that it makes their financial situation harder for them.” (p95). This raises the question: why pursue it with those who need it least?
One of the most troubling unintended consequences of being on the CDC is the stigma, shame and embarrassment people feel. A large majority of participants at each site felt discriminated against (75%), felt embarrassed about being on the card (73%), or felt that being on the card was not fair (75%), and this was true for all demographic groups on the card (p156). If people do not need to be on the card for any reason, why are they being subjected to such shame and discrimination?
The evaluation relies almost entirely on the opinions of CDC ‘stakeholders’ and participants on the card (including some of their family members), gathered either through a survey or through interviews. While people’s opinions certainly count for something, they are not triangulated with any other sources of evidence except in one or two cases. As well as the CDC participants who were surveyed and interviewed, 178 stakeholders were interviewed from various organisations with an involvement or interest in the program. Of these, 21 were from government organisations (all levels of government), 16 from local CDC partners or shopfronts, 31 from welfare and advocacy organisations, 19 from Indigenous-run organisations, 13 from employment services, 14 from merchants and 5 from police.
The quantitative report found that little administrative data is suitable for the purposes of the evaluation, so there is little of this type of evidence to check against people’s opinions. Interestingly, the domestic violence data presents the strongest findings: it shows that domestic violence reports increased in the East Kimberley after the CDC roll-out, compared to a control group of suburb locations, and there is also (weaker) evidence of increases in property damage and stealing at that site.
One of the most striking features of the report is that it illustrates how different the contexts and the problems are in the three trial sites, and yet, in each, the card is meant to provide a solution – this is ‘magic bullet’ thinking. For example, excessive drinking is a much larger problem in the East Kimberley than in the other two sites, whereas illegal drugs are much more of a problem in Ceduna and even more so in the Goldfields (p65 and p71). In the Goldfields there are also big differences between Indigenous and non-Indigenous participants in relation to illegal drug use, prompting the evaluators to comment:
“At the heart of these differences lies the question of how much a single design policy can be applied to diverse parts of the Australian population.” (p77)
Equally, gambling seems to be a greater problem in Ceduna than elsewhere, with 22% of CDC participants there reporting that they gambled, compared with only 6% in the East Kimberley and 11% in the Goldfields. The evaluation found that the CDC was contributing to a small reduction in gambling, but this reduction was greatest in the East Kimberley, where the problem was least serious and where police had also been breaking up outdoor card games in public places (p81), so the gambling may simply have moved elsewhere.
So, while it is a much better report, which acknowledges all the problems of attribution that had been raised by the critics of the ORIMA report (including myself), it is inevitably limited in its capacity to definitively state whether the card is or is not a success. The report states:
“If one were to try to summarise how the CDC policy outcomes are perceived, a fair assessment would be to say that: (i) many of the CDC outcomes were intended but some were not; (ii) some of the CDC outcomes were the same or very similar in all locations and for most CDC participants; and (iii) other outcomes were different in different locations and for different participants.” (p200)
This raises the question of how fair and ethical it is to perpetuate a policy when those promoting it cannot really separate its effects from a range of other interventions or know what sorts of effects it will have on different groups or individuals in different contexts – particularly when some of those effects are quite negative, and the people affected are already on very low incomes and may be quite vulnerable.
Content moderator: Sue Olney