Abusing NHS statistics

UK: ‘We don’t need no thought control’ – why the Gambling Commission should leave NHS stats alone

In recent years, the Gambling Commission has been on the receiving end of criticism from all sides of the so-called gambling debate. Last year, the MP Sir Philip Davies declared that the regulator was “out of control”, while the Social Market Foundation has described it as “not fit for purpose”. The Commission has not publicly endorsed either of these views – or advertised them on its website – presumably because it considers them to be untrue as well as unflattering. Last month, however, the Betting and Gaming Council (‘BGC’) was asked by the Commission to make claims about the prevalence of gambling harms which are probably false – and to publish them on its website.



In an email recently released under the Freedom of Information Act, the Commission wrote:
 
“We’ve been keeping an eye on use of GSGB [Gambling Survey for Great Britain] data and use of figures as the official statistic. We’ve noticed that BGC still refers to previous stats, it’s not a misuse of stat issue but we’d be keen for you to start using the official figure moving forwards.”
 

This invitation was politely declined by the BGC on the grounds that it has greater confidence in NHS statistics (which are accredited by the UK Statistics Authority) than in the Commission’s (which are not). The BGC is similarly unlikely to profess that its members are (to borrow from Blackadder) ‘head over heels in love with Satan and all his little wizards’; but the Commission can always try.  

 
The regulator’s entreaties should be considered in the light of the following circumstances:
i) the balance of evidence indicates that the GSGB substantially overstates levels of gambling and gambling harm in Britain
ii) the Gambling Commission knows this
iii) in asking the BGC to go along with the charade, the Commission is acting, at best, inconsistently
iv) the GSGB is already being used (and misused) by activists seeking to reopen the Government’s Gambling Act Review.



We examine each of these points in turn.  

 
1. The balance of evidence
The GSGB may be the new source of official statistics, but this does not mean it provides a reliable picture of gambling prevalence in Britain. To believe that it does, it is necessary to subscribe to the following:
i) Every single official statistic on gambling and harmful gambling produced over the last 17 years – by the National Health Service (‘NHS’), the Department for Culture, Media and Sport and the Gambling Commission itself – has been substantially wrong
ii) The NHS has serially misreported the prevalence of health disorders in general – and continues to do so
iii) Audited data on actual customer numbers using licensed operators is incorrect (or there is a massive black market that failed to show up in previous studies and of which the Commission was previously unaware)
iv) The opinion of the independent review (conducted by Professor Sturgis of the London School of Economics) that the GSGB may substantially overstate true levels of gambling and gambling harm is misguided
 

To believe that all these things are true (and to cajole others into professing the same) requires more than blind faith and a sheriff’s badge. Tellingly, the Gambling Commission does not have very much confidence in the GSGB itself; and has issued guidance that key results should be used “with some caution” or not at all.


2. Withholding evidence (again)
The Gambling Commission’s defence of the GSGB has largely consisted of attacks on NHS statistics, claiming that they have under-reported rates of ‘problem gambling’. While scrutiny is important, undermining accredited official statistics on health is a step not to be taken lightly. Some sort of evidence is required. For this, the Commission has relied upon a 2022 study which claimed that social desirability response bias (i.e., the fact that people sometimes answer survey questions in what they consider to be an acceptable rather than an accurate fashion) caused under-reporting of ‘problem gambling’ in NHS surveys. This ‘evidence’ was thoroughly debunked by Professor Sturgis as part of his independent review – but for reasons known only to the Commission, the analysis was suppressed. It required a Freedom of Information Act request to secure the release of the information.

This is not the first time that the Commission has prevented publication of critical evidence – having previously withheld survey data on customer opposition to affordability checks. Disclosures also reveal that the Commission was warned by its lead adviser, Professor Heather Wardle, that social desirability response bias was likely to be a “marginal factor” in explaining differences between the GSGB and Health Surveys (and that the dominant factor of topic salience bias resulted in over-reporting in the GSGB).
 
3. Two-tier thought policing?
In recent years, various parties have taken highly selective approaches to the use of ‘problem gambling’ statistics – often ignoring official estimates in favour of more convenient alternatives. Last year, the National Institute for Economic and Social Research did so in a report funded by a Gambling Commission settlement – using a rate two or three times higher than the official statistic. There is no suggestion that the Commission objected to this. In public consultations, the Commission itself relied on ‘problem gambling’ prevalence rates from the 2018 Health Survey for England rather than lower figures from the 2021 edition (i.e., the official statistics at that time). In a speech in Rome last month, the chief executive of the Commission, Andrew Rhodes, criticised those who wished to “turn the clock back” to previous official statistics – and in the very same speech cited participation estimates from ‘previous official statistics’.

 
4. The weaponisation of research
The importance of all of this has been amply demonstrated in recent weeks. Both the Institute for Public Policy Research and the Social Market Foundation cited the GSGB’s inflated rates of ‘problem gambling’ in support of demands for ruinous and self-defeating tax rates (as high as 66% of revenue); while GambleAware has used the survey findings to call for tobacco-style health warnings to be slapped on all betting and gaming adverts (including those for the National Lottery). The Commission appears, therefore, to be encouraging the use of inaccurate statistics on gambling harms in the knowledge that they will be used in support of an anti-gambling agenda.

Perhaps Sir Philip had a point after all…

REGULUS PARTNERS NOVEMBER 2024

A Very Public Deception: On the manufacture of mortality statistics in gambling

Part II – Why did public health get things so badly wrong?

In the first in this series of articles, we examined the problems with claims made by state bodies – specifically Public Health England (‘PHE’) and the Office for Health Improvement and Disparities (‘OHID’) – that up to 496 deaths by suicide each year in England are associated with ‘problem gambling’. We demonstrated that the basis for these claims is irretrievably flawed. Analysis of the Swedish dataset upon which they rely concluded that “gambling disorder did not appear to be a significant risk factor for the increase in suicide” (Karlsson, 2023). PHE and OHID researchers overlooked critical research findings and clear warnings about the advisability of their approach. While gambling disorder has long been recognised as a risk factor for self-harm, the estimates published by PHE-OHID are categorically unsound.

Read Part One: Lost in Translation?

In this second article in the series, we attempt to understand why PHE and the OHID persisted in following such a clearly problematic approach in the face of strong evidence of its unsuitability; we examine a number of issues of governance; and consider whether officials may have deliberately misled policy-makers and the public.

The Tobacco Road: why did PHE make such unsound claims?

In May 2018, at the conclusion of its review into gaming machines and social responsibility, the British Government’s Department for Culture, Media and Sport asked PHE to “conduct an evidence review of health aspects of gambling-related harm to inform action on prevention and treatment”. More than three years later, in September 2021, PHE responded with the publication of five reports on the subject. One of these reports (‘The economic and social cost of harms’) claimed costs of £1.27bn a year associated with ‘problem gambling’ – with roughly 50% attributable to deaths by suicide.

It was this rather speculative document, rather than PHE’s more robust quantitative review of evidence from NHS Health Surveys, that officials chose to emphasise – prompting Britain’s Gambling Commission to surmise that PHE’s goal was “to ensure gambling is considered as a public health issue.”

The Gambling Commission had already been given a glimpse of what “a public health issue” would entail. In a draft press release (seen by the Commission), PHE officials called for:

“a public health approach to gambling…similar to how we tackle tobacco consumption or unhealthy food consumption…”.

In the summer of 2022, the PHE researchers (now transferred to OHID) spelt out what this tobacco-style offensive would involve. Their paper, published in the Lancet Public Health, contained 81 measures for state intervention in the gambling market. The list included prohibitions on: all gambling advertising and marketing (including at racecourses); all in-play betting; and the sale of wine, beer and spirits in bingo clubs and casinos. It also included limits on the number of people permitted on a website at any one time, annual tax increases above the rate of inflation and even ‘plain packaging’ for all gambling products (no colours, logos or images permitted on playing cards, gaming machines, National Lottery tickets and so on).

There were other indications that PHE’s endeavours were not entirely objective – or morally neutral. In 2020, for example, its project leader stated that “more research is required to support advocacy and action” against gambling – hardly a statement of impartiality or scientific rigour. Meanwhile, documents made available under the Freedom of Information Act (‘FOIA’) reveal that PHE had agreed to be part of a research group set up by the activist charity, Gambling With Lives (‘GwL’) during the review period – an engagement it failed to disclose within its report.


Why did OHID publish its report…and did officials mislead?

In January 2023, the Department of Health and Social Care (‘DHSC’) withdrew the PHE report and published an updated set of cost estimates – this time in the range of £1.05bn to £1.77bn a year (underpinned by a choice of 117 or 496 deaths). OHID described the decision to review PHE’s work as “a standard approach for previously published reports”; but this seems to be untrue. The decision to re-examine the PHE cost estimates alone (none of the other four reports was reviewed – despite the presence of errors) was taken in July 2022 and announced to Parliament shortly afterwards. We have found no evidence that reviewing state agency reports within ten months of publication is a “standard approach” or that any such policy exists.

Disclosures made under FOIA reveal the true reason for review. On 26th July 2022, an unnamed DHSC official circulated a memorandum, stating:

“We are going to need to make changes to two of the evidence review reports as an error has been spotted, and as it’s a change to results, its [sic.] probably what you would classify as a major change.”

Given that the PHE report contained quite a few errors, it is difficult to know which particular mistake prompted re-examination; but the decision was certainly not part of a “standard approach”. This raises the possibility that OHID may have deliberately misrepresented the grounds for review.

The Gambling Commission and the Advisory Board for Safer Gambling were both told by OHID researchers that “nothing in the report has changed substantially”; but this is incorrect. In fact, every single line item in the OHID cost estimate differed from the PHE version – in some cases substantially. Its estimate of direct costs to the Government was £234.1m lower than PHE’s – a reduction of more than one-third. This was masked by the introduction of a new area of intangible costs relating to depression, and by several revisions to the suicide calculation. OHID’s estimates were also based on a ‘harmed population’ 59% smaller than PHE’s. As chart 1 (below) shows, the claim that ‘nothing changed substantially’ appears misleading.

In August 2022, the then Health Minister, Maggie Throup MP, advised Parliament that the PHE report would be reviewed and that the calculations underpinning its estimates would be published. The review, however, has never been made public and – according to disclosures made under FOIA – no such document is held by the DHSC. Contrary to the minister’s pledge, the PHE calculations have still not been released. To do so would reveal a number of errors, such as the fact that PHE’s suicide figure was based on a 21% over-statement of the population prevalence of ‘problem gambling’.

The mystery of the OHID expert panel

OHID was at least prepared to admit – with a heavy dose of understatement – that its estimates were “uncertain”. It relied on a study of hospital patients in Sweden with a clinical diagnosis of gambling disorder (among many other health issues) to estimate the health risks for people in England with no diagnosed mental or physical health conditions whatsoever. In consequence, OHID leaned heavily on the opinion of its expert panel of health economists and academics who, it is claimed, approved the approach.

There are, however, two problems where this opinion is concerned. The first is that one member of the expert panel, Dr Henrietta Bowden-Jones of the NHS, had publicly criticised the PHE-OHID methodology. At a fringe meeting of the Conservative Party Conference in September 2022, Dr Bowden-Jones stated: “we cannot extrapolate from Swedish studies, from Norwegian studies – it doesn’t work”.


The second issue is that the meeting of the expert panel – to discuss the most significant matter in the OHID report – is entirely undocumented. In February 2023, the DHSC admitted that:

“there was no agenda or papers shared before the meeting or minutes circulated afterwards”.

It is difficult to understand how this panel of experts might have been expected to review OHID’s work without access to any documents; and why officials did not consider it necessary to record the panel’s deliberations on this critical point.

Why did public health get things so badly wrong?

Inappropriate behaviour?

The task attempted by PHE-OHID was always going to be challenging, given the dearth of actual data available. This does not explain or excuse the large number of errors and omissions made by researchers and officials:

  • PHE and OHID ignored warnings by Karlsson & Håkansson about the representativeness of the sample in the 2018 Swedish study (upon which they relied);
  • PHE and OHID ignored findings in the 2018 study of high rates of mental and physical health comorbidities;
  • PHE and OHID ignored the follow-up study by the Swedish researchers (Håkansson & Karlsson, 2020), which found that risk of suicide attempt was significantly mediated by the presence of other disorders; and
  • PHE and OHID ignored the opinion of Dr Anna van der Gaag, chair of the Gambling Commission’s Advisory Board for Safer Gambling, that the PHE calculation was likely to be inaccurate.

A large number of issues with the PHE-OHID reports were brought to the attention of its Director-General, Jonathan Marron, in July 2022 and again in September 2023. On both occasions, Mr Marron promised to investigate. Last year, he wrote that he would provide “a proper explanation” for the errors and methodological flaws; but more than seven months later, none has been forthcoming. In what may well be a breach of the Civil Service Code, OHID officials resorted to ad hominem disparagement of their critics – including one national news media outlet – rather than engage constructively.

What is particularly disturbing about the PHE-OHID scandal is not the fact that researchers (presented with an unenviable task) made so many mistakes; but that state officials proved so unwilling to confront them – responding with hostility to legitimate scrutiny.

Next week, in our third article, we will consider the behaviour of others in positions of political or moral authority who variously connived in the deception or turned a blind eye to it. We will reflect on what this means for their future involvement in research and policy-making.

Dan Waugh

May 17th 2024

Regulus Partners

If the Government does not want consumers to be asked to produce bank statements and tax returns in order to spend their own money, why is this happening?


Great Britain: Regulation – Will Commission’s slight return strike a blue tone with bettors?

“You don’t have to have the solution,

You’ve got to understand the problem,

And don’t go hoping for a miracle,

All this will fade away.”

‘Slight Return’, the Bluetones (1995)

The long-awaited publication of regulatory policy on affordability checks for gambling consumers in Great Britain may have provided clarity of a sort – but last week’s announcement was also notable for what it did not contain. 

The Gambling Commission’s intention to run a six-month pilot of financial risk checks had been well-trailed. It was surprising therefore that its announcement contained so little information about how the tests would be conducted; by whom; and what criteria would be used to determine success. In 2017, the Gambling Commission’s Responsible Gambling Strategy Board published an Evaluation Protocol, based on the principles of ‘robustness and credibility’, ‘proportionality’, ‘independence’ and ‘transparency’. As things stand, it is unclear to what extent – if at all – the Commission intends to comply with its own protocol (or indeed the Government’s Magenta Book).

The Protocol states, for example, that good evaluation “should include a clear articulation of what an intervention is intended to do, the outcomes it is intended to achieve, and how it is envisaged these outcomes will come about”; and also that it “has data collection which is planned before the intervention is implemented – so that, if necessary, baseline data can be collected before the policy starts.”  

The Gambling Commission has stated that the purpose of the new regulation is to create greater consistency for consumers; to regularise the patchwork quilt of trigger points and thresholds for checks that currently exists. At the same time, it has been remarkably incurious as to why this system of checks came into being in the first place and what effects it has had. If the Government does not want consumers to be asked to produce bank statements and tax returns in order to spend their own money, why is this happening? How has the existing system affected consumers, the functioning of the licensed and unlicensed markets and the finances of British horseracing? Without understanding this, how will we know that the Commission’s new system is better? Without robust analysis of the problem the policy is intended to solve, how will the success or otherwise of the pilot and any succeeding regulation be assessed?

The results of the 2021 ‘short survey’ into consumer attitudes towards affordability checks are another significant omission. Last year, the Gambling Commission committed to publish “the results from the survey”, which was completed by 12,125 individuals, thought mainly to be bettors (horserace bettors in particular were encouraged to submit their views). Instead, the Commission has published what might best be described as a narrative description of responses to the overall call for evidence – which is not at all the same thing. The market regulator’s reluctance to publish the actual results will prompt speculation that it perceives the views of consumers to be inconvenient or of marginal relevance to its mission. The Commission may find that a failure to do what it said it would hinders rather than helps its goals of increasing transparency and building trust.

Last year, the Gambling Commission denied a request, made under the Freedom of Information Act, to release the survey results. It claimed that the “necessary preparation and administration involved in publishing the information” outweighed the “legitimate public interest in promoting the accountability and transparency of public authorities”. What had seemed a doubtful excuse at the time now seems highly implausible. It is difficult to believe that the composition of a single webpage on responses to the 2020/2021 call for evidence involved very much “preparation and administration”. Having been forced to wait for more than three years for publication, this single page may strike the thousands of people and hundreds of organisations who took the trouble to respond as a rather slight return. Those who believe that the Commission has no interest in the views of recreational consumers are likely to feel vindicated. In a paper published in 1999, Bill Eadington, the father of modern gambling studies, described the way that gamblers are often treated as “customers whose demands are not fully respected in the public policy formulation process.” He had a point.

Dan Waugh

E:  dan.waugh@reguluspartners.com

W:  www.reguluspartners.com