An immigration barrister could face a disciplinary probe after a judge ruled he had used AI tools such as ChatGPT to conduct his legal research.
A tribunal heard that a judge was left baffled when Chowdhury Rahman presented submissions citing cases that were “entirely fictitious” or “wholly irrelevant”.
The judge found that Mr Rahman had also attempted to “hide” this when questioned, and had “wasted” the tribunal’s time.
The incident occurred while Mr Rahman was representing two Honduran sisters who were claiming asylum in the UK on the basis that they were being targeted by a violent criminal gang called Mara Salvatrucha (MS-13).
After arriving at Heathrow airport in June 2022, they claimed asylum and said during screening interviews that the gang had wanted them to be “their women”.
They also claimed that gang members had threatened to kill their families and had been looking for them since they left the country.
In November 2023, the Home Office refused their asylum claim, stating that their accounts were “inconsistent and unsupported by documentary evidence”.
They appealed to the First-tier Tribunal, but the appeal was dismissed by a judge who “did not accept that the appellants were the targets of adverse attention” from MS-13.
It was then appealed to the Upper Tribunal, with Mr Rahman acting as their barrister. During the hearing, he argued that the judge had failed to adequately assess credibility, made an error of law in assessing documentary evidence, and failed to consider the impact of internal relocation.
However, these claims were similarly rejected by Judge Mark Blundell, who dismissed the appeal and ruled that “nothing said by Mr Rahman orally or in writing establishes an error of law on the part of the judge”.
In a postscript to the judgment, however, Judge Blundell referred to “significant problems” that had arisen in the appeal regarding Mr Rahman’s legal research.
Of the 12 authorities cited in the appeal, the judge found on reading them that some did not exist, and that others “did not support the propositions of law for which they were cited in the grounds”.
Upon investigating this, he found that Mr Rahman appeared “unfamiliar” with legal search engines and was “consistently unable to grasp” where to direct the judge in the cases he had cited.
Mr Rahman said that he had used “various websites” to conduct his research, with the judge noting that one of the cases cited had recently been wrongly deployed by ChatGPT in another legal case.
Judge Blundell noted that, given Mr Rahman had “appeared to know nothing” about any of the authorities he had cited, some of which did not exist, all of his submissions were “misleading”.
“It is overwhelmingly likely, in my judgment, that Mr Rahman used generative Artificial Intelligence to formulate the grounds of appeal in this case, and that he attempted to hide that fact from me during the hearing,” Judge Blundell said.
“He has been called to the Bar of England and Wales, and it is simply not possible that he misunderstood all of the authorities cited in the grounds of appeal to the extent that I have set out above.”
He concluded that he was now considering reporting Mr Rahman to the Bar Standards Board.