When Bad Software Sent People To Jail
Over 900 sub-postmasters in the UK were convicted of crimes because of faulty software; the concern now is what faulty AI could do in the future.
Aaron’s Thoughts On The Week
“National security always matters, obviously. But the reality is that if you have an open door in your software for the good guys, the bad guys get in there, too.”
- Tim Cook
Between 1999 and 2015, over 900 sub-postmasters in the United Kingdom were prosecuted for stealing money from the local Post Offices they were tasked with operating. The main witness against them was a software application called Horizon developed by Fujitsu that was introduced by the British Post Office in 1999.
Horizon was developed for tasks such as accounting and stocktaking. Sub-postmasters, who are self-employed and run branch post offices under contract to the Post Office, began reporting balancing errors within weeks of the Horizon system being installed. Those concerns fell on deaf ears even as the Post Office prosecuted roughly one sub-postmaster every week for supposedly stealing money, based on Horizon's figures.
The Post Office nevertheless continued to claim that Horizon was reliable and did not disclose any knowledge of its problems while obtaining convictions. In 2009, however, Computer Weekly revealed issues with Horizon, and a sub-postmaster named Alan Bates formed the Justice for Subpostmasters Alliance (JFSA) to give the wronged sub-postmasters a voice. In 2012, under pressure from campaigners and Members of Parliament, the Post Office hired forensic accountants from Second Sight to investigate Horizon. Second Sight found that Horizon had faults that could cause accounting discrepancies, but the Post Office continued to deny any widespread problems with the software.
In total, the Post Office prosecuted 700 people, and another 283 cases were brought by other bodies. The legal cases had severe consequences for the victims and their families: criminal convictions, imprisonment, loss of livelihoods and homes, debts, and bankruptcies, which in turn led to stress, illness, family breakdown, and, tragically, even suicide. In 2017, 555 sub-postmasters took legal action against the Post Office, and in 2019, the Post Office agreed to pay them £58m in compensation. However, many of those convicted still suffered financial ruin. As of January 15, 2024, only 95 convictions had been overturned despite campaigners winning the right to have their cases reconsidered. The Metropolitan Police are now investigating the Post Office for potential fraud offenses related to the prosecutions.
On January 1, 2024, ITV broadcast a compelling four-part drama recounting the scandal (PBS also broadcast the drama). Unlike typical financial scandals driven by human greed or misconduct, this debacle stemmed from faulty software and an institution that refused to acknowledge the problem. Its lessons reverberate deeply among IT professionals, particularly in an era of transformative technologies such as generative AI.
A Cautionary Tale For The Future of AI
The emergence of ChatGPT and its many imitators has reshaped the global landscape. While the narrative surrounding generative artificial intelligence (AI) has often been tinged with negativity, many of us still see the positive outcomes it can produce. Jobs will undoubtedly be affected, and the speed of generative AI's advance is impressive. It has already disrupted numerous industries, for good and for ill, but I still believe it holds positive potential for everyone.
However, setting that aside, I find myself profoundly shocked and dismayed by the revelations about the Post Office Horizon scandal and its devastating impact on sub-postmasters, caused by the failure of a Fujitsu system intended to aid them. While those at the Post Office and Fujitsu may face legal repercussions in due course, such unchecked, overbearing technology underscores the need for human intervention to keep a technology juggernaut from wreaking havoc.
Dr. Dan McQuillan, Lecturer in Creative & Social Computing at Goldsmiths, University of London, has warned of the dangers of Artificial Intelligence, claiming that it will create more scandals like Horizon. In a recent interview, McQuillan stated:
“One thing the Horizon IT system and AI have in common is their fallibility; both are complex systems which generate unpredictable errors. However, while the bugs in Fujitsu’s bodged accounting system stem from shoddy software testing, AI’s problems are foundational. The very operations that give AI its ‘wow factor’, like recognizing faces or answering questions, also make it prone to new kinds of failure modes like out-of-distribution errors (think Tesla self-driving car crashes) and hallucinations.
“Moreover, thanks to the internal complexity of their millions of parameters, there’s no ironclad way to figure out why an AI system came up with a particular answer. AI doesn’t even need to get to court to create problems of legality; this inherent opacity is the antithesis of any kind of due process. Moreover, language models like ChatGPT make unreliable witnesses because they are actually trained to produce untruths. Such systems aren’t optimized on facts but on producing plausible output (a very different thing). Even when they sound right, they are literally making things up. Woe betides the unwary citizen that turns to AI itself for legal advice; many have already been roasted by unsympathetic judges when it turns out they cited fabricated case law.”
This scenario should serve as a prime case study for regulating generative AI, not to prevent its use but to ensure it is used correctly and that there are agreed-upon ways to address issues as they arise. There must be "humans in the loop" to safeguard against machines trampling over human lives. Instances of generative AI "hallucinations" being perceived as accurate representations of reality are likely to be numerous, and that is simply unacceptable. Educating the public about these risks should also be part of any regulatory framework.
With AI, processes that once took months can now be accomplished in under 48 hours, whether developing technology models, assembling project teams, creating professionally narrated videos, or building complex spreadsheets. Even the most optimistic individuals are astonished by generative AI's capabilities and appear to have wholeheartedly embraced it. But that speed means negative consequences can arrive much faster, too. One would have thought that British prosecutors would have started wondering WHY they were suddenly prosecuting a sub-postmaster every week when they had not been before Horizon's installation. Did they believe the software was so good that it kept uncovering this many people stealing, over and over again? And that, over a whole decade, sub-postmasters would not have caught on that Horizon would catch them, if it really was that good?
The need for human involvement is rapidly diminishing in some AI applications, but those concerned about AI's speed are beginning to push back against this lack of participation. Apps and automated systems designed primarily for the system's benefit rather than the user's are becoming widespread, with human discretion largely sidelined.
Canada's largest airline was recently ordered to pay compensation after its AI-powered chatbot misled a customer into buying a full-price ticket. Air Canada was further criticized for later attempting to distance itself from the error by claiming that the bot was "responsible for its own actions." Amid a broader push by companies to automate services, the case, the first of its kind in Canada, raises questions about how much oversight companies have over such chat tools. Consequently, some companies are now promoting their use of human intervention, not merely within the loop but at the forefront of their feedback mechanisms.
Consider if a similar Horizon scenario unfolded in 2024, with AI as the culprit. Some individuals in the loop might be reluctant to challenge the machine, since their livelihoods would be at stake. While hallucinations produced by generative AI remain an ongoing challenge, an independent regulatory body must be in place to swiftly arbitrate such matters.
While generative AI is not flawless and may introduce new challenges, judicious integration of AI and human partnership could minimize future miscarriages of justice. Though it comes too late for the sub-postmasters, the widespread adoption of AI, with humans either in the loop or overseeing processes, can be a force for good in the world. It is incumbent upon all stakeholders to begin developing standards and regulations for AI now, before it is too late. Otherwise, there may be more events like Horizon, only much worse, a frightening prospect given how destructive the Horizon scandal was and continues to be.
Robot News Of The Week
Agility Robotics lays off some staff amid commercialization focus
Agility Robotics confirmed layoffs, saying they are part of a company-wide focus on commercialization. The Oregon-based firm says it is focused on meeting extraordinary demand for bipedal robots across industrial use cases and is ramping up production of Digit while continuing to win top-tier global customers. Agility has made a number of high-profile hires over the past year.
Kiwibot acquires AUTO to strengthen delivery robot security
Kiwibot has acquired AUTO Mobility Solutions Co. to enhance its cybersecurity measures for AI-powered robotics. The move strengthens Kiwibot's market position and securely connects manufacturing expertise in Asia with AI development in the West.
Apple reportedly exploring personal home robots
Apple is reportedly considering the development of personal home robots following the cancellation of its electric vehicle project in February. The company's engineers are said to be exploring designs for a robot that can move around the home and a tabletop device that uses robotics to adjust a display screen. Apple has a growing history of false starts and rarely ventures far outside its core products, so color me skeptical for now; as noted, the company already abandoned an EV project it had talked up for years.
Robot Research In The News
Teaching robots to walk on the moon, and maybe rescue one another
A team of engineers and scientists, including NASA researchers, gathered on Mount Hood in Oregon to test a four-legged robot named Spirit as part of the LASSIE Project. The robot covered a variety of challenging terrains, using its spindly metal legs to amble over, across, and around shifting dirt, slushy snow, and boulders. The data collected from Spirit's practice runs will be used to train future robots for use on extraterrestrial surfaces such as the moon and planets in our solar system.
Toyota’s “Bubble-ized” Humanoid Grasps With Its Whole Body
The Toyota Research Institute has developed a new humanoid robot called Punyo that uses its soft body to manipulate objects that are difficult to handle with grippers alone. Unlike the conventional approach to robotic manipulation, which focuses on grippers, Punyo's design enables the robot to use its whole body. The robot is equipped with air bladders that provide sensing and compliance, allowing it to accomplish many tasks using its arms and chest. The institute aims to create hardware and software solutions that can enable future and existing robots to take full advantage of their entire bodies for manipulation.
Robot Workforce Story Of The Week
Yaskawa Motoman and RAMTEC Announce the Ohio Manufacturing Workforce Partnership
Yaskawa Motoman and RAMTEC have formed the Ohio Manufacturing Workforce Partnership to provide Ohio educators and students with STEM-aligned curriculum and training in Industry 4.0 technologies, aimed at creating a highly sustainable workforce development model. The strategic partnership will help bridge the skills gap in the manufacturing industry and provide proficient training and curriculum in automation and robotics. The project development funding is earmarked for $400,000 and will include the creation of in-lab and classroom instructional strategies, development of industry-recognized robotic certifications, and utilization of augmented and virtual reality technologies.
Robot Video Of The Week
China's UBTech has partnered with Baidu to give its new industrial Walker S humanoid the power of natural speech and real-time reasoning. The new Walker S was one of two humanoids onstage with executives in Hong Kong last year to celebrate the company's listing on the stock exchange, where it showed off its chatting, folding, and pick-and-place skills. The response time between a human vocal prompt and the Walker S's reply is quick. The robot uses ERNIE Bot and variable learning models to work out what task is required and how to complete it.
Upcoming Robot Events
Apr. 7-10 Haptics Symposium (Los Angeles, CA)
Apr. 14-17 International Conference on Soft Robotics (San Diego, CA)
May 1-2 The Robotics Summit & Expo (Boston, MA)
May 6-9 Automate (Chicago, IL)
May 6-10 The Plastic Show (Orlando, FL)
May 13-17 IEEE-ICRA (Yokohama, Japan)
June 4-5 Smart Manufacturing Experience (Pittsburgh, PA)
June 24-27 International Conference on Space Robotics (Luxembourg)
July 2-4 International Workshop on Robot Motion and Control (Poznan, Poland)
July 8-12 American Control Conference (Toronto, Canada)
Aug. 6-9 International Woodworking Fair (Chicago, IL)
Sept. 9-14 IMTS (Chicago, IL)
Oct. 1-3 International Robot Safety Conference (Cincinnati, OH)
Oct. 8-10 Autonomous Mobile Robots & Logistics Conference (Memphis, TN)
Oct. 15-17 Fabtech (Orlando, FL)
Oct. 16-17 RoboBusiness (Santa Clara, CA)
Oct. 28-Nov. 1 ASTM Intl. Conference on Advanced Manufacturing (Atlanta, GA)
Nov. 22-24 Humanoids 2024 (Nancy, France)