The argument is deeply flawed and self-contradictory. It claims the software doesn't recognise people of colour at all, yet also claims that people of colour are picked up by law enforcement more often because of it. It calls the software perfect while insisting the people behind it are biased, and it simultaneously argues that the software has no heart or humanity and can therefore take inhuman actions. It also notes that the software is largely built by brown engineers even though the companies are owned by white people, and that those engineers usually have no say in which datasets are used to train the AI they are building.