Google received 35,191 complaints from users and removed 93,550 pieces of content based on those complaints in August, the tech giant said in its monthly transparency report. In addition to reports from users, Google also removed 651,933 pieces of content in August as a result of automated detection.

Google had received 36,934 complaints from users and removed 95,680 pieces of content based on those complaints in July. It had removed 576,892 pieces of content in July as a result of automated detection.

The US-based company has made these disclosures as part of compliance with India’s IT rules that came into force on May 26. Google, in its latest report, said it had received 35,191 complaints in August from individual users located in India via designated mechanisms, and that the number of removal actions taken as a result of user complaints stood at 93,550.

These complaints relate to third-party content that is believed to violate local laws or personal rights on Google’s significant social media intermediary (SSMI) platforms, the report said. Some requests may allege infringement of intellectual property rights, while others claim violation of local laws prohibiting certain types of content on grounds such as defamation.
“When we receive complaints regarding content on our platforms, we assess them carefully,” it added.

The content removal was carried out under several categories, including copyright (92,750), trademark (721), counterfeit (32), circumvention (19), court order (12), graphic sexual content (12) and other legal requests (4).

Google explained that a single complaint may specify multiple items that potentially relate to the same or different pieces of content, and each unique URL in a specific complaint is considered an individual “item” that is removed.

Google said that in addition to reports from users, the company invests heavily in fighting harmful content online and uses technology to detect and remove it from its platforms.

“This includes using automated detection processes for some of our products to prevent the dissemination of harmful content such as child sexual abuse material and violent extremist content,” it said.

“We balance privacy and user protection to: quickly remove content that violates our Community Guidelines and content policies; restrict content (e.g., age-restrict content that may not be appropriate for all audiences); or leave the content live when it doesn’t violate our guidelines or policies,” it added.