This study offers a systematic literature review on the use of soft computing techniques for abuse detection in the complex cyber-physical-social big data systems of cognitive smart cities. The authors' objective is to define and delimit the diverse concept of abuse, and to systematize techniques for automatic abuse detection on social media and for real-time abuse detection using IoT.
According to the World Health Organization, interpersonal violence is a leading cause of impaired quality of life and mortality in the world, especially among people between 15 and 44 years of age. Interpersonal abuse and violence are a pattern of behaviour used to establish power and control over another person through fear and intimidation, often including the threat or use of violence. According to the authors, it can take many forms, such as verbal or emotional, physical, sexual, digital and economic abuse, and stalking (online and in person). The increasing availability of affordable data services and the growth of social media presence have had some uninhibited effects: online users have found wrongful and unlawful ways to harm and humiliate individuals through hateful comments on online platforms and apps. Technology allows perpetrators to remain anonymous, hard to trace and insulated from confrontation. At the same time, the persistence of content, the size of the audience and the speed at which damage spreads can make cyber abuse even more harmful than face-to-face abuse, causing serious mental health and wellbeing issues in victims and leaving them feeling overwhelmed.
According to this paper, researchers worldwide have been trying to develop new ways to detect online abuse, manage it and reduce its prevalence on social media. Advanced analytical methods and computational models for efficiently processing, analysing and modelling such bitter, taunting, abusive or otherwise negative content in images, memes or text messages are considered imperative in this research.
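The paper surveys many such models rather than prescribing one. As a minimal illustration of how text-based abuse detection can work, the sketch below implements a multinomial naive Bayes classifier over bag-of-words features; the class labels and the toy training corpus are illustrative assumptions, not data or methods taken from the reviewed studies.

```python
import math
from collections import Counter

def tokenize(text):
    # Lowercase and split on any non-alphanumeric character.
    return "".join(c if c.isalnum() else " " for c in text.lower()).split()

class NaiveBayesAbuseClassifier:
    """Multinomial naive Bayes with add-one (Laplace) smoothing for a
    two-class abusive/benign text classification task."""

    def __init__(self):
        self.word_counts = {"abusive": Counter(), "benign": Counter()}
        self.doc_counts = Counter()
        self.vocab = set()

    def train(self, samples):
        # samples: iterable of (text, label) pairs, label in {"abusive", "benign"}.
        for text, label in samples:
            self.doc_counts[label] += 1
            for tok in tokenize(text):
                self.word_counts[label][tok] += 1
                self.vocab.add(tok)

    def predict(self, text):
        total_docs = sum(self.doc_counts.values())
        scores = {}
        for label, counts in self.word_counts.items():
            # Log prior plus smoothed log likelihood of each token.
            score = math.log(self.doc_counts[label] / total_docs)
            total_words = sum(counts.values())
            for tok in tokenize(text):
                score += math.log((counts[tok] + 1) / (total_words + len(self.vocab)))
            scores[label] = score
        return max(scores, key=scores.get)

# Tiny hand-made toy corpus, purely for illustration.
clf = NaiveBayesAbuseClassifier()
clf.train([
    ("you are worthless and stupid", "abusive"),
    ("nobody likes you loser", "abusive"),
    ("go away you are pathetic", "abusive"),
    ("have a great day everyone", "benign"),
    ("thanks for sharing this article", "benign"),
    ("great photo love it", "benign"),
])
print(clf.predict("you are a pathetic loser"))  # → abusive
```

Production systems in the surveyed literature typically replace this with deep learning models, but the pipeline shape (tokenize, score per class, pick the highest-scoring label) is the same.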
This study helps to establish the need to capture situational context and awareness in real time, and to develop both proactive and reactive safety mechanisms that mitigate the risks of online abuse in a cognitive smart city.
The analysis is based on the following research questions:
- RQ1: Which online activities on social media qualify as abusive behaviour?
- RQ2: Which in-person abusive activities can be captured using cyber-physical systems?
- RQ3: Which soft computing techniques have been used for automatic cyber abuse, cyber-hate, cyberbullying and real-time abuse detection?
- RQ4: Which datasets have been used for automatic cyber abuse, cyber-hate and cyberbullying detection?
- RQ5: Which real-time variables have been recorded to detect abuse in tangible environments?
The literature studied in this paper is analysed under the following categories:
- Cyber abuse (CA) studies on SMPs, which have further been categorized as cyberhate (CH) and cyberbullying (CB)
- Real-time abuse (RTA) studies using IoT-based cyber-physical systems
According to the authors, the big data from cyber-physical systems and social media platforms can be used for predictive policing to identify potential criminal activities, abuse, offenders, and victims of abuse.
Social media and IoT-based ubiquitous technologies in the connected cognitive smart city can provide a prompt and reliable solution to protect individuals from real or perceived harm. In this study, the studies on cyber abuse detection in the big data of user-generated content on social media are reviewed first. These studies are analysed, compared and categorized into cyber-hate, cyberbullying and generic cyber abuse categories. Secondly, the studies on real-time abuse are reviewed to understand the role of sensor-based cyber-physical device data in automated detection.
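As an illustration of the second strand, sensor-based device data can be screened in real time with very simple statistics. The sketch below flags anomalous readings with a rolling z-score; the choice of signal (ambient sound level), the window size and the threshold are illustrative assumptions, not parameters reported in the reviewed studies.

```python
import statistics
from collections import deque

def detect_spikes(readings, window=5, threshold=2.0):
    """Return the indices of readings that sit more than `threshold`
    standard deviations above the rolling mean of the previous
    `window` readings (a simple z-score anomaly detector)."""
    history = deque(maxlen=window)
    alerts = []
    for i, value in enumerate(readings):
        if len(history) == window:
            mean = statistics.mean(history)
            stdev = statistics.pstdev(history) or 1e-9  # guard zero variance
            if (value - mean) / stdev > threshold:
                alerts.append(i)
        history.append(value)
    return alerts

# Simulated decibel readings: a sudden shout against a quiet baseline.
print(detect_spikes([60, 61, 59, 60, 62, 95, 60]))  # → [5]
```

A real deployment would fuse several such signals (audio, motion, location) before raising an alert, but the per-sensor logic follows this pattern.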
The purpose of this study is to evaluate the progress made so far and to identify trends and research gaps in order to ascertain the future scope of research within the domain. The study establishes that the goal of applying soft computing (machine learning, deep learning) at a given level (the device, the app, or ideally the cloud) should be to preempt victimization on social media by (a) identifying, and then blocking, banning or quarantining, the most problematic users and accounts, or (b) immediately collapsing or deleting content that algorithms predictively flag and label as abusive, thereby using technology to better serve communities.
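The two response strategies (a) and (b) can be sketched as a single moderation policy: a classifier flag decides between leaving content alone, hiding the content, or escalating to the account level once a user accumulates repeated violations. The strike limit and action names below are hypothetical placeholders, not values from the study.

```python
def moderate(prior_strikes, flagged_abusive, strike_limit=3):
    """Map a classifier flag plus a user's prior strike count to a
    moderation action. Thresholds and labels are illustrative only."""
    if not flagged_abusive:
        return "allow"
    if prior_strikes + 1 >= strike_limit:
        return "ban_user"      # (a) act on the most problematic accounts
    return "hide_content"      # (b) collapse the predictively flagged content

print(moderate(prior_strikes=2, flagged_abusive=True))  # → ban_user
```

Running such a policy on-device, in-app or in the cloud trades off latency against model capacity, which is why the study notes the cloud as the ideal level for the heavier models.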
You can find the paper here.