School shootings in the United States are not rare: on average, one occurs almost every week. In a recent shooting in May of this year at a high school in Texas, ten students and teachers died, yet another reason to intervene. New ways are now being sought to curb this violence among pupils, with new technologies and artificial intelligence at the forefront.
Prevention with artificial intelligence
The University of Alabama School of Continuing Education and Firma, a crisis management company, have developed a prevention program with AI: the BERTHA project. With AI, they hope to find patterns in signals, such as pupils' behavior or language use, that predict possible violent behavior. In this way, children who need help or intervention can be identified before they become suicidal or violent.
Often, the signals that indicated a child might show violent behavior, such as online bullying or an interest in weapons, are only examined after an incident. The idea behind the AI is precisely to detect these kinds of signals in advance. AI can help to quickly search big data, including social media and forums.
Signs of violent behavior
The AI analyzes language use, context, location, and related links, and on that basis compiles a list of pupils for possible intervention. From there, counselors, psychologists, and teachers can step in to examine whether these children really pose a threat or whether the flagged signal is harmless. It is important that a team makes these decisions, because a single person may miss signals.
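The screening step described above can be illustrated with a toy sketch. To be clear, this is not the BERTHA project's actual method: the keyword list, weights, threshold, and example posts below are all hypothetical, and a real system would rely on far more sophisticated models and data.

```python
# Toy illustration of scoring pupils' posts for hypothetical warning
# signals and ranking them into a list for human review.
# All keywords, weights, and data below are invented for illustration.

SIGNALS = {"weapon": 3, "revenge": 2, "hate school": 2, "bully": 1}

def risk_score(posts):
    """Sum the weights of hypothetical warning signals found in posts."""
    text = " ".join(posts).lower()
    return sum(weight for keyword, weight in SIGNALS.items() if keyword in text)

def flag_for_review(pupils, threshold=3):
    """Return pupils whose score meets the threshold, highest first,
    so that counselors and psychologists can examine them further."""
    scored = [(name, risk_score(posts)) for name, posts in pupils.items()]
    flagged = [entry for entry in scored if entry[1] >= threshold]
    return sorted(flagged, key=lambda entry: entry[1], reverse=True)

# Hypothetical example data
pupils = {
    "pupil_a": ["had a great day", "going to practice"],
    "pupil_b": ["I want revenge", "looking at a weapon online"],
}
print(flag_for_review(pupils))  # only pupil_b is flagged
```

The point of the sketch is the division of labor the article describes: the program only produces a ranked list, and the decision about whether a flagged pupil actually poses a threat remains with people.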
The AI performs an initial screening, but ultimately it is up to people to determine whether and how an intervention should take place. Moreover, the AI still faces challenges: sarcastic and emotional language use, for instance, is difficult for it to identify. An advantage, however, is that such a system will not readily dismiss signals or write them off as typical of a particular student. A person, for example, might remove a pupil from the risk list because he or she often has outbursts; a computer will not do this.