What is the difference between Israel and China to make it morally right for you to support one genocidal regime and not the other?
Can you elaborate?
I just did.
None of this is my opinion, it’s just how the world works LOL
This may be of some use to you.
https://www.merriam-webster.com/dictionary/elaborate
Which Government?
I already answered this one as well.
The gov typically needs some sort of warrant, and they need approval from the country they're requesting it from.
United States of America? Canada? North Korea? China? Australia? Saudi Arabia? South Africa? Brazil?
The point is the app was designed for secure communication, specifically protection from corrupt governments, which is why it is problematic to hand over user data whenever an individual is breaking a law in that country.
Or to use the example from the top:
So who gets to pick what’s a lawful request and criminal activity? It’s criminal in some states to seek an abortion or help with an abortion, so would they hand out the IPs of those “criminals”? Because depending on who you ask some will tell you they’re basically murderers. And that’s just one example.
None of this is my opinion, it’s just how the world works LOL
Can you elaborate?
Not necessarily, but kinda. The gov typically needs some sort of warrant, and they need approval from the country they're requesting it from.
Which Government?
Pardon my ignorance, as this is my first time using the internet, but I am pretty sure that Governments around the planet do not share a universal set of laws or procedures for enforcement.
The country in which the perpetrator lives or the crime was committed. First time using the internet?
In your opinion, all companies must disclose the personal information of customers whenever a Government says "This person broke the law"?
The…law?
In which country?
I agree with you there. The second-hand market is wonderful for finding ridiculous deals on things people just want gone.
The only problem with the second-hand market is the effort needed for it. That effort keeps people from considering it a viable option for goods, in the same way the effort to find another store made OP beeline to Amazon.
I enjoy this narrative of "being forced" to go against one's own morals and principles by big bad companies because one just absolutely has to have a product for as cheap as possible.
You went to two stores and then straight to Amazon. That doesn't mean they have a monopoly, that means you really didn't try that hard to find an alternative.
If you think you have no other choice you are right because you stopped looking for one.
It still is.
The point is that your argument falls apart considering it kept being propped up by your assertion that kids can just use the library computers if they’re too poor to have a computer at home.
But that doesn’t matter; you’re not actually here to debate in good faith.
Debate Pervert: “When our position on an issue is no longer based on curiosity and the desire for the truth, but a desire to win a debate. When someone reaches this stage of discourse, there’s no need to try and persuade them.”
That wasn't my argument, and it is still a viable option. Libraries still exist.
My point and argument was: It is the schools decision on what happens with school hardware.
Have anything to say to my point without being combative? Or do I add you to the pile of people not worth interacting with in the future?
Sounds like a bigger problem than schools monitoring the use of devices issued to children.
Might want to get that sorted.
Allow me to make it easier for you by posting the entire article, and you can point me to where it says "the use of this invasive software is required in order to attend public school".
Imagine your search terms, key-strokes, private chats and photographs are being monitored every time they are sent. Millions of students across the country don’t have to imagine this deep surveillance of their most private communications: it’s a reality that comes with their school districts’ decision to install AI-powered monitoring software such as Gaggle and GoGuardian on students’ school-issued machines and accounts. As we demonstrated with our own Red Flag Machine, however, this software flags and blocks websites for spurious reasons and often disproportionately targets disadvantaged, minority and LGBTQ youth.
The companies making the software claim it’s all done for the sake of student safety: preventing self-harm, suicide, violence, and drug and alcohol abuse. That is a noble goal, given that suicide is the second-highest cause of death among American youth aged 10 to 14, but no comprehensive or independent studies have shown an increase in student safety linked to the use of this software. Quite the contrary: a recent comprehensive RAND research study shows that such AI monitoring software may cause more harm than good.
That study also found that how to respond to alerts is left to the discretion of the school districts themselves. Due to a lack of resources to deal with mental health, schools often refer these alerts to law enforcement officers who are untrained and ill-equipped to deal with youth mental health crises. When police respond to youth who are having such episodes, the resulting encounters can lead to disastrous results. So why are schools still using the software, when a congressional investigation found a need for “federal action to protect students’ civil rights, safety, and privacy”? Why are they trading away their students’ privacy for a dubious-at-best marketing claim of safety?
Experts suggest it's because these supposed technical solutions are easier to implement than the effective social measures that schools often lack the resources for. I spoke with Isabelle Barbour, a public health consultant who has experience working with schools to implement mental health supports. She pointed out that there are considerable barriers to families, kids, and youth accessing health care and mental health supports at a community level. There is also a lack of investment in supporting schools to effectively address student health and well-being. This leads to a situation where many students come to school with unmet needs, and those needs impact students' ability to learn. Although there are clear and proven measures that work to address the burdens youth face, schools often need support (time, mental health expertise, community partners, and a budget) to implement them. Edtech companies market largely unproven plug-and-play products to educational professionals who are stretched thin and seeking a path forward to help kids. Is it any wonder that schools sign contracts which are easy to point to when questioned about what they are doing with regard to the youth mental health epidemic?
One example: in its marketing to school districts, Gaggle claims to have saved 5,790 student lives between 2018 and 2023, according to shaky metrics they themselves designed. All the while, they keep the inner workings of their AI monitoring secret, making it difficult for outsiders to scrutinize and measure its effectiveness.

We Give Gaggle an “F”
Reports of errors and of the AI flagging's inability to understand context keep popping up. When the Lawrence, Kansas school district signed a $162,000 contract with Gaggle, no one batted an eye: it joined a growing number of school districts (currently ~1,500) nationwide using the software. Then school administrators called in nearly an entire class to explain photographs Gaggle’s AI had labeled as “nudity”, because the software wouldn’t tell them:
“Yet all students involved maintain that none of their photos had nudity in them. Some were even able to determine which images were deleted by comparing backup storage systems to what remained on their school accounts. Still, the photos were deleted from school accounts, so there is no way to verify what Gaggle detected. Even school administrators can’t see the images it flags.”
Young journalists within the school district raised concerns about how Gaggle’s surveillance of students impacted their privacy and free speech rights. As journalist Max McCoy points out in his article for the Kansas Reflector, “newsgathering is a constitutionally protected activity and those in authority shouldn’t have access to a journalist’s notes, photos and other unpublished work.” Despite having renewed Gaggle’s contract, the district removed the surveillance software from the devices of student journalists. Here, a successful awareness campaign resulted in a tangible win for some of the students affected. While ad-hoc protections for journalists are helpful, more is needed to honor all students' fundamental right to privacy against this new front of technological invasions.

Tips for Students to Reclaim their Privacy
Students struggling with the invasiveness of school surveillance AI may find some reprieve by taking measures and forming habits to avoid monitoring. Some considerations:
Consider any school-issued device a spying tool.
Don’t try to hack or remove the monitoring software unless specifically allowed by your school: it may result in significant consequences from your school or law enforcement.
Instead, turn school-issued devices completely off when they aren’t being used, especially while at home. This will prevent the devices from activating the camera, microphone, and surveillance software.
If not needed, consider leaving school-issued devices in your school locker: this will avoid depending on these devices to log in to personal accounts, which will keep data from those accounts safe from prying eyes.
Don’t log in to personal accounts on a school-issued device (if you can avoid it - we understand sometimes a school-issued device is the only computer some students have access to). Rather, use a personal device for all personal communications and accounts (e.g., email, social media). Maybe your personal phone is the only device you have to log in to social media and chat with friends. That’s okay: keeping separate devices for separate purposes will reduce the risk that your data is leaked or surveilled.
Don’t log in to school-controlled accounts or apps on your personal device: that can be monitored, too.
Instead, create another email address on a service the school doesn’t control which is just for personal communications. Tell your friends to contact you on that email outside of school.
Finally, voice your concern and discomfort with such software being installed on devices you rely on. There are plenty of resources to point to, many linked to in this post, when raising concerns about these technologies. As the young journalists at Lawrence High School have shown, writing about it can be an effective avenue to bring up these issues with school administrators. At the very least, it will send a signal to those in charge that students are uncomfortable trading their right to privacy for an elusive promise of security.

Schools Can Do Better to Protect Students' Safety and Privacy
It’s not only the students who are concerned about AI spying in the classroom and beyond. Parents are often unaware of the spyware deployed on school-issued laptops their children bring home. And when using a privately-owned shared computer logged into a school-issued Google Workspace or Microsoft account, a parent’s web search will be available to the monitoring AI as well.
New studies have uncovered some of the mental health harms that surveillance causes. Despite this, and despite the array of First Amendment questions these student surveillance technologies raise, schools have rushed to adopt these unproven and invasive technologies. As Barbour put it:
“While ballooning class sizes and the elimination of school positions are considerable challenges, we know that a positive school climate helps kids feel safe and supported. This allows kids to talk about what they need with caring adults. Adults can then work with others to identify supports. This type of environment helps not only kids who are suffering with mental health problems, it helps everyone.”
We urge schools to focus on creating that environment, rather than subjecting students to ever-increasing scrutiny through school surveillance AI.
If social media is bad for your brain why is lemmy, a social media site, not?