France holds Google accountable
Google has become the first major US company to be charged with violating the GDPR, for failing to disclose to users how their personal information is collected and used. Google also reportedly failed to obtain proper consent from users before, for example, showing them personalised advertisements. The CNIL, France’s data-privacy regulator, identified the violations and fined Google nearly $57 million (around €50 million), a penalty that has many US tech giants rethinking their data-collection practices, according to The Washington Post.
Under the GDPR, companies like Google are required to give users a clear picture of what data is being collected about them, along with straightforward ways to grant or withhold consent to that collection. Google fell short in both areas, according to The Washington Post. When users create a Google account, personalised ads are enabled by default, and although users can change this setting during sign-up, French regulators were not satisfied. Additionally, signing up for a Google account requires agreeing to the terms and conditions in full or not using the service at all, which regulators counted as a further violation.
French regulators had been investigating Google since May 2018, prompted by complaints from privacy activists. Apple and Facebook are other well-known US companies the EU has penalised in recent years, prompting criticism of the Federal Trade Commission, according to The Washington Post.
Twitter’s Easy on the Eyes
Twitter is committed to saving one pair of eyes at a time with its release of a battery-saving dark mode that is also expected to reduce eyestrain. Twitter CEO Jack Dorsey replied to a tweet complaining about the app’s current dark mode, confirming that an update is in the works. The current dark mode displays the user’s timeline in a dark blue colour rather than true black.
This trend is gaining steam because many new phones, including the latest iPhones, have OLED screens that can switch dark pixels off completely, which is good for both your eyes and your device’s battery. According to The Verge, dark mode keeps people who use their phones or computers in dark surroundings from squinting too much. The display style is now common across applications and has attracted a large following. Twitter originally released its dark mode in 2016, and the idea has since gained traction, even making its way to print: a Wall Street Journal article called for all apps and devices to offer dark mode, for the additional reasons of lessening device addiction and improving sleep.
Artificial Intelligence and Datasets might help those on the stand
When asked where artificial intelligence will appear in future years, the courtroom is a place many people would leave off their lists. Daniel L. Chen, a researcher at the Toulouse School of Economics and the Toulouse Faculty of Law, is exploring the use of artificial intelligence to help correct biased human decisions. Chen is familiar with judges and courts from his degrees in law and economics, as well as from the data he has collected over the years on the workings of the justice system. He has suggested using large datasets and artificial intelligence to predict judges’ decisions and to nudge them towards fairer sentences, according to The Verge.
Some biases Chen identified include the gambler’s fallacy, where a judge might overcorrect if they feel their recent rulings have leaned too far in one direction. In this situation, the ruling in the previous case wrongly affects the ruling in the current case, as Chen described to The Verge. He also noted that judges tend to disagree more and vote along partisan lines during presidential election cycles.
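One way a researcher might look for this kind of overcorrection in the data is to measure how often a judge’s ruling reverses the one immediately before it. The sketch below is purely illustrative and is not Chen’s actual method; the ruling sequence is invented.

```python
def flip_rate(rulings):
    """Fraction of decisions that reverse the immediately preceding one.

    A rate well above 0.5 in a long record would be consistent with
    gambler's-fallacy-style overcorrection, where the previous ruling
    influences the current one.
    """
    flips = sum(1 for prev, cur in zip(rulings, rulings[1:]) if prev != cur)
    return flips / (len(rulings) - 1)

# Invented toy sequence: True = granted, False = denied.
sequence = [True, False, True, False, True, True, False, True]
print(round(flip_rate(sequence), 2))  # 6 reversals in 7 transitions -> 0.86
```

In a real study one would also need a baseline, since a 50% flip rate is what independent coin-flip decisions would produce; the point here is only to show the shape of the measurement.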
Chen’s team has used what he calls “machine intelligence” to predict judges’ decisions in asylum cases, and his predictions of how the judges would rule were mostly accurate, based on very basic information: the identity of the judge and the nationality of the person seeking asylum. Chen hopes to detect these ‘snap judgements’ with artificial intelligence and then inform the judges, which might prompt more careful deliberation. His goal is to combine a large dataset of a judge’s previous decisions with all potential factors unrelated to the case at hand, and to use artificial intelligence to analyse every relevant and non-relevant factor that could have affected the judge’s decision. Those factors can include the record of their favourite sports teams, the weather, even the defendants’ birthdays, according to The Verge.
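To give a feel for how little information such a predictor needs, here is a minimal sketch that predicts a ruling purely from historical grant rates per judge-and-nationality pair. This is an invented illustration, not Chen’s model; the judge names, nationalities and outcomes are all made up.

```python
from collections import defaultdict

# Hypothetical past cases: (judge, applicant nationality, asylum granted?)
history = [
    ("judge_a", "X", True), ("judge_a", "X", False),
    ("judge_a", "Y", False), ("judge_a", "Y", False),
    ("judge_b", "X", True), ("judge_b", "Y", True),
]

def train(records):
    """Compute the historical grant rate for each (judge, nationality) pair."""
    counts = defaultdict(lambda: [0, 0])  # pair -> [granted, total]
    for judge, nationality, granted in records:
        counts[(judge, nationality)][0] += int(granted)
        counts[(judge, nationality)][1] += 1
    return {pair: granted / total for pair, (granted, total) in counts.items()}

def predict(rates, judge, nationality, default=0.5):
    """Predict a grant (True) when the historical rate exceeds 50%."""
    return rates.get((judge, nationality), default) > 0.5

rates = train(history)
print(predict(rates, "judge_b", "Y"))  # judge_b always granted here -> True
print(predict(rates, "judge_a", "Y"))  # judge_a never granted here -> False
```

That a two-feature lookup table like this can be “mostly accurate” is precisely what makes the finding uncomfortable: it suggests the outcome is driven more by who is deciding than by the merits of the individual case.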
“More and more, people are using the tools of natural language processing and AI and big data with court opinions. That’s a promising area of research, and I’m interested in seeing how it translates into policy,” Chen said to The Verge.
Read more FDM views on Technology:
- Three Ways to Keep Up To Date with Technology