Privacy and IndieWeb advocate. Launched HWC Brussels in 2017.
FR - EN - NL
IT - Python development - Internet Technologies
Media - Politics - Social Sciences
Science - History - Psychology
And lots of other random stuff.
Self-hosting - Privacy - Surveillance Capitalism - Decentralization - Open Knowledge
Or check out the links below.
With elections coming up and quite a few cringe-worthy comments that have come from many of you and from all sides of the political spectrum, we figured it was time to have a chat about encryption.
Must read. Click on the title.
The main plaintiff in the case is Kyle Zak, who bought a $350 pair of wireless Bose headphones last month. He registered the headphones, giving the company his name and email address, as well as the headphone serial number. And he downloaded the Bose Connect app, which the company said would make the headphones more useful by adding functions such as the ability to customize the level of noise cancellation in the headphones.
But it turns out the app was also telling Bose a lot more about Zak than he bargained for.
Click on the title to know more.
[In 2016], Privacy International, together with nine other international human rights NGOs, filed submissions with the European Court of Human Rights. Our case challenges the UK government’s bulk interception of internet traffic transiting fiber optic cables landing in the UK and its access to information similarly intercepted in bulk by the US government, which were revealed by the Snowden disclosures. To accompany our filing, we have produced two infographics to illustrate the complex process of “bulk interception.”
Read the article + complete infographics by clicking on the title.
To read the actual article, click on the title.
This company's activity shows us why we should be worried about all the data we disseminate in our daily lives, sometimes knowingly, sometimes not.
The main take-away about this, in my opinion: these algorithms can find out information about you that you never gave away, just by analysing the massive data trail you leave behind you at all times by using connected devices and browsing the web. These programs know more about you than you know yourself. And people are willing to profit from that, instead of using the data for good, for a more functional, open, collaborative society.
And this is not inevitable. We could, for example, stop using services that don't respect our privacy and push forward solutions that do. Alternatives do exist!
Privacy by design should be the rule!
PDF report cited in the previous post.
The EFF’s (Electronic Frontier Foundation) report pulls together two years’ worth of research and data trying to find out whether educational technology (ed tech) companies are protecting students’ privacy. The answer is, unfortunately, largely not.
As adults, we all kind of have at least a vague peripheral sense that the devices and software we use are probably up to some kind of shenanigans with our personal data. Kids, however, are probably not thinking as closely about what they tell the devices they use, and what data those devices then share — especially if they’re school-owned tools. And yet, a new report finds, some of the learning technologies schoolchildren are required to use every day are among the worst when it comes to explaining and protecting users’ privacy.
The EFF’s new “Spying on Students” report [PDF] pulls together two years’ worth of research and data trying to find out whether educational technology (ed tech) companies are protecting students’ privacy. The answer is, unfortunately, largely not.
To read more, click on the title.
The same political leaders and legislators that once rebuked the NSA over the ethics of its mass surveillance practices now seem to be taking a page out of the NSA’s playbook. This post surveys these three national legal frameworks, highlighting their troubling similarities, with the aim of showing how legislators from these countries are treading a dangerous line of surveillance expansion and overreach, paving the way for more European countries to follow in their footsteps. Indeed, European countries are increasingly chiming in to an ever-growing chorus of supporters for wholesale global surveillance in the name of perceived security. This rhetoric finds especially fertile ground in modern-day Europe, which has been engulfed by populist messaging surrounding the refugee crisis, immigration and heightened security threats. However, rushed and vague mass surveillance laws, while they might increase public approval ratings in the short term, are not a true panacea for the fundamental flaws in European intelligence cooperation that were exposed by the recent attacks.
Read more by clicking on the title
In "Spy Merchants", a new investigation by Al Jazeera, our undercover reporter worked for four months posing as a buyer for clients from countries including Iran and South Sudan - subjects of international sanctions - purchasing the kinds of surveillance systems that ensnared critics.
The investigation exposes illegal trade dealings that could put millions of citizens at risk of privacy violation today, as well as putting dissidents in vulnerable positions.
Online advertising is terrible. Ads clutter your screen, slow down your computer, and drain your batteries. Publishers saddle pages with tracking technology that vacuums up your data so they can, ostensibly, serve you more relevant ads (though this practice really just leads to serious privacy concerns). Sometimes ads even try to install malware on your computer.
But it doesn’t have to be this way.
To read more, click on the title of this post.
What do the election in Mexico, a hospital in California, baby monitors around the world and tinned fruit in Thailand have in common? They were all involved in the great ‘cybersecurity’ failures of 2016. They also highlight the spectrum of cybersecurity issues that potentially impact us all: governments, public services, companies, you and I.
The dizzying scale, technical complexity and downright panic accompanying ‘cyberattacks’ and data breaches often overshadow the fact that human rights are at the heart of cybersecurity, and that attacks mostly impact individuals. The personal information of over 93 million voters in Mexico, including home addresses, was openly published on the internet after being taken from a poorly secured government database. Up to 100,000 people are reportedly kidnapped in Mexico each year. A hospital in California had to cancel surgeries and move patients after attackers took down their network with ransomware. Internet-connected devices such as baby monitors were reportedly co-opted by malware and utilised as part of a DDoS attack, which brought down popular websites including Twitter and The New York Times.
British NGO Privacy International recently published a series of State of Privacy reports, which aim to summarise privacy and surveillance laws and practices in a variety of countries. [...]
The result is that, in some parts of the world, the cybersecurity debate can undermine human rights and the international obligation on governments to protect them. Too quickly the debate turns to increasing state surveillance capacity, closing down transparency, criminalising legitimate behaviour and speech and undermining encryption rather than promoting it. For example, using encrypted messaging services is illegal in Pakistan, and using them in Morocco can land you in prison and cost you a $10,000 fine. What constitutes certain crimes is unclear in the cybercrime laws of Jordan, Kenya and Tunisia. The Computer Misuse Act in Uganda has been used to criminally charge a journalist. These examples demonstrate the range of issues that appear in cybercrime laws presented as cybersecurity.
Approximately half of adult Americans’ photographs are stored in facial recognition databases that can be accessed by the FBI, without their knowledge or consent, in the hunt for suspected criminals. About 80% of photos in the FBI’s network are non-criminal entries, including pictures from driver’s licenses and passports. The algorithms used to identify matches are inaccurate about 15% of the time, and are more likely to misidentify black people than white people.
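Those figures lend themselves to a quick back-of-the-envelope check. A minimal sketch follows; only the 15% error rate and the 80% non-criminal share come from the article, while the yearly search volume is a hypothetical placeholder, and the sketch assumes a wrong match is equally likely to land on any entry:

```python
# Back-of-the-envelope arithmetic on the figures quoted above.
false_match_rate = 0.15   # "inaccurate about 15% of the time" (from the article)
noncriminal_share = 0.80  # ~80% of photos are non-criminal entries (from the article)
searches = 4_000          # hypothetical annual search volume, for illustration only

# Expected number of searches that misidentify someone.
wrong_matches = searches * false_match_rate

# Assuming wrong matches fall uniformly across entries, most of them
# point at people with no criminal record at all.
wrong_on_noncriminal = wrong_matches * noncriminal_share

print(f"Expected misidentifications: {wrong_matches:.0f}")
print(f"...of which point at non-criminal entries: {wrong_on_noncriminal:.0f}")
```

Whatever the real search volume is, the point of the arithmetic stands: at database scale, even a modest per-search error rate produces a steady stream of innocent people flagged as suspects.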
“[Facial recognition] can also be used by bad actors to harass or stalk individuals. It can be used in a way that chills free speech and free association by targeting people attending certain political meetings, protests, churches, or other types of places in the public.”
Inaccurate matching disproportionately affects people of color, according to studies. Not only are algorithms less accurate at identifying black faces, but African Americans are disproportionately subjected to police facial recognition.
“If you are black, you are more likely to be subjected to this technology, and the technology is more likely to be wrong,” said Elijah Cummings, a congressman for Maryland, who called for the FBI to test its technology for racial bias – something the FBI claims is unnecessary because the system is “race-blind”.
Even the companies that develop facial recognition technology believe it needs to be more tightly controlled. Brian Brackeen, CEO of Kairos, told the Guardian he was “not comfortable” with the lack of regulation. Kairos helps movie studios and ad agencies study the emotional response to their content and provides facial recognition in theme parks to allow people to find and buy photos of themselves.
Brackeen said that the algorithms used in the commercial space are “five years ahead” of what the FBI is doing, and are much more accurate.
“There has got to be privacy protections for the individual,” he said.