from the dysfunctional-by-design dept
A few months ago, the South Korean government strongly suggested parents load their children's cell phones up with government-approved spyware. It recommended an app called "Smart Sheriff." The app provided plenty of reassurance for parents, if said parents were willing to let the government look over their children's shoulders while they browsed the web, chatted about kid/teen things or otherwise engaged with their devices.
It also claimed to block porn, alert parents to budding sexuality and otherwise ensure no amount of phone use was left unreported. And, if South Korean parents somehow felt the government might be overstepping its bounds a bit, cell phone providers were obliged to hassle parents about underuse of the government-approved spy app.
Now, it appears that everything the mandated spyware grabs, it also leaks in one form or another. Citizen Lab (the same entity that sniffed out the connection between malware provider Hacking Team and blacklisted governments) has audited Smart Sheriff and has found its security measures to be mostly terrible. Not only does the recommended app not protect the transmission of personal data, but it doesn't even live up to the government's own standards for data and information security.
Citizen Lab has uncovered a plethora of flaws that make Smart Sheriff even worse than it was when it was simply government-approved spyware.
We identified twenty-six vulnerabilities and design issues that could lead to the compromise of user accounts, disclosure of information, and corruption of infrastructure. The same issues were often present in multiple parts of the application and infrastructure. For example, we identified a potential attack against user accounts via the Smart Sheriff mobile application, then determined that it could also be made against the Web-based parental administration site. These multiple flaws suggest that the application was not fully examined for security issues before being released. Both audits were done in a limited window of time and without access to the original source code.

Smart Sheriff loads up on personal data during registration, demanding the phone numbers of both children and parents, along with the child's gender and date of birth. The information keeps flowing while the app is in use, gathering data on apps installed and used, as well as browsing history. It then transmits all of this information (some of it in plaintext) back to its storage, which is unencrypted. (This makes a certain sort of sense: the transmission of the data is similarly unencrypted. Why lock it down in storage if you can't be bothered to arrange for its safe travel?)
What comes through as plaintext is the user's browser history. Visited sites are matched against a blocklist. (Strangely, no sites are actually blocked, as this function raised concerns about user privacy. But the app still gathers the data, sends it in plaintext and stores it unencrypted. So those privacy concerns are sabotaged just as soon as they're addressed.) To perform this matching, the software sidesteps HTTPS protections, exposing exactly which sites the user has visited.
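For a sense of why plaintext reporting matters, here is a hypothetical sketch of the pattern the audit describes -- the URL, hostname and parameter names below are invented for illustration, not taken from Smart Sheriff. A report sent over plain HTTP like this one is readable by anyone on the network path:

```python
from urllib.parse import urlencode, urlsplit

# Hypothetical blocklist entries -- not Smart Sheriff's actual list.
BLOCKLIST = {"example-adult-site.com"}

def check_and_report(visited_url: str, child_phone: str) -> str:
    """Match a visited site against the blocklist and build the report URL."""
    host = urlsplit(visited_url).hostname or ""
    flagged = host in BLOCKLIST
    # Plaintext HTTP report: the child's phone number and full browsing
    # history travel in the clear, visible to any on-path observer.
    return "http://report.example/log?" + urlencode(
        {"phone": child_phone, "url": visited_url, "flagged": flagged}
    )

print(check_and_report("http://example-adult-site.com/page", "010-1234-5678"))
```

Nothing in this sketch is wrong as a monitoring design per se; the problem is the transport. The same report sent over validated HTTPS would at least be opaque to a passive eavesdropper.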
Beyond that, the app's authentication scheme can be recovered by reverse engineering or decompiling it. There's layer upon layer of inadequate security, adding up to a total catastrophe should anyone manage to make their way through any of these easily-pried-open doors.
The primary mechanism for authentication across the Smart Sheriff service is a device identifier that is derived using reversible obfuscation rather than industry-standard encryption. If an attacker is able to guess, enumerate, or intercept the device identifier of a phone with Smart Sheriff installed, the attacker can impersonate the application and undertake a range of attacks.

Basically, the app is good enough for government work, as the saying goes. The government wants parents to have more control over the actions of their children; this, in turn, gives the government more control over the parents. The "do something" do-goodery we see in our own legislators is echoed here. In response, a "good enough" solution is mandated, even if it's not actually good enough. No one in charge of these mandates seems to care much about the security flaws and gaping holes -- not even the company that made the app.
For example, using only the device identifier, an attacker can impersonate a user and request the parents’ phone number, children’s names, and their dates of birth. Moreover, an attacker can use the Smart Sheriff API to request a parent’s administration code (itself an insecure four-character string) and use it to take control of the account.
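To make the auditors' point concrete, here is a toy illustration -- this is not Smart Sheriff's actual scheme, and the XOR constant is invented -- of why reversible obfuscation and a four-character code offer almost no protection:

```python
# Toy "reversible obfuscation": a fixed XOR. Anyone who decompiles the
# app can read the constant out of it and invert the transformation.
KEY = 0x5A  # hypothetical hard-coded constant

def obfuscate(device_id: str) -> bytes:
    return bytes(b ^ KEY for b in device_id.encode())

def deobfuscate(blob: bytes) -> str:
    # XOR with the same key is its own inverse -- no secret is needed
    # beyond what ships inside the app itself.
    return bytes(b ^ KEY for b in blob).decode()

assert deobfuscate(obfuscate("355938035643809")) == "355938035643809"

# A four-character code drawn from, say, digits and lowercase letters
# has only 36**4 possible values -- a keyspace an attacker can sweep
# in seconds against an unthrottled API.
print(36 ** 4)  # 1679616
```

The specific alphabet of the admin code isn't stated in the audit excerpt; even the most generous assumption (all printable ASCII, ~95**4 ≈ 81 million values) remains trivially brute-forceable.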
After our disclosure, MOIBA released an update to Smart Sheriff (v1.7.6) that includes communication over HTTPS. However, this version does not properly validate the credentials received and appears to accept a self-signed certificate, which minimizes the update's effectiveness.

As Citizen Lab points out, the software does too much and too little simultaneously, combining the worst aspects of both. It fails to meet government guidelines on information security while going much further with surveillance and control than the government has actually mandated. The worst part is that the government has mandated use of the software, giving citizens no option but to place their children's privacy in the hands of an entity that clearly has no respect for it. On top of that, it makes parental monitoring of children's cell phone use the new normal, which only makes it easier for the government to make further related demands down the road.
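The certificate problem Citizen Lab flags can be illustrated with Python's standard `ssl` module -- this is an analogy for the flaw, not Smart Sheriff's actual code. A client configured like `lax` below will accept any certificate, self-signed ones included, from whoever answers the connection:

```python
import ssl

# A properly configured TLS client validates the server's certificate
# chain and hostname by default.
strict = ssl.create_default_context()
print(strict.verify_mode == ssl.CERT_REQUIRED)  # True
print(strict.check_hostname)                    # True

# The behavior the audit describes is equivalent to this: verification
# is switched off, so a network attacker can present a self-signed
# certificate, impersonate the real server, and read the "HTTPS"
# traffic -- which is why the v1.7.6 fix buys so little.
lax = ssl.create_default_context()
lax.check_hostname = False      # must be disabled before verify_mode
lax.verify_mode = ssl.CERT_NONE
```

Encryption without authentication only protects against a passive eavesdropper; anyone who can sit in the path of the connection gets the plaintext anyway.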