Security Researcher Discovers Flaws In Yelp-For-MAGAs App, Developer Threatens To Report Him To The Deep State
from the shooting-the-messenger dept
Even a cursory look at past stories we’ve done about how companies treat security researchers who point out the trash-state of their products would reveal that entirely too many people and companies seem to think shooting the messenger is the best response. I have never understood the impulse to take people who are essentially stress-testing your software for free, ultimately pointing out how the product could be safer than it is, and then threaten those people with legal action or law enforcement. But, then, much of the world makes little sense to me.
Such as why a Yelp-for-MAGA-people app should ever be a thing. But it absolutely is a thing, with conservative news site 63red.com releasing a mobile app that is essentially a Yelp-clone, but with the twist that its chief purpose is to let other Trump supporters know how likely they are to be derided when visiting a restaurant. This is an understandable impulse, I suppose, given the nature of politics in 2019 America, though the need for an app seems like overkill. Regardless, the app was released and a security researcher found roughly all the security holes in it.
On Tuesday, a French infosec bod, going under the Mr Robot-themed pseudonym Elliot Alderson and handle fs0c131y, notified 63red that it had left hard-coded credentials in its Yelp-for-Trumpistas smartphone application, and that whoever built its backend APIs had forgotten to implement any meaningful form of authentication.
Alderson poked around inside the Android build of the app, and spotted a few insecure practices, including the username and password of the programmer, and a lack of authentication on its backend APIs, allowing anyone to pull up user account information, and potentially slurp the app’s entire user database. It’s also possible to insert data into the backend log files, we’re told.
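To see why credentials baked into an app are never really hidden, consider that compiled app packages still carry their string literals in the clear. A minimal sketch, with invented byte content standing in for a shipped binary (nothing here is 63red’s actual code), shows how a naive scan, much like the Unix `strings` utility, pulls credential-shaped strings right out:

```python
import re

# Invented stand-in for the raw bytes of a shipped app package; real
# APKs similarly embed string literals from the source in the clear.
fake_binary = b"\x00\x07config\x00user=devaccount\x00pass=hunter2\x00\xff"

# Naive scan for printable runs of 4+ characters, like `strings` does.
runs = [r.decode() for r in re.findall(rb"[ -~]{4,}", fake_binary)]

# Credential-shaped strings fall right out, no "hacking" required.
secrets = [s for s in runs if "=" in s]
print(secrets)  # → ['user=devaccount', 'pass=hunter2']
```

This is why hard-coding a developer login into a client app is treated as a disclosure of that login, not a secret.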
In other words, what 63red meant to build was an app to let Trump supporters know where they can go to feel safe. What it actually built was an app that tried to do that, but instead exposed user information to anyone who wanted to mine for it or, say, build a list of Trump supporters for reasons that could be entirely nefarious. Not great.
Nor was the reaction from 63red, which decided that Alderson pointing out its shoddy work warranted a threat to refer him to the FBI, AKA the Deep State.
“We see this person’s illegal and failed attempts to access our database servers as a politically-motivated attacked, and will be reporting it to the FBI later today,” 63red’s statement reads. “We hope that, just as in the case of many other politically-motivated internet attacks, this perpetrator will be brought to justice, and we will pursue this matter, and all other attacks, failed or otherwise, to the utmost extent of the law.”
63red described the privacy issues as a “minor problem,” and noted that no user passwords were exposed nor any user data changed.
For his part, Alderson took the threat of an FBI referral in full stride. Far from quaking in his boots, he simply pointed out that 63red’s security was so non-existent that he didn’t need to commit any crimes to do what he did.
“The FBI threat is a threat, I didn’t do anything illegal,” he told The Register. “I didn’t break or hack anything. Everything was open.”
And now this whole story is getting far greater coverage due to the threat than it would have had 63red simply, you know, secured their app based on the freely given information provided by a white hat security researcher.
I’m sure the folks using this app couldn’t feel more safe.
Filed Under: donald trump, maga, reviews, security, threats
Companies: 63red
Comments on “Security Researcher Discovers Flaws In Yelp-For-MAGAs App, Developer Threatens To Report Him To The Deep State”
I heard the app had been pulled from both Apple and Google’s stores due to its security issues, though I can’t find a good source at the moment.
Re: Re:
https://thehill.com/policy/technology/434025-app-stores-pull-yelp-for-conservatives-over-security-flaws
Re: Re:
https://arstechnica.com/information-technology/2019/03/yelp-but-for-maga-turns-red-over-security-disclosure-threatens-researcher/
Re: Re:
You heard wrong. It was actually due to a left-wing conspiracy to suppress conservative speech.
Re: Re: Re:
Cool story bro..
Re: Re: Re:
I suppose the Democrat Party DOES count as a left-wing conspiracy, in the loosest sense of the term…
Isn’t security researcher just a fancy name for cracker?!!
Re: Re:
Wouldn’t that distinction be based upon intent? I know the law doesn’t make that distinction, but that is what is wrong with the law, not what is wrong with a legitimate researcher looking for the kinds of problems found in this app.
Re: Re: Re:
Wouldn’t an invitation in the form of a challenge for reward be the only permission an app developer might want to grant people tampering with his work?
Re: Re: Re: Re:
When one pays for something it is theirs to tamper with.
Re: Re: Re: Re:
If John Q. Appdeveloper develops an app, and I download it and put it on my computer, suddenly John is no longer the only stakeholder with an interest in whether or not the app is secure. My right to know whether John’s software is introducing security holes to my computer, and make informed decisions based on that knowledge, trumps John’s interest in hiding the truth in order to not be embarrassed by news getting out of his shoddy software development skills.
Re: Re: Re:2 Re:
No security holes were reported in the app. They’re all on the server… which still matters, because it could be your information that leaks, but it’s basically illegal for you to check whether a server’s secure. (Of course, if the app contains what looks like a server login+password, you can make an educated guess as to its security.)
This is almost echoing Stallman. We can hardly say this right exists when most apps ship without source code. This developer appears to have accidentally published it.
Re: Re: Re:3 Re:
Uh…
Also:
There are other ways of checking the security of an app, such as what APIs it calls.
Re: Re: Re: Re:
What tampering? It’s a JavaScript app. Its source code is included. "Alderson" looked at the source code and saw it had sensitive information in cleartext.
In what way is that tampering? If I hit Ctrl-U right now, am I "tampering" with Techdirt?
Re: Re: Re:2 Re:
Not all that long ago someone, I forget who, suggested that altering the URL in the browser "address bar" constituted hacking.
I imagine there are many more.
Re: Re: Re:3 Re:
It was a politician if I remember right.
Re: Re: Re:4
Unless my memory is grossly mistaken, it was the Nova Scotia government trying to accuse a teenager of "hacking" documents that were improperly published to their online database, by changing the ID number in the URL.
Re: Depends:
https://en.wikipedia.org/wiki/Cracker_%28term%29
Is this close enough? ; ]
Re: Re: Depends:
They don’t want to give credence to us modern day ]]Crackers, I guess!
Re: Re: Depends:
Those are the ones using the app, not the ones breaking it.
Re: Re:
"Isn’t security researcher just a fancy name for cracker?!!"
No – not necessarily. One does not need to be proficient with a thing in order to ascertain that thing is grossly misconfigured.
Re: Re: Re:
Ok. That is reasonable to understand for the purpose of one’s own security one would want to take a look. But reverse engineering an app on one’s own could be violating its terms of use. Intent would have to be determined in a court of law, would it not?
Re: Re: Re: Re:
Which is not what happened here. Nothing was reverse engineered; he looked at what was publicly available for anyone to view and accessed the app with APIs that the developer had left open for anyone to use.
It’s akin to putting all that information on their website and making each user account its own unique URL that you can access by typing in different letters in the URL.
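The pattern described above is usually called an insecure direct object reference (IDOR): sequential record IDs served with no check on who is asking. A minimal sketch, with an in-memory dict standing in for an imaginary unauthenticated backend (all names and data are invented):

```python
# Invented in-memory "backend": sequential user IDs, no auth check anywhere.
backend = {1: {"user": "alice"}, 2: {"user": "bob"}, 3: {"user": "carol"}}

def fetch_account(user_id):
    # An unauthenticated endpoint returns whatever record is asked for,
    # never verifying that the requester owns it.
    return backend.get(user_id)

# Walking the ID space ("typing in different letters in the URL")
# dumps the entire user database.
dumped = [fetch_account(i) for i in range(1, 4)]
print(dumped)
```

No exploit code is needed; the enumeration is just repeated use of the API exactly as the server offers it.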
Re: Re: Re: Re:
Violations of use are not illegal. All they can do is ban you from using it.
Re: Re: Re: Re:
Uh, he used APIs.
Re: Re: Re: Re:
Oh and there was no ‘reverse engineering’. They got open source code.
Re: Re: Re:2 Re:
Was it open source, or was it just bundled with source code? There’s a difference; "open source" has a specific definition that goes beyond just source code availability.
Re: Re: Re:3 Re:
You are indeed correct, although the difference between "source-available software" (where the source code is visible) and "open source software" (where the source code can be distributed and modified) wouldn’t make a difference in this case, as he wasn’t modifying or copying it.
Re: Re: Re:4 Re:
Right, I just think the distinction is an important one for people to know and understand, even if it’s not directly pertinent to this story.
Re: Re: Re: Re:
"But reverse engineering an app on one’s own could be violating its use."
It depends. Is it a contractual issue? Is it open source? Are EULAs enforceable?
When one purchases an item, it is theirs to do with as they so please unless they are aware of and agree to the terms which restrict same.
ianal – jic
Re: Re: Re:2
And even then, that is a civil matter, not criminal, unless there is unauthorized duplication and copyright laws get involved. As far as I have ever been able to tell, you can’t make it illegal for someone to find ways to view your code; you can only make it illegal to copy that code.
Re: Re: Re:3 Re:
AFAIK, non-commercial copyright infringement is a civil matter.
Also, simply making a copy is not "engineering" anything.
Re: Re:
Not really.
“Cracker” refers to anyone who attempts to break into software applications for any purpose, malicious or not.
Security researchers may perform cracking functions or not depending on the type of security research they are doing. Sometimes it’s hardware related, sometimes software, and sometimes they don’t need to crack anything since some idiots were dumb enough to just leave everything open. As in this case.
Re: Re: Re:
Your "crackers" definition makes no sense.
My personal definition of "cracker" is someone who attempts to defeat security protections (usually DRM) in a piece of software.
If they’re just modifying the software to perform a novel task, they’re a hacker, not a cracker.
As this software had no security protections in place (not even obscurity), it doesn’t count as cracking. Since he didn’t modify the software to do something novel, it’s not hacking.
All he did was read the code and query the API and do a security analysis of the results. So yeah, this was security research.
Re: Re: Re: Re:
Maybe we’re saying different words but meaning the same thing? From what I read in your comment, it’s no different than what I’m saying, just different words.
I guess I don’t see the difference between this and what I said:
By definition, breaking into software applications would require defeating security protections. I guess I just don’t see the difference here other than different words used to say the same thing.
I never said anything about modifying the software, just breaking into it, i.e. defeating security/DRM. That said, that’s not really what a hacker is either. From Wikipedia:
More recently it’s used in pop culture to refer to people who break into secure systems for nefarious purposes. But in reality it just means somebody really skilled with computers, typically computer programming.
Nor did I say it did. That was kind of the point of my entire comment in responding to the OP.
Which is also what I said. Note this part of my comment with emphasis added:
Re: Re: Re:2 Re:
That’s not recent at all. It’s been the accepted meaning of the term in common parlance for at least 30 years now. The definition you got from Wikipedia is incredibly dated and no one uses it in that sense anymore outside of Unix culture, which has always had trouble accepting that we’re no longer living in the 1970s.
Re: Re: Re:3 Re:
Damnit I’m getting old.
Re: Re: Re:3 Re:
"outside of Unix culture, which has always had trouble accepting that we’re no longer living in the 1970s."
Unix is still in use and remains useful for many purposes. Its share of mainframes may be declining due to Linux replacing those installations where it is more cost-effective … but in no way is Unix going away. What would those old bankers use to run their COBOL? I suppose they could recompile on Linux but …..
Re: Re: Re:4 Re:
OS/360 or one of its successors. Recompilation is difficult when most of the holes are scrambled by a mouse nest in the card pack, never mind updating the application.
Re: Re: Re:5 Re:
It’s named z/OS, nowadays.
Re: Re: Re:4 Re:
I didn’t say Unix the specific product, I said Unix culture, which definitely includes Linux and other derivatives.
Re: Re: Re:3 Re:
Not at all recent, and still incredibly stupid.
Re: Re: Re: Re:
Let’s try some definitions
hacker – one who hacks
cracker – one who criminally hacks
gacker/ghacker – one who governmentally hacks
macker – one who militarily hacks
jihacker – one who jihadi hacks
cacker – one who commercially hacks
quacker – one who quasi hacks
lacker – one who lazily hacks
etc., etc., etc.
Re: Re: Re: Re:
"My personal definition"
If you have to make up the meaning of a word in order to believe you’re right about something, you may not actually be in the right.
Re: Re:
No.
Re: Re:
"Isn’t security researcher just a fancy name for cracker?!!"
I thought Cracker was a term for a Trump voter.
Re: Re: Re:
Its just a politic thing to want to steal words and apply their own twisted meanings to them!
Re: Re: Re: It’s cromulent to me damnit
Like using the words politic and steal when political and cultural appropriation would better serve your purpose?
Re: Re: Re:2 It’s cromulent to me damnit
I have a purpose? Dam nit is right!
Re: Re:
No. Next question.
Re: Re:
It really depends on how the information is used.
Warning the owner of the vulnerability – Security Researcher.
Posting the vulnerability on 4chan – Cracker.
Definitely biased but the outcome was already clear to me when MAGA is in the title. Of course they would shoot the messenger. They don’t have an issue with attacking journalists.
who's a snowflake now?
It seems like people would be more likely to use the app to avoid places where racists hang out than to feel safe wearing a symbol of racism.
Re: who's a snowflake now?
No need for the app – just have a look at how many pickup trucks are in the parking lot and how many have those silly fake balls hanging off the rear end.
Re: Re: who's a snowflake now?
Oh yeah the trucks with balls parked in a parking lot … like 0 zero? More like hitches for bitches with lots of horses hp.
It's a TRAPP!!!!
Re: It's a TRAPP!!!!
That was actually my first thought upon reading about the complete lack of security. If it was on purpose, it was to get right-leaning people to sign up and self-identify.
When researchers find security flaws, they should go straight to the press, rather than trying to alert the companies who are likely to threaten them.
Also, what’s stopping anti-Trump people from using the app to find the MAGA crowd’s safe spaces? Do you need to have your MAGA-ness tested before you’re allowed to use it?
Re: Re:
Uhm, no. We have the responsible disclosure model which most researchers adhere to, because the alternative you suggest most likely means harm to the public in some way.
Now, if a company feels like they need to jerk the researchers around, we have plenty of examples of what happens with those companies – the phrase "Streisand effect" comes to mind among other things.
Re: Re: Re:
Responsible disclosure should be the following:
Private disclosure should apply only to those companies/organisations/developers who have stated upfront that any security bugs found will be accepted and dealt with appropriately with no repercussions to the reporting person or group.
If companies/organisations/developers do not have a public statement to that effect, then it is appropriate to report directly to the press or other groups for public dissemination of the security flaws.
If the public is adversely affected then we should expect that the public will be “encouraged” to take their own security more seriously.
Re: Re: Re: Re:
So, if a company hasn’t stated upfront how to handle bugs it’s okay to publicly report the bugs even though the bug may be due to third-party code from someone that has a stated procedure for handling bugs? It may not be self-evident from the flaw who may be responsible for the code.
Sometimes the public can’t do anything with the affected piece of software/hardware to mitigate the problem. Sometimes the bug can only be fixed by a third party in the short term – just look at Spectre/Meltdown: you as a user of a computer couldn’t do anything to be 100% safe except unplugging your computer from the internet, which rendered it pretty useless for most tasks until the maintainers of your OS of choice came out with a mitigation patch. In this instance though, Intel and AMD had a procedure for handling bugs.
It’s all in the name, Responsible disclosure, which means that the entity doing the disclosure should do it in a responsible fashion so as not to unnecessarily expose the public to risks even though the company responsible for fixing it may be asshats.
Re: Re: Re:2 Re:
If they are going to be supplying services of any kind on the internet, then they should be competent enough or have people who are competent enough to do this before they go online. If they are not, then they have not done their due diligence and they need to accept the consequences thereof.
It is a case of “stop killing the messenger” for your own actions. If I create an online system and do not have provision to handle found problems then it is on my head not on the head of someone else who reports the problem.
As was pointed out elsewhere, when you accept blame for your own actions, others have a greater difficulty controlling your response. If you blame others for your actions, they have control over you.
Re: Re: Re: Re:
Ask that guy who found all those dental records on an open server how well that fscking works out…
Re: Re: Re:
Far too often these days, the normal sequence of events is:
Person finds security flaw.
Person reports security flaw to company.
Company ignores problem and threatens person with criminal charges.
Story gets media attention.
Company is forced to deal with problem.
Re: Re: Re: Re:
You’re being quite charitable there, since you missed out the part where the flaw is usually exploited before they bother to patch it anyway.
Re: Re:
He told someone who represents a group known for attacking people who tell them things they don’t like to hear something they wouldn’t like to hear.
Doing it in public first is likely for protection, as now the public can see the full exchange from the start, and show clearly that any retaliation that may come against him (such as the standard alt-right harassment campaign, or a Jhonsmith-level fraudulent report to law enforcement) isn’t based on him having done any wrong.
For all the handwringing about how sacred our data is supposed to be, why isn’t there a simple law that punishes stupid?
Instead we lap up the breathless ‘ZOMG SUPER HACKERS!!!!!!! SEND THE FEDS!!!!!’ (hysterical given the source this time), believe them when they claim nothing bad can happen, & make another security researcher the target of a smear campaign.
This was so poorly coded, but we have laws allowing the government to black site the person who tried to warn the creator of their fsck ups. Then we ‘believe’ the story that nothing bad happened (they couldn’t figure out how to not hardcode the dev’s login & password into it, yet we trust their review of whether anything happened?) & move on.
Stop blaming the messengers & start penalties for shit coding.
Much like DMCA notices need it, when it costs them money to screw up they magically get better at doing it.
Illegal
Well this guy is an optimist considering someone was going to be prosecuted (he pled out) for updating a URL:
http://www.coreyvarma.com/2015/01/can-modifying-a-websites-url-land-you-in-prison-under-the-cfaa/
This was security analysis, not stress testing.
https://en.wikipedia.org/wiki/Stress_testing