Security Researcher Discovers Flaws In Yelp-For-MAGAs App, Developer Threatens To Report Him To The Deep State

from the shooting-the-messenger dept

Even a cursory look at past stories we’ve done about how companies treat security researchers who point out the trash-state of their products would reveal that entirely too many people and companies seem to think shooting the messenger is the best response. I have never understood the impulse to take people who are essentially stress-testing your software for free, ultimately pointing out how the product could be safer than it is, and then threaten those people with legal action or law enforcement. But, then, much of the world makes little sense to me.

Such as why a Yelp for MAGA people should ever be a thing. But it absolutely is a thing, with conservative news site 63red.com releasing a mobile app that is essentially a Yelp clone, but with the twist that its chief purpose is to let other Trump supporters know how likely they are to be derided when visiting a restaurant. This is an understandable impulse, I suppose, given the nature of politics in 2019 America, though the need for an app seems like overkill. Regardless, the app was released and a security researcher found roughly all the security holes in it.

On Tuesday, a French infosec bod, going under the Mr Robot-themed pseudonym Elliot Alderson and handle fs0c131y, notified 63red that it had left hard-coded credentials in its Yelp-for-Trumpistas smartphone application, and that whoever built its backend APIs had forgotten to implement any meaningful form of authentication.

Alderson poked around inside the Android build of the app, and spotted a few insecure practices, including the username and password of the programmer, and a lack of authentication on its backend APIs, allowing anyone to pull up user account information, and potentially slurp the app’s entire user database. It’s also possible to insert data into the backend log files, we’re told.

In other words, what 63red meant to build was an app to let Trump supporters know where they can go to feel safe. What it actually built was an app that tried to do that, but instead exposed user information to anyone who wanted to mine for it or, say, build a list of Trump supporters for reasons that could be entirely nefarious. Not great.

Nor was the reaction from 63red, which decided that Alderson pointing out its shoddy work warranted a threat to refer him to the FBI, AKA the Deep State.

“We see this person’s illegal and failed attempts to access our database servers as a politically-motivated attacked, and will be reporting it to the FBI later today,” 63red’s statement reads. “We hope that, just as in the case of many other politically-motivated internet attacks, this perpetrator will be brought to justice, and we will pursue this matter, and all other attacks, failed or otherwise, to the utmost extent of the law.”

63red described the privacy issues as a “minor problem,” and noted that no user passwords were exposed nor any user data changed.

For his part, Alderson took the threat of an FBI referral in full stride. Far from quaking in his boots, he simply pointed out that 63red’s security was so non-existent that he didn’t need to commit any crimes to do what he did.

“The FBI threat is a threat, I didn’t do anything illegal,” he told The Register. “I didn’t break or hack anything. Everything was open.”

And now this whole story is getting far greater coverage due to the threat than it would have, had 63red simply, you know, secured its app based on the freely given information provided by a white hat security researcher.

I’m sure the folks using this app couldn’t feel more safe.

Companies: 63red


Comments on “Security Researcher Discovers Flaws In Yelp-For-MAGAs App, Developer Threatens To Report Him To The Deep State”

Mason Wheeler (profile) says:

Re: Re: Re: Re:

If John Q. Appdeveloper develops an app, and I download it and put it on my computer, suddenly John is no longer the only stakeholder with an interest in whether or not the app is secure. My right to know whether John’s software is introducing security holes to my computer, and make informed decisions based on that knowledge, trumps John’s interest in hiding the truth in order to not be embarrassed by news getting out of his shoddy software development skills.

Anonymous Coward says:

Re: Re: Re:2 Re:

No security holes were reported in the app. They’re all on the server… which still matters, because it could be your information that leaks, but it’s basically illegal for you to check whether a server’s secure. (Of course, if the app contains what looks like a server login+password, you can make an educated guess as to its security.)

My right to know whether John’s software is introducing security holes to my computer

This is almost echoing Stallman. We can hardly say this right exists when most apps ship without source code. This developer appears to have accidentally published it.

Madd the Sane (profile) says:

Re: Re: Re:3 Re:

(Of course, if the app contains what looks like a server login+password, you can make an educated guess as to its security.)

Uh…

Alderson poked around inside the Android build of the app, and spotted a few insecure practices, including the username and password of the programmer[…]

Also:

We can hardly say this right exists when most apps ship without source code. This developer appears to have accidentally published it.

There are other ways of checking the security of an app, such as what APIs it calls.
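
To illustrate the kind of passive inspection being described, here is a minimal sketch, in Python, that walks a tree of decompiled app sources (say, the output of a decompiler such as jadx or apktool) and flags hard-coded URLs and credential-looking strings. The directory name and the patterns are illustrative assumptions, not a record of what Alderson actually ran; the point is that this kind of analysis never touches a server at all.

import re
from pathlib import Path

# Illustrative patterns: hard-coded endpoints and credential-looking assignments.
API_URL = re.compile(r"https?://[\w./-]+")
SUSPECT = re.compile(r"(password|passwd|secret|api[_-]?key)\s*[=:]", re.IGNORECASE)

def scan_sources(root):
    """Print every line in a decompiled source tree that names an endpoint
    or looks like an embedded credential."""
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip it
        for lineno, line in enumerate(text.splitlines(), start=1):
            if API_URL.search(line) or SUSPECT.search(line):
                print(f"{path}:{lineno}: {line.strip()}")

if __name__ == "__main__":
    scan_sources("decompiled_app/")  # hypothetical path to decompiler output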

Anonymous Coward says:

Re: Re: Re: Re:

Which is not what happened here. Nothing was reverse engineered; he looked at what was publicly available for anyone to view and accessed the app through APIs that the developer had left open for anyone to use.

It’s akin to putting all that information on their website and giving each user account its own unique URL that anyone can reach just by typing different characters into the address bar.
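
To make that analogy concrete, here is a minimal sketch, in Python, of what enumerating an unauthenticated, sequentially numbered API looks like. The host, endpoint path, and ID scheme below are hypothetical stand-ins; nothing in the reporting documents 63red’s actual API.

import requests

BASE_URL = "https://api.example-app.test"  # hypothetical host, not 63red's real server

def dump_users(max_id):
    """Walk sequential user IDs against an endpoint that never asks who is calling."""
    users = []
    for user_id in range(1, max_id + 1):
        # No auth header, no token, no session cookie: the request carries
        # nothing identifying the caller, yet account data comes back anyway.
        resp = requests.get(f"{BASE_URL}/users/{user_id}", timeout=10)
        if resp.status_code == 200:
            users.append(resp.json())
    return users

if __name__ == "__main__":
    for record in dump_users(50):
        print(record)

If a loop like that returns account data, the flaw is the server’s missing authentication, not any cleverness on the caller’s part, which is exactly the point being made above.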

Qwertygiy says:

Re: Re: Re:3 Re:

You are indeed correct, although the difference between "source-available software" (where the source code is visible) and "open source software" (where the source code can be distributed and modified) wouldn’t make a difference in this case, as he wasn’t modifying or copying it.

Anonymous Coward says:

Re: Re: Re: Re:

"But reverse engineering an app on one’s own could be violating its use."

It depends. Is it a contractual issue? Is it open source? Are EULAs enforceable?

When one purchases an item, it is theirs to do with as they so please unless they are aware of and agree to the terms which restrict same.

ianal – jic

Anonymous Coward says:

Re: Re: Re:2

And even then, that is a civil matter, not criminal, unless there is unauthorized duplication and copyright laws get involved. As far as I have ever been able to tell, you can’t make it illegal for someone to find ways to view your code; you can only make it illegal to copy that code.

Anonymous Coward says:

Re: Re:

Not really.

"Cracker" refers to anyone who attempts to break into software applications for any purpose, malicious or not.

Security researchers may perform cracking functions or not depending on the type of security research they are doing. Sometimes it’s hardware related, sometimes software, and sometimes they don’t need to crack anything since some idiots were dumb enough to just leave everything open. As in this case.

Anonymous Coward says:

Re: Re: Re:

Your "crackers" definition makes no sense.

My personal definition of "cracker" is someone who attempts to defeat security protections (usually DRM) in a piece of software.

If they’re just modifying the software to perform a novel task, they’re a hacker, not a cracker.

As this software had no security protections in place (not even obscurity), it doesn’t count as cracking. Since he didn’t modify the software to do something novel, it’s not hacking.

All he did was read the code and query the API and do a security analysis of the results. So yeah, this was security research.

Anonymous Coward says:

Re: Re: Re: Re:

Your "crackers" definition makes no sense.

Maybe we’re saying different words but meaning the same thing? From what I read in your comment, it’s no different than what I’m saying, just different words.

My personal definition of "cracker" is someone who attempts to defeat security protections (usually DRM) in a piece of software.

I guess I don’t see the difference between this and what I said:

anyone who attempts to break into software applications for any purpose, malicious or not.

By definition, breaking into software applications would require defeating security protections. I guess I just don’t see the difference here other than different words used to say the same thing.

If they’re just modifying the software to perform a novel task, they’re a hacker, not a cracker.

I never said anything about modifying the software, just breaking into it, i.e. defeating security/DRM. That said, that’s not really what a hacker is either. From Wikipedia:

A computer hacker is any skilled computer expert that uses their technical knowledge to overcome a problem.

More recently it’s used in pop culture to refer to people who break into secure systems for nefarious purposes. But in reality it just means somebody really skilled with computers, typically computer programming.

As this software had no security protections in place (not even obscurity), it doesn’t count as cracking.

Nor did I say it did. That was kind of the point of my entire comment in responding to the OP.

All he did was read the code and query the API and do a security analysis of the results. So yeah, this was security research.

Which is also what I said. Note this part of my comment with emphasis added:

Security researchers MAY perform cracking functions OR NOT depending on the type of security research they are doing.

Mason Wheeler (profile) says:

Re: Re: Re:2 Re:

More recently it’s used in pop culture to refer to people who break into secure systems for nefarious purposes.

That’s not recent at all. It’s been the accepted meaning of the term in common parlance for at least 30 years now. The definition you got from Wikipedia is incredibly dated and no one uses it in that sense anymore outside of Unix culture, which has always had trouble accepting that we’re no longer living in the 1970s.

Anonymous Coward says:

Re: Re: Re:3 Re:

"outside of Unix culture, which has always had trouble accepting that we’re no longer living in the 1970s."

Unix is still in use and remains useful for many purposes. Its share of mainframes may be declining as Linux replaces those installations where it is more cost effective, but in no way is Unix going away. What would those old bankers use to run their COBOL? I suppose they could recompile on Linux but …..

Anonymous Coward says:

Re: Re: Re: Re:

Let’s try some definitions

hacker – one who hacks

cracker – one who criminally hacks

gacker/ghacker – one who governmentally hacks

macker – one who militarily hacks

jihacker – one who jihadi hacks

cacker – one who commercially hacks

quacker – one who quasi hacks

lacker – one who lazily hacks

etc., etc., etc.

Anonymous Coward says:

Re: Re:

It really depends on how the information is used.
Warning the owner of the vulnerability – Security Researcher.
Posting the vulnerability on 4chan – Cracker.

Definitely biased, but the outcome was already clear to me as soon as MAGA was in the title. Of course they would shoot the messenger. They don’t have an issue with attacking journalists.

Rekrul says:

When researchers find security flaws, they should go straight to the press, rather than trying to alert the companies who are likely to threaten them.

Also, what’s stopping anti-Trump people from using the app to find the MAGA crowd’s safe spaces? Do you need to have your MAGA-ness tested before you’re allowed to use it?

Rocky says:

Re: Re:

When researchers find security flaws, they should go straight to the press, rather than trying to alert the companies who are likely to threaten them.

Uhm, no. We have the responsible disclosure model which most researchers adhere to, because the alternative you suggest most likely means harm to the public in some way.

Now, if a company feels like they need to jerk the researchers around we have plenty of examples what happens with those companies – the phrase "Streisand effect" comes to mind among other things.

Anonymous Coward says:

Re: Re: Re:

Responsible disclosure should be the following:

Private disclosure should apply only to those companies/organisations/developers who have stated upfront that any security bugs found will be accepted and dealt with appropriately with no repercussions to the reporting person or group.

If companies/organisations/developers do not have a public statement to that effect, then it is appropriate to report directly to the press or other groups for public dissemination of the security flaws.

If the public is adversely affected then we should expect that the public will be “encouraged” to take their own security more seriously.

Rocky says:

Re: Re: Re: Re:

Private disclosure should apply only to those companies/organisations/developers who have stated upfront that any security bugs found will be accepted and dealt with appropriately with no repercussions to the reporting person or group.

So, if a company hasn’t stated upfront how to handle bugs, it’s okay to publicly report the bugs even though the bug may be due to third-party code from someone that has a stated procedure for handling bugs? It may not be self-evident from the flaw who may be responsible for the code.

If the public is adversely affected then we should expect that the public will "encouraged" to take their own security more seriously.

Sometimes the public can’t do anything with the affected piece of software/hardware to mitigate the problem. Sometimes the bug can only be fixed by a third party in the short term. Just look at Spectre/Meltdown: you as a user of a computer couldn’t do anything to be 100% safe except unplug your computer from the internet, which rendered it pretty useless for most tasks until the maintainers of your OS of choice came up with a mitigation patch. In this instance though, Intel and AMD had a procedure for handling bugs.

It’s all in the name, Responsible disclosure, which means that the entity doing the disclosure should do it in a responsible fashion so as not to unnecessarily expose the public to risks, even though the company responsible for fixing it may be asshats.

Anonymous Coward says:

Re: Re: Re:2 Re:

If they are going to be supplying services of any kind on the internet, then they should be competent enough or have people who are competent enough to do this before they go online. If they are not, then they have not done their due diligence and they need to accept the consequences thereof.

It is a case of “stop killing the messenger” for your own actions. If I create an online system and do not have provision to handle found problems then it is on my head not on the head of someone else who reports the problem.

As was pointed out elsewhere, when you accept blame for your own actions, others have a greater difficulty controlling your response. If you blame others for your actions, they have control over you.

Rekrul says:

Re: Re: Re:

Uhm, no. We have the responsible disclosure model which most researchers adhere to, because the alternative you suggest most likely means harm to the public in some way.

Now, if a company feels like they need to jerk the researchers around we have plenty of examples what happens with those companies – the phrase "Streisand effect" comes to mind among other things.

Far too often these days, the normal sequence of events is:

Person finds security flaw.
Person reports security flaw to company.
Company ignores problem and threatens person with criminal charges.
Story gets media attention.
Company is forced to deal with problem.

Toom1275 (profile) says:

Re: Re:

He told someone who represents a group known for attacking people who tell them things they don’t like to hear, something they wouldn’t like to hear.

Doing it in public first is likely for protection, as now the public can see the full exchange from the start, and show clearly that any retaliation that may come against him (such as the standard alt-right harassment campaign, or a Jhonsmith-level fraudulent report to law enforcement) isn’t based on him having done any wrong.

That Anonymous Coward (profile) says:

For all the handwringing about how sacred our data is supposed to be, why isn’t there a simple law that punishes stupid?

Instead we lap up the breathless ‘ZOMG SUPER HACKERS!!!!!!! SEND THE FEDS!!!!!’ (hysterical given the source this time), believe them when they claim nothing bad can happen, & make another security researcher the target of a smear campaign.

This was so poorly coded, but we have laws allowing the government to black site the person who tried to warn the creator of their fsck ups. Then we ‘believe’ the story that nothing bad happened (they couldn’t figure out how to not hardcode the dev’s login & password into it; we trust their review of whether anything happened?) & move on.

Stop blaming the messengers & start penalties for shit coding.
Much like DMCA notices need it: when it costs them money to screw up, they magically get better at doing it.

nasch (profile) says:

Illegal

"The FBI threat is a threat, I didn’t do anything illegal," he told The Register. "I didn’t break or hack anything. Everything was open."

Well, this guy is an optimist, considering someone was going to be prosecuted (he pled out) for updating a URL:

http://www.coreyvarma.com/2015/01/can-modifying-a-websites-url-land-you-in-prison-under-the-cfaa/

I have never understood the impulse to take people who are essentially stress-testing your software for free

This was security analysis, not stress testing.

https://en.wikipedia.org/wiki/Stress_testing
