The Tech Policy Greenhouse is an online symposium where experts tackle the most difficult policy challenges facing innovation and technology today. These are problems that don't have easy solutions, where every decision involves tradeoffs and unintended consequences, so we've gathered a wide variety of voices to help dissect existing policy proposals and better inform new ones.

It's Long Past Time To Encrypt The Entire DNS

from the privacy-and-encryption dept

With work, school and healthcare moving online, data privacy and security have never been more important. Who can see what we’re doing online? What are corporations and government agencies doing with this information? How can our online activity be better protected? One answer is: encryption. Strong encryption has always been an important part of protecting and promoting our digital rights.

The majority of your web traffic is already encrypted. That’s the padlock in your URL bar; the S (for “secure”) in HTTPS. This baseline of encryption is the result of decades of dedicated work by privacy-conscious technologists aiming to safeguard users’ personal information and address pressing demands for data and transaction safety. Web traffic encryption allows us to feel confident when we buy or bank online, access our medical records, and communicate on social media.

Unfortunately, there’s a geyser of internet traffic that remains unencrypted, leaving our personal information still vulnerable to exploitation. Every day through a seamless process, our computers and phones make thousands of lookups through the Domain Name System (DNS). DNS is the way computers and phones find the IP address for any internet resource you want to access, whether it’s a website and all the content it contains, or an online messaging service, or the background connections made through mobile apps.

Thanks to the DNS, you can type in a memorable URL (cnn.com) instead of having to remember a long string of numbers (like 151.101.193.67, one of CNN’s IP addresses) to visit a website.
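To make that lookup concrete, here is a minimal sketch (illustrative Python, not any particular resolver's code) of the plaintext query a stub resolver sends over UDP port 53. Note that the hostname travels in the clear:

```python
import struct

def build_dns_query(hostname: str, txid: int = 0x1234) -> bytes:
    """Build a minimal RFC 1035 DNS query for an A record."""
    # Header: ID, flags (recursion desired), 1 question, 0 other records
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # Question name: length-prefixed labels, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    qtype_qclass = struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + qname + qtype_qclass

query = build_dns_query("cnn.com")
# Anyone on the path can read the name right out of the packet bytes:
assert b"\x03cnn\x03com\x00" in query
```

This 25-byte message is what an on-path observer sees today for every unencrypted lookup your device makes.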

But while most of your web traffic is encrypted, your DNS lookups probably aren’t. The architects of the DNS system designed it in the 1980s, long before it became apparent that some would exploit this design for their own gain—or that repressive regimes would use it to censor and stifle dissidents.

The privacy concerns are easy to understand. Many of the domains you visit might be descriptive enough to give away what you’re doing on a particular web site or service—whether they are partisan political websites (“this person is a Republican!”), mortgage lenders (“this person wants to refinance!”), health websites (“this person seems to have a medical condition we can monetize!”), or certain websites you'd rather keep private. In other words, someone in the network sitting between you and a certain website might not know what you’re doing on a website—but they know you’re doing it on that website!

This enables the daily commercial exploitation of consumer data. As we speak, corporations can exploit the DNS to track and monetize your online activity. Thanks to the loosening of U.S. federal broadband privacy laws in 2017, Internet service providers (ISPs) like Verizon, Comcast Xfinity and Charter Spectrum are allowed to bundle and sell this lookup data to data brokers so they can build better personal and behavioral profiles—which are then rented out to companies that want to target you with personalized ads and appeals. For vulnerable communities, however, this infringement on privacy can lead to deeper erosion of other rights when, for example, analysis of someone’s online history profiles them as being “under-banked”, “financially vulnerable” or as targets for predatory loan offers. It’s a bit like a librarian selling your reading history to a psychologist.

Moreover, while DNS is an essential point of control for network administrators and service providers, that control can be problematic. On one hand, the DNS enables the implementation of important mechanisms, from malware identification, to enforcement of corporate and local policies, to monitoring and testing of different network tools. On the other hand, if you as a user are trying to access some information during a period of social unrest, a government wanting to prevent you from accessing that information could force ISPs to block that content or tamper with the DNS responses your computer gets. Because DNS lookups also expose your IP address (and, on your local network, your device’s MAC address, its hardware identifier), they could also gain insight into your device’s location.

On top of all that, the vulnerability of the DNS system is also a security issue: A 2016 Infoblox Security Assessment Report found that 66% of DNS traffic was subject to suspicious exploits and security threats, from protocol anomalies (48%) to distributed denial of service (DDoS) attacks (14%). The study also showed that the biggest concerns for ISPs were downtime and loss of sensitive data, which translates into users not being able to access the online resources they need, or sensitive data of users’ lookups being leaked or stolen.

Thankfully, new technical protocols for encrypted DNS that directly address these issues are on the rise. Encrypted DNS protects access to resources and the integrity of DNS queries by preventing DNS packet inspection and attempts to tamper with the DNS responses your computer gets. It shields against leaks of user data like IP/MAC addresses and domains, keeping users from being tracked and monitored, and makes it difficult for censoring bodies to intercept and block the content you can access.
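As an illustration of how one such protocol, DNS-over-HTTPS (RFC 8484), achieves this, the sketch below builds a DoH GET request: the same plaintext DNS message a stub resolver would send, base64url-encoded into an ordinary HTTPS URL. Cloudflare's public resolver endpoint is used as an example, the helper name is ours, and nothing is actually sent:

```python
import base64
import struct

def doh_get_url(hostname: str,
                server: str = "https://cloudflare-dns.com/dns-query") -> str:
    """Wrap a plaintext DNS query in a DoH (RFC 8484) GET URL.

    The DNS message itself is unchanged; it simply travels inside an
    ordinary HTTPS request, so on-path observers see only a TLS
    connection to the resolver, not the name being looked up.
    """
    # Minimal RFC 1035 A-record query (ID=0 helps HTTP-level caching).
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    message = header + qname + struct.pack(">HH", 1, 1)
    # RFC 8484: base64url-encode the message and strip '=' padding.
    dns_param = base64.urlsafe_b64encode(message).rstrip(b"=").decode()
    return f"{server}?dns={dns_param}"

url = doh_get_url("cnn.com")
```

Sending that URL over HTTPS (with an `Accept: application/dns-message` header) returns a standard DNS response, encrypted in transit.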

Some technology companies and ISPs are already ahead of the curve and working on protecting their users. In 2019, Mozilla published its Resolver Policy for listing DNS-over-HTTPS (DoH) providers in Firefox’s settings options, followed by Comcast launching their Encrypted DNS Deployment Initiative (EDDI), and by Google defining the requirements to list DoH providers in Chrome’s settings.

These are not the only companies starting to take action to protect users' online data, but many more need to step up. And for DoH there’s no time like the present: the currently low number of devices using DoH eases the adoption curve for ISPs testing and deploying encrypted DNS services, making updates and maintenance easier for early adopters. As the number of devices using these services grows, more edge cases will surface and the same work will only become harder.

ISPs that prioritize data privacy can distinguish themselves with customers, partners and civil society. By taking steps to safely deploy secure and encrypted DNS communications to protect their users, ISPs like Comcast have taken the lead and increased goodwill with activists, technologists and vendors. ISPs that don’t adopt privacy-preserving measures will remain subject to increasing public scrutiny and critique. ISPs implementing their own encrypted DNS services will also avoid reliance on third-party implementations and increase DNS decentralization, to everyone's benefit.

Our global reality has been forever altered in the wake of this pandemic. Many of us are living most of our lives online. Inequities and exploitation that had been ignored have come into sharp focus, and the needs of a society in civil unrest add to the many reasons why the privacy and security of individuals is a right that needs to be enhanced and protected.

More than ever, customers are paying close attention to the companies that respect them, their families and their rights. DNS providers and ISPs must work together on the implementation and deployment of measures that will strengthen DNS. Choosing short-term profit over people is a losing business proposition, and the first movers will reap even larger rewards in consumer trust.

Joey Salazar is a software engineer, open source developer and Senior Programme Officer at Article 19, where she leads the IETF engagement program focusing on policies, standards, and protocol implementations.

Benjamin Moskowitz is the Director of Consumer Reports' Digital Lab, which conducts rigorous research and testing of connected products and advocates for consumers' rights online (lab.cr.org).

Filed Under: dns, encrypted dns, encryption, privacy


Reader Comments



  • Koby (profile), 22 Jun 2020 @ 12:21pm

    ISPs that prioritize data privacy can distinguish themselves with customers, partners and civil society.

    If there's competition, then yes. But in many areas with an ISP monopoly or duopoly, rollout is going to be slow, or perhaps nonexistent. This is why Mozilla is taking the lead over ISPs.

    • Anonymous Coward, 22 Jun 2020 @ 12:32pm

      Re:

      You do not have to use your ISP's DNS, and can choose one that is more respectful of privacy. Firefox already allows you to use DNS over HTTPS.

      • Koby (profile), 22 Jun 2020 @ 12:39pm

        Re: Re:

        If you use another DNS other than the service from your ISP, and the DNS is not encrypted, then I believe an unscrupulous ISP could still monitor, collect, and sell the data. While not perfect, DNS over HTTPS is a step in the right direction.

        I would just like for there to be more competition. I say it would begin to solve a lot of problems, like DNS privacy. Without competition, outsiders like Mozilla will be the most disruptive factor in this space.

        • crinisen (profile), 22 Jun 2020 @ 2:27pm

          Re: Re: Re:

          I would just like for there to be more competition. I say it would begin to solve a lot of problems, like DNS privacy. Without competition, outsiders like Mozilla will be the most disruptive factor in this space.

          Honestly, I don't care how many options I have. I don't want any of my phone companies being able to record my calls and I don't see why my ISP should be able to record my traffic. Of course the difference is we say that phone calls are protected and the internet is not. I agree that DNS over TLS or HTTPS are good ideas to combat rogue actors. I just have this silly idea that no provider should be able to listen in or record my communications. It does not matter if it is a phone call, a letter, or an IP packet.

          Before anyone makes a comment about traffic engineering, I am a network engineer and have worked for ISPs in the past. There is a HUGE difference between marking a packet for QoS and having any actual recording, logging, or any other information about a packet leave the ingress / marking device in any way. After that, my ISP should have zero input to what packets I ask them to carry. If I'm not sending said packets to their devices it's none of their business. Even if my payload is "illegal", well again, we don't allow the phone company to listen in to my calls in order to drop the ones that are making threats or playing music in the background.

          In summary, communications should be treated the same no matter the technology, and middle-men in the process should never dig deeper than needed to deliver, even if I am sending a post-card or plain-text packet.

  • Yes and No, 22 Jun 2020 @ 1:37pm

    No DoH

    Currently DNS works so well and hasn't been replaced because it is fast, efficient, and has no central server. Secure DNS, sure. Not so much to prevent the ISP from sniffing what you browse, but to make sure no one can spoof the answers.

    However, HTTPS/SSL/TLS is a cryptological dumpster fire. The last thing we need is something slow and buggy added on top for such a basic service as name resolution.

    So please, no DoH. What's wrong with DNSSEC to start with?

    • Anonymous Coward, 22 Jun 2020 @ 1:50pm

      Re: No DoH

      What's wrong with DNSSEC to start with?

      For one thing, it's only signed—not encrypted. So, it provides no privacy.

      The signing does allow alternate transmission mechanisms. For example, Techdirt could (in principle) send me the signed DNS records of every site referenced by the page I'm viewing; avoiding DNS queries entirely should speed things up. Or, having the DoH server's certificate signed via DNSSEC instead of CAs would avoid a big part of the "dumpster fire". (DNSSEC records can be verified offline; they don't require DNS access.)

  • Ehud Gavron (profile), 22 Jun 2020 @ 2:53pm

    That's cute!

    DNS:
    Client --> One UDP packet
    Server --> One UDP packet

    Encrypted DNS:
    Client --> establish connection
    Server --> me too
    Client --> Send certificate
    Server --> me too
    [verification CPU processing time left out of this network exchange]
    Client --> request
    Server --> reply
    Either side --> teardown connection
    Other side --> Yeah, sure

    Next time you go to a webpage hit "View Source" (Ctrl-U in Firefox variants) and count the number of domain names. Now multiply that by the difference between 2 UDP packets of under 128 bytes and an entire encryption setup, dialogue, query, and teardown.
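    If you want to automate that count, here's a rough sketch (the hostnames in the sample page are made up):

    ```python
    import re

    def count_unique_hosts(html: str) -> int:
        """Count distinct hostnames referenced in a page's source.

        Each one is a potential DNS lookup (before any caching).
        """
        hosts = re.findall(r"https?://([A-Za-z0-9.-]+)", html)
        return len(set(h.lower() for h in hosts))

    page = '''<img src="https://cdn.example.net/a.png">
    <script src="https://ads.example.com/t.js"></script>
    <a href="https://cdn.example.net/b">x</a>'''
    assert count_unique_hosts(page) == 2  # two hosts, three references
    ```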

    Sure, encryption is great. Go write it on a piece of paper and hide it in your pocket and give it to your secret crush in the classroom. NOBODY WILL KNOW. Bandwidth is low, latency is high, jitter is through the roof, but OH THANK GOD NOBODY KNOWS.

    Or just freaking live with DNS.

    E

    • Anonymous Coward, 22 Jun 2020 @ 3:21pm

      Re: That's cute!

      Now multiply that by the difference between 2 UDP packets of under 128Bytes and an entire encryption setup, dialogue, query, and teardown.

      All major browsers have adopted HTTP/2, which allows for keepalive-style communications with HTTP/2-compliant servers, even over TLS/SSL. Anyone implementing DoH will do so with an HTTP/2-compliant server (otherwise, they are morons). In that case, the setup and teardown steps that you cite should be no more than once per page, not once per individual domain name.

      Also, note that DNS clients do some amount of caching. So, many of the domain names seen on a page will not need to be looked up, because they were looked up recently.
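      As a toy sketch of that caching behavior (names and addresses here are purely illustrative):

      ```python
      import time

      class DnsCache:
          """Toy TTL cache showing why repeated lookups are often free."""

          def __init__(self):
              self._entries = {}  # name -> (address, expiry timestamp)

          def get(self, name):
              entry = self._entries.get(name)
              if entry and entry[1] > time.monotonic():
                  return entry[0]  # cache hit: no network query at all
              return None          # miss or expired: must query the resolver

          def put(self, name, address, ttl):
              self._entries[name] = (address, time.monotonic() + ttl)

      cache = DnsCache()
      cache.put("example.com", "93.184.216.34", ttl=300)
      assert cache.get("example.com") == "93.184.216.34"
      assert cache.get("example.org") is None
      ```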

      • Anonymous Coward, 22 Jun 2020 @ 3:48pm

        Re: Re: That's cute!

        In that case, the setup and teardown steps that you cite should be no more than once per page, not once per individual domain name.

        DoH tends to use the same server every time, so it would be a poor implementation to have even that many setup/teardown steps. There's little reason the connection can't remain open for hours at a time.

        HTTP/3 is set to be based on QUIC, which uses UDP with userspace congestion control. That should eliminate head-of-line blocking which could lead to latency spikes on packet loss.

        • Yes and No, 22 Jun 2020 @ 5:20pm

          Re: Re: Re: That's cute!

          Wait, you are going to put the functions of TCP into the app layer and use UDP. More dumb on top of dumb.

          Did you learn nothing from the mess that NFS was for many years? Everyone had their own idea of re-transmit timings and rules.

          On top of all this, SSL/TLS with Certs is just a mess. A bunch of things started using SSL as an easy way to encrypt between servers. Seemed like an easy thing to do; you didn't have to roll your own. You just wanted to keep things away from network sniffers.

          Then along comes your friendly IA department that can barely spell TLS and an out-of-the-box network scanner. The next thing you know you are having to buy Certs and figure out how to get app XYZ to use a supplied one rather than a simple self-signed Cert. App XYZ already has ways of making sure it is talking to the right thing; it doesn't need/want a Cert, and now you have to manage them. It's a PITA.

          Because of this you now have apps wanting to use a stripped-down set of libraries to do it the way SSH does. Which in my mind is better. There is no private key that allows a three-letter agency to decrypt any past traffic. No Certs, and it sure is much less buggy than SSL.

          • Anonymous Coward, 22 Jun 2020 @ 6:34pm

            Re: Re: Re: Re: That's cute!

            Wait, you are going to put the functions of TCP into into the app layer and use UDP. More dumb on top of dumb

            QUIC wasn't invented just for fun. It solves real problems that cannot be solved with TCP, because of the aforementioned head-of-line blocking. Some of the inventors published a paper in 2017 (there's a video attached too). See Section 2, "Motivation: Why QUIC?" The authors explain why a new transport protocol (which is what QUIC is) has to be based on UDP: there are too many deployed firewalls and middleboxes that simply discard anything that isn't TCP or UDP.

            The use of UDP allows QUIC packets to traverse middleboxes. QUIC is an encrypted transport: packets are authenticated and encrypted, preventing modification and limiting ossification of the protocol by middleboxes.

            Efforts to reduce latency in the underlying transport mechanisms commonly run into the following fundamental limitations of the TLS/TCP ecosystem. ... even modifying TCP remains challenging due to its ossification by middleboxes. Deploying changes to TCP has reached a point of diminishing returns, where simple protocol changes are now expected to take upwards of a decade to see significant deployment.

            The middlebox issue is why nobody uses SCTP, which was designed for similar purposes as QUIC. An SCTP service simply will not be accessible to some significant fraction of users. QUIC was meant to be actually deployable. Using UDP as a substrate is otherwise functionally identical to using IP as a substrate. The authors emphasize that working in userspace (which you have conflated with "the app layer") aided deployment and optimization by allowing the use of better development tools—including finding a bug in an algorithm that had originally been implemented in TCP in kernelspace (Section 7.4).

            Because of this you have apps now wanted to use stripped down set off libraries to do it the way SSH does. Which in my mind is better. There is no private key that allows a three letter agency to decide any past traffic.

            What are you talking about? An SSH server has a private key. That's how public-key cryptography works. You seem to be thinking of forward secrecy—but that's a property of the key exchange, not a matter of having a private key or not. Up-to-date TLS clients and servers support forward-secure key exchanges these days; the current TLS 1.3 standard even removed all non-forward-secure exchanges.

          • Anonymous Coward, 22 Jun 2020 @ 10:10pm

            Re: Re: Re: Re: That's cute!

            Wait, you are going to put the functions of TCP into into the app layer and use UDP.

            It's a mistake to view "application layer" as a statement about where the code runs. By function, QUIC would be transport layer—although the IETF tends to reject layering as a concept (cf. RFC3439 §3 "Layering Considered Harmful").

            In a few years, we'll see whether it was dumb, but I see little reason to think it is. The inflexibility of TCP connections being treated as single streams in operating systems causes demonstrable problems with no better practical solutions proposed (let's ignore out-of-band/urgent data, which has been a disaster).

            Really, QUIC will be implemented by libraries. Probably cross-platform libraries, which might actually make it more consistent than TCP across operating systems (each of which has different TCP re-transmit timing and rules).

            SSH ... No Certs

            Good news, everyone! OpenSSH added certificate support in 2010.

        • Anonymous Coward, 23 Jun 2020 @ 9:40pm

          Re: Re: Re: That's cute!

          DoH tends to use the same server every time, so it would be a poor implementation to have even that many setup/teardown steps. There's little reason the connection can't remain open for hours at a time.

          Except TTCTTU. (Time To Check, Time To Use)

          A connection that is up for hours at a time is very susceptible to compromise. Remember that the session key is retained by the server until the session terminates, during which time a well-placed tap could get it. Hell, if the connection is up for hours, a warrant could be signed by a judge and served to the server op. Legit or not. (Or just take your general jackboot and break the doors down.)

          Any way it happens, once it does that session is no longer secure, and the client will have no idea the session was compromised. Have fun speaking out against tyranny and oppression then.

          Not only is keeping the session around a bad idea security wise, it also requires a crap ton of server resources to maintain. Imagine all of the devices that query a DNS server daily. Imagine all of the requests that they make a day. Now imagine them all trying to connect to the same server to make every single one of those requests at once. How many requests do you think the server will be able to handle before the server buckles under the pressure? Never mind that some DNS requests are spurious in nature, and that some are made just as a security precaution. How many of these requests do you think can be handled? The clients are not set up to cache these responses, and many that are only do so for a short limited time for non-secure use. Your Nintendo Switch will never cache those responses. (After all you might be a dirty pirate posing as Nintendo.) Nor will anything when it involves DRM. Google? Depends on how much they wanna lock it down this week. Apple? Not gonna happen. You'd need a secure enclave just to store the responses.

          All around that's a Bad Idea.

          HTTP/3 is set to be based on QUIC, which uses UDP with userspace congestion control. That should eliminate head-of-line blocking which could lead to latency spikes on packet loss.

          Oh great yet another protocol meant to break the existing network. Here's something for those freedom fighters out there trying to remain anonymous:

          QUIC includes a connection identifier which uniquely identifies the connection to the server regardless of source. This allows the connection to be re-established simply by sending a packet, which always contains this ID, as the original connection ID will still be valid even if the user's IP address changes.

          Hope that unique ID to the DNS server that remained active for hours tracking the device across multiple different public wifi networks doesn't unmask you.

          Seriously, not a good idea. Not for journalists, the oppressed, nor your casual internet user. If anything these designs would increase the risk of successful unique tracking by others, not decrease it.

          • Anonymous Coward, 24 Jun 2020 @ 9:15am

            Re: Re: Re: Re: That's cute!

            Remember that session key is retained by the server until the session terminates. During which time a well placed tap could get it.

            The word "tap" doesn't normally refer to something sitting inside the server, which is where this would have to be (to work as described). If it's inside the server, what would prevent a minutes-long connection from being broken? This is the weirdest criticism I've seen and requires some serious citations.

            Hope that unique ID to the DNS server that remained active for hours tracking the device across multiple different public wifi networks doesn't unmask you.

            That's a fair point, but there's so much more that can be used to track people. We already have long-lived AJAX connections. At the very least, one should be restarting one's browser when moving like this (all connection IDs would be lost) and clearing all cookies etc. Preferably, shut down and restart a TAILS virtual machine.

          • Anonymous Coward, 26 Jun 2020 @ 10:25pm

            Re: Re: Re: Re: That's cute!

            > QUIC includes a connection identifier which uniquely identifies the connection to the server regardless of source. This allows the connection to be re-established simply by sending a packet, which always contains this ID, as the original connection ID will still be valid even if the user's IP address changes.

            Hope that unique ID to the DNS server that remained active for hours tracking the device across multiple different public wifi networks doesn't unmask you.

            You are way off base with this. A QUIC connection ID is not a permanent identifier -- it's a random number used for one connection and then discarded. Furthermore, the same connection ID is never used when migrating across different networks. Actually, there is not just a single connection ID, but a set of them, for exactly this reason. You should read Privacy Implications of Connection Migration in the draft spec:

            Using a stable connection ID on multiple network paths allows a passive observer to correlate activity between those paths. An endpoint that moves between networks might not wish to have their activity correlated by any entity other than their peer, so different connection IDs are used when sending from different local addresses...

            An endpoint MUST NOT reuse a connection ID when sending from more than one local address, for example when initiating connection migration... Similarly, an endpoint MUST NOT reuse a connection ID when sending to more than one destination address.

            A client might wish to reduce linkability by employing a new connection ID and source UDP port when sending traffic after a period of inactivity.

      • Anonymous Coward, 22 Jun 2020 @ 5:52pm

        IMC papers

        All major browsers have adopted HTTP/2, which allows for keepalive-style communications with HTTP/2-compliant servers, even over TLS/SSL. Anyone implementing DoH will do so with an HTTP/2-compliant server (otherwise, they are morons). In that case, the setup and teardown steps that you cite should be no more than once per page, not once per individual domain name.

        That's right. You pay the TCP and TLS setup overhead once, and then that cost is amortized over many queries. There were a couple of papers on this topic in last year's Internet Measurement Conference, with empirical measurements. There is additional overhead in terms of bytes and packets, but the effect on query latency and page load times is small.

        An Empirical Study of the Cost of DNS-over-HTTPS

        When comparing UDP-based DNS with DoH, we see that the UDP transport systematically leads to fewer bytes and fewer packets exchanged, with the median DNS exchange consuming only 182 bytes and 2 packets. A single DoH resolution in the median case on the other hand requires 5737 bytes and 27 packets to be sent for Cloudflare and 6941 bytes and 31 packets for Google. A single DoH exchange thus consumes more than 30 times as many bytes and roughly 15 times as many packets than in the UDP case. Persistent connections allow to amortize one-off overheads over many requests sent. In this case, the median Cloudflare resolution consumes 864 bytes in 8 packets, the median Google resolution 1203 bytes in 11 packets. While this is significantly smaller compared to the case of a non-persistent connection, DoH resolution still consumes roughly more than four times as many bytes and packets than UDP-based DNS does.

        Even though these results show that changing to DNS resolution via DoH leads to longer DNS resolution times, this does not necessarily translate into longer page load times. ... There is however little difference between page load time via legacy DNS or DNS-over-HTTPS: both resolution mechanisms achieve similar page load times.

        An End-to-End, Large-Scale Measurement of DNS-over-Encryption: How Far Have We Come?

        The reuse of connections has a great impact on the performance of DNS-over-Encryption. To amortize query latency, it is required that clients and servers should reuse connections when resources are sufficient. In current implementations, connection reuse is the default setting of popular client-side software and servers, with connection lifetime of tens of seconds. Under this lifetime, a study shows from passive traffic that connection reuse can be frequent (over 90% connection hit fraction). Therefore, we consider that connection reuse is the major scenario of DNS-over-Encryption queries, and take it as the main focus of our performance test.

        Finding 3.1: On average, query latency of encrypted DNS with reused connection is several milliseconds longer than traditional lookups. Connection reuse is required by the standard documents whenever possible. Our discussion in Section 4.1 also shows that connection reuse can be frequent for DNS-over-Encryption in practice. As shown in Figure 9, when connection is reused, encrypting DNS transactions brings a tolerable performance overhead on query time. Comparing the query latency of Cloudflare’s clear-text DNS, DoT and DoH, we are getting average/median performance overhead of 5ms/9ms (for DoT) and 8ms/6ms (for DoH) from our global clients.

  • Sok Puppette, 22 Jun 2020 @ 6:17pm

    Sorry, no.

    There are two issues here: integrity and confidentiality (aka privacy). These systems are not the answer for either one.

    Integrity is best solved end-to-end using DNSSEC. It's absolutely stupid to try to do it using hop-by-hop cryptography; you're trusting every hop not to tamper with the data.

    ... and just encrypting DNS traffic doesn't solve confidentiality either. It doesn't even improve confidentiality in the large.

    1. The adversary model is incoherent. If your ISP is spying on your DNS traffic, and you deny that to the ISP, then the ISP can just switch to watching where your actual data go. Yes, that may be slightly more costly for them, since otherwise they probably would have done it in the first place. It doesn't follow that the costs imposed on them are enough to justify the switch. In fact, they probably are not.
    2. All the proposals encourage centralization, which means that when (not if) some resolver that a lot of people are trusting goes bad, the impact is huge. Instead of a relatively large number of relatively survivable events, you create a few massive catastrophes.
    3. What this is fundamentally trying to be is an anonymity system (I guess a PIR system). Anonymity systems are HARD. Much, much harder than point to point cryptography. There are a million correlation and fault induction attacks, and in the case of DNS there are a million players in the protocol as well. There's been absolutely zero analysis of how easy or hard these methods may be to de-anonymize using readily observable data. They seem to be being designed by people who don't even understand the basics, and think they're helping when they charge ahead blindly.

    ... not to mention that it's just psychotic to tunnel a nice simple cacheable protocol like DNS over a horrific tower of hacks like HTTP.


    • Anonymous Coward, 22 Jun 2020 @ 9:46pm

      Re: Sorry, no.

      "If your ISP is spying on your DNS traffic, and you deny that to the ISP, then the ISP can just switch to watching where your actual data go."

      You're assuming that DNS is only used to resolve a name for the purpose of opening a direct IP connection to it. While that's the dominant use (and the primary use for which browser-vendors are pushing it), it has the potential to benefit other uses. Things like encryption key lookups or references to alternate (e.g. onion-routed) service addresses.

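To make that point concrete: a DNS question carries a record type, so the same lookup machinery that returns an address (A) record can return, say, a TXT record carrying keys or service pointers. A minimal sketch of a DNS query in RFC 1035 wire format, built with only the Python standard library (the transaction ID and domain here are arbitrary examples):

```python
import struct

def build_dns_query(name: str, qtype: int, txid: int = 0x1234) -> bytes:
    """Build a minimal DNS query packet (RFC 1035 wire format)."""
    # Header: id, flags (RD=1), QDCOUNT=1, AN/NS/ARCOUNT=0.
    header = struct.pack("!HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # Question name: length-prefixed labels, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    # QTYPE (1 = A, 16 = TXT, ...) and QCLASS (1 = IN).
    question = qname + struct.pack("!HH", qtype, 1)
    return header + question

TYPE_TXT = 16
packet = build_dns_query("example.com", TYPE_TXT)
```

Nothing in the packet ties it to "resolve a name, then connect to that address"; only the QTYPE changes between an address lookup and a key or pointer lookup.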

  • Smartassicus the Roman, 22 Jun 2020 @ 6:44pm

    Uummmmm

    You can set up encrypted DNS in about 5 minutes right now if you take the time to do it.


    • Ehud Gavron (profile), 23 Jun 2020 @ 2:04am

      Re: Uummmmm

      "You can set up encrypted DNS in about 5 minutes right now if you take the time to do it."

      I can set up real DNS in 3 seconds. That's 100x faster (two orders of magnitude). Sorry, not a win.

      Wanna go for a better score?

      E


      • Anonymous Coward, 23 Jun 2020 @ 3:25am

        Re: Re: Uummmmm

        Who care if it takes 3 seconds or a few minutes, when it is a task you do once while setting up a system or a deployable image?


      • Anonymous Coward, 23 Jun 2020 @ 7:44am

        Re: Re: Uummmmm

        "Consumers" do not set up DNS servers, pretty much ever. 5 minutes vs. 3 seconds doesn't mean a thing, even if the numbers are accurate—though I'm guessing DoH will eventually be a 3-second "apt install" away.

        Someone else posted runtime measurements showing "a tolerable performance overhead on query time". That's what matters, and is far from "100x". "Consumers" are going to get this automatically on a browser update, and get a bit of extra privacy without ever noticing the change or its performance impact.

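On the point upthread that encrypted DNS can be set up in minutes: on a Linux host running systemd-resolved, for instance, DNS-over-TLS takes two lines in /etc/systemd/resolved.conf followed by a service restart. A sketch, not a recommendation; Cloudflare's public resolver is used only as an example upstream:

```ini
[Resolve]
# Upstream resolver, with the hostname to validate its TLS certificate against.
DNS=1.1.1.1#cloudflare-dns.com
# "yes" requires DNS-over-TLS; "opportunistic" would allow clear-text fallback.
DNSOverTLS=yes
```

Applied with `systemctl restart systemd-resolved`. Browsers that ship DoH do the equivalent with no configuration at all.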

  • Ehud Gavron (profile), 23 Jun 2020 @ 6:50am

    Who care [sic] if

    Everyone who doesn't want to waste their time because the proposed solution is 100x more time consuming.

    "Efficient" and "pro-consumer" and "ergonomic" say so also.

    If you come up with something that meets those criteria, do tell. Until then, asking "Who care[sic]" just means YOU don't care. But you're nobody, so whether YOU care or not is not relevant. The market cares. Consumers care.

    E


    • Anonymous Coward, 23 Jun 2020 @ 7:29am

      Re:

      There's a cost to everything, security and privacy included. That you dislike any flavor of DNS that isn't from 1983 is irrelevant. The world is moving past you.


      • Ehud Gavron (profile), 23 Jun 2020 @ 9:56am

        Re: Re:

        I'm sorry you don't understand the protocols and have a "feeling" that anything is about me or what I dislike or not.

        When you discuss protocol features it's not about "like", "dislike", and "the world is moving past you" but ... wait for it... protocol features and how they work.

        Thank you for your opinion on what you feel my opinion is. As expected, you're wrong. Just as those who think that encrypted DNS as currently implemented is a magic panacea. You might want to look that word up before you respond, anonymous POS.

        E


  • K England, 23 Jun 2020 @ 8:16am

    DoH has a big problem

    Just wanted to mention that DoH causes every web browser to create a long-lived HTTPS connection to the DNS server. Web servers are designed for short-lived TCP connections. The creation of millions of long-lived TCP connections will cause DoH servers to fall over, as recently happened with Mozilla's DoH roll-out.
    Experts are recommending DNS-over-TLS and complaining about many features of DoH. For these reasons, DoH is likely to fail even with Mozilla and Google behind it.


