- Chris McKinlay used his PhD-level math skills to improve his dating experience on OkCupid. Using 12 fake OkCupid accounts and some Python scripts, he scraped millions of responses from 20,000 women... and after 88 first dates with women he found with his own algorithms, he's in a serious relationship. [url]
- About 40 million people in the US are using online dating services like eHarmony, Match.com, etc. And if those dating site users are anything like their counterparts in China, people aren't as fussy about their dates as their profiles might imply -- based on a study of hundreds of thousands of dating interactions on baihe.com (which has 60 million registered users). [url]
- Alli Reed performed her own experiment on OkCupid, trying to determine how awful a woman's profile could be before men would stop messaging her. The one part of her fake profile that wasn't awful was the very attractive profile picture, so the answer should be somewhat obvious. [url]

- Vinay Deolalikar posted his "proof" that P!=NP a few years ago, but it didn't quite stand up to the scrutiny of some mathematicians -- and you, too, can dismiss an extraordinary proof by watching out for a few telltale signs. It's hard to refute *everyone* who claims to have a P!=NP proof, but there's a roadmap for how to avoid wasting other mathematicians' valuable time. [url]
- A. Garrett Lisi has a grand unifying theory of the universe, but maybe he should stick to surfing. Lisi's TED talk is amazingly devoid of physics, but the Large Hadron Collider may have the final say about whether "E8" provides any unique insights on the universe. [url]
- In 2012, Japanese mathematician Shinichi Mochizuki posted 500+ pages (on the internet!) that "might" prove the ABC Conjecture. Mochizuki refuses to discuss his proof, and so far, no one else has really been able to tell him he's wrong. [url]

While this is nice, this is just one patent in that particular lawsuit, and Uniloc has dozens of other patents that it's using in other lawsuits. And Uniloc shows no signs of slowing down. Just the other day it filed 12 new lawsuits.

Claim 1, then, is merely an improvement on a mathematical formula. Even when tied to computing, since floating-point numbers are a computerized numeric format, the conversion of floating-point numbers has applications across fields as diverse as science, math, communications, security, graphics, and games. Thus, a patent on Claim 1 would cover vast end uses, impeding the onward march of science.

One of the key problems is that software patents are essentially patents on mathematical algorithms -- sets of instructions for carrying out a calculation. Since it has long been a principle that you can't patent mathematical formulae or laws of nature, there is a tension there: if software is just mathematics, why should you be able to patent it at all? New Scientist points to an interesting article in the April 2013 issue of Notices of the American Mathematical Society, in which David A. Edwards proposes a radical way of solving that conundrum (pdf):

In particular, he believes it should be possible to patent mathematics itself:

At present, only those things which are made by man are patentable. Thus, the courts have allowed new forms of bacteria which have been engineered to have useful properties using recombinant DNA techniques to be patented but would not allow such a bacterium to be patented if it were naturally occurring even if it were newly discovered. This is the basis for the nonpatentability of computer programs. They are algorithms, which are essentially mathematical formulas, which -- as everyone knows -- are "eternal" and hence discovered by man and not created by him. This argument which, to say the least, is philosophically controversial, leads to our present unfortunate policy. From an economic point of view, there is no rationale for distinguishing between discovery and invention, and we would advocate dropping entirely any subject matter restrictions whatsoever on what can be patented. One should be able to patent anything not previously known to man.

One of his arguments is that this would spur people to make more discoveries. But that presupposes that mathematicians aren't already striving to do so for glory, peer esteem and tenure -- and there's no evidence they need the extra incentive. The same argument is sometimes made in support of software patents -- that they stimulate the production of more software. But that overlooks the fact that the computer industry thrived for decades before the introduction of software patents, and that companies like Microsoft grew into hugely profitable enterprises without them.

That, of course, is exactly what has happened since the introduction of software patents, leading to the following situation today:

In a memo to his senior executives, Bill Gates wrote, "If people had understood how patents would be granted when most of today's ideas were invented, and had taken out patents, the industry would be at a complete standstill today." Mr. Gates worried that "some large company will patent some obvious thing" and use the patent to "take as much of our profits as they want."

In the smartphone industry alone, according to a Stanford University analysis, as much as $20 billion was spent on patent litigation and patent purchases in the last two years -- an amount equal to eight Mars rover missions. Last year, for the first time, spending by Apple and Google on patent lawsuits and unusually big-dollar patent purchases exceeded spending on research and development of new products, according to public filings.

Since patents give the patentee control only over the commercial applications of a discovery or invention, granting patents on mathematical formulas, laws of nature, and natural phenomena would have no negative side effects on pure science.

In 2002, the Court of Appeals for the Federal Circuit dramatically limited the scope of the research exemption in Madey v. Duke University, 307 F.3d 1351, 1362 (Fed. Cir. 2002). The court did not reject the defense, but left only a "very narrow and strictly limited experimental use defense" for "amusement, to satisfy idle curiosity, or for strictly philosophical inquiry." The court also precluded the defense where, regardless of profit motive, the research was done "in furtherance of the alleged infringer's legitimate business." In the case of a research university like Duke University, the court held that the alleged use was in furtherance of its legitimate business, and thus the defense was inapplicable.

Follow me @glynmoody on Twitter or identi.ca, and on Google+

- On March 14, 1:59 pm, the Exploratorium in San Francisco (which invented Pi Day in 1988) will be featuring an art installation called "Pi In The Sky." Created by artist ISHKY with the help of a team of artists, designers, and scientists, "Pi In The Sky" will be visible in the sky at more than 10,000 feet over San Francisco. Five synchronized planes equipped with dot matrix technology will skywrite the first 314 digits of pi, with each digit measuring over a quarter-mile in height. [url]
- How "old" is pi? The ancient Babylonians were able to come up with a close approximation of pi (3.125) around 2000 B.C. [url]
- How many digits of pi are required to be able to measure the circumference of the observable universe, with an accuracy of less than the width of a hydrogen atom? The answer is... 39. [url]

- Does pi contain every finite sequence of digits? The answer to that question may not be known, but the first trillion or so digits of pi appear to be statistically random -- with the digits 0-9 appearing with evenly distributed frequency. [url]
- It's possible to calculate the nth digit of pi without calculating every previous digit. So the gazillionth digit of pi can be verified, if you really need to know it. [url]
- If you're thinking about coming up with a new way to calculate pi, you can check your work for the first several trillion digits. Beyond about 10 trillion digits, you're into record breaking territory, and you'll need to adopt some other strategies. [url]
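
The nth-digit trick mentioned above can be illustrated with the Bailey-Borwein-Plouffe (BBP) formula, which extracts hexadecimal (not decimal) digits of pi without computing any of the earlier ones. Here's a minimal sketch in Python (the function name is my own):

```python
# BBP: pi = sum_{k>=0} 16^-k * (4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6)).
# Multiplying by 16^n and keeping only fractional parts lets us read off
# a hex digit of pi deep in the expansion without the digits before it.

def pi_hex_digit(n: int) -> int:
    """Return the (n+1)th hexadecimal digit of pi after the point."""
    def series(j: int) -> float:
        # Fractional part of sum_k 16^(n-k) / (8k + j).
        s = 0.0
        for k in range(n):  # left sum: modular exponentiation keeps terms small
            s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
        k, t = n, 0.0
        while True:         # right tail: terms shrink geometrically
            term = 16.0 ** (n - k) / (8 * k + j)
            if term < 1e-17:
                break
            t += term
            k += 1
        return (s + t) % 1.0

    frac = (4 * series(1) - 2 * series(4) - series(5) - series(6)) % 1.0
    return int(frac * 16)

# pi = 3.243F6A88... in hex, so the first fractional digits are 2, 4, 3, F.
print([pi_hex_digit(i) for i in range(4)])  # → [2, 4, 3, 15]
```

Floating-point precision limits this sketch to roughly the first few million digits; the record-breaking computations mentioned above use arbitrary-precision arithmetic instead.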

I'm reminded of that, after seeing Dealbreaker's headline about how world famous mutual fund investor, Bill Gross, of PIMCO, has patented the methodology for his bond fund -- or, as Dealbreaker correctly points out, he "patented a way to count." Indeed, the patent in question, US Patent 8,306,892, is somewhat hideous, describing not much more than the concept of an algorithm that weights regions based on GDP. The key claim:

A computer-implemented method of managing a fixed income financial index, the method comprising: storing in a computer memory a regional weight for each of a plurality of regions of the world, each of the regional weights based at least in part on a gross domestic product for the region; storing in a computer memory, for each of the plurality of regions, a category weight for each of a plurality of categories of fixed income financial instruments issued from the region; storing in a computer memory asset data for a universe of fixed income instruments representing each of the plurality of categories of instruments in each of the plurality of regions, the fixed income instruments comprising one or more of the following: (i) fixed income securities, (ii) fixed income derivatives, or (iii) fixed income forwards; programmatically allocating, via execution of instructions by one or more computer processors, one or more constituent instruments from the universe of fixed income instruments to each of the plurality of categories in each of the plurality of regions; programmatically determining a constituent weight for each of the constituents allocated to each of the plurality of categories in each of the plurality of regions; programmatically calculating a subindex for each of the plurality of categories in each of the plurality of regions, each subindex based at least in part on the allocated constituents and the respective constituent weights, wherein the constituent weights for a first subindex comprise market capitalization weights and the constituent weights for a second subindex comprise gross-domestic product weights; and programmatically transforming the subindices, the category weights, and the regional weights into a value for the financial index.

It doesn't take a patent specialist to figure out that this is basically patenting a spreadsheet for weighting countries on a few different factors. It seems to be the kind of claim the Supreme Court rejected in Gottschalk v. Benson:

It is conceded that one may not patent an idea. But in practical effect that would be the result if the formula for converting BCD numerals to pure binary numerals were patented in this case. The mathematical formula involved here has no substantial practical application except in connection with a digital computer, which means that if the judgment below is affirmed, the patent would wholly pre-empt the mathematical formula and in practical effect would be a patent on the algorithm itself.
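
Stripped of the "computer-implemented" boilerplate, the claim amounts to a nested weighted average. A toy sketch, with all weights and subindex values invented for illustration:

```python
# Hypothetical numbers only -- this just shows the shape of the "algorithm":
# GDP-based regional weights times category weights times subindex values.
regional_weights = {"americas": 0.5, "europe": 0.3, "asia": 0.2}
category_weights = {"securities": 0.6, "derivatives": 0.4}

# Subindex values per (region, category), e.g. computed from constituents.
subindices = {
    ("americas", "securities"): 102.0, ("americas", "derivatives"): 98.0,
    ("europe", "securities"): 101.0, ("europe", "derivatives"): 99.5,
    ("asia", "securities"): 100.5, ("asia", "derivatives"): 97.0,
}

# The whole "invention": a GDP-weighted, category-weighted average.
index_value = sum(
    regional_weights[r] * category_weights[c] * subindices[(r, c)]
    for r, c in subindices
)
print(round(index_value, 2))  # → 100.14
```

That is, a handful of multiplications and an addition -- exactly the sort of computation any spreadsheet performs.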

Lawler insists the math doesn't add up because without that marketing push, the number of subscribers would be much lower. HBO claimed that Lawler's math was right. And it may be. For now. But that's really dangerous thinking.

More importantly, it wouldn't include the cost of sales, marketing, and support -- and this is where HBO would really get screwed. Going direct to online customers by pitching HBO GO over-the-top would mean losing the support of its cable, satellite, and IPTV distributors. And since the Comcasts and the Time Warner Cables of the world are the top marketing channel for premium networks like HBO, it would be nearly impossible for HBO to make up for the loss of the cable provider's marketing team or promotions.

Think about it: Every time someone signs up for cable or satellite service, one of the inevitable perks is a free six- or 12-month subscription to HBO. And those free subscriptions are rarely, if ever, cancelled once the trial period ends.

We've pointed out before that it's quite tempting for legacy players to think that they can wait out disruptive innovation. They talk about how the new products and services aren't good enough or don't make enough money to bother getting into that space. Often they'll directly talk about how the new services don't make the same amount of revenue as the old ones (or they'll make some crack about "dollars into dimes.") And, of course, they insist that

MG Siegler has a great post talking about this very concept as it relates to HBO, responding to Lawler (again) and his recent interview of an HBO exec during a panel at TechCrunch Disrupt. Once again, HBO insisted that Lawler was right and that "the math didn't make sense." But Siegler points out, correctly, that

That, in a nutshell, is what most companies fail to do. It's why Clayton Christensen's book sells so well, even though very, very few companies have any idea how to do what Apple did and "eat its own." But the point is there. If you focus on "the math," you're going to miss the market and be way, way too late. Back to Siegler:

Put it all together and you get a remarkable story about a device that, under the normal rules of business, should not have been invented. Given the popularity of the iPod and its centrality to Apple's bottom line, Apple should have been the last company on the planet to try to build something whose explicit purpose was to kill music players. Yet Apple's inner circle knew that one day, a phone maker would solve the interface problem, creating a universal device that could make calls, play music and videos, and do everything else, too -- a device that would eat the iPod's lunch. Apple's only chance at staving off that future was to invent the iPod killer itself. More than this simple business calculation, though, Apple's brass saw the phone as an opportunity for real innovation.

He's right. And the more you look at the economics of innovation, the easier it is to understand why innovation always beats math. It's because "the math" that people do is of a static world, for the most part. They use past performance and metrics built on a different market. They don't understand how quickly a new market grows, and how much larger its overall potential is. And that's because we have difficulty in mentally dealing with non-zero sum markets, preferring to think that it's a one-for-one switch. But it's not. Innovation expands markets in new and unexpected ways, often quite rapidly (though also, deceptively slowly at first, because the growth is often in a tangential market that people don't even recognize).

Moore's statement about HBO is correct. The math is not in favor of selling HBO access directly to consumers. But if we're just thinking about this from a pure product perspective, I don't think anyone would disagree that this is what we all want. HBO is choosing not to build the service we will love; they're choosing the short-term money. The safe bet. The math.

But if they don’t diverge from this path, it will lead to their demise. Innovation always beats math, eventually. That, you can take to the bank.

So they come up with spreadsheets and "models" that try to predict when the math says it's time to switch. And all of that time

I will admit that my initial reaction to this article was to scoff and think that it's ridiculous. Understanding basic algebra, to me, seems fundamental to understanding a variety of other important things -- including some forms of logic and statistics. So, I wondered how dropping algebra as a requirement might make those already-struggling fields even worse.

California's two university systems, for instance, consider applications only from students who have taken three years of mathematics and in that way exclude many applicants who might excel in fields like art or history. Community college students face an equally prohibitive mathematics wall. A study of two-year schools found that fewer than a quarter of their entrants passed the algebra classes they were required to take.

"There are students taking these courses three, four, five times," says Barbara Bonham of Appalachian State University. While some ultimately pass, she adds, "many drop out."

Another dropout statistic should cause equal chagrin. Of all who embark on higher education, only 58 percent end up with bachelor's degrees. The main impediment to graduation: freshman math. The City University of New York, where I have taught since 1971, found that 57 percent of its students didn't pass its mandated algebra course. The depressing conclusion of a faculty report: "failing math at all levels affects retention more than any other academic factor." A national sample of transcripts found mathematics had twice as many F's and D's as other subjects.

However, Hacker's piece actually suggests something of a solution: potentially replacing algebra with what he calls "citizen statistics."

I will admit to being unsure how such a class will work.

Instead of investing so much of our academic energy in a subject that blocks further attainment for much of our population, I propose that we start thinking about alternatives. Thus mathematics teachers at every level could create exciting courses in what I call "citizen statistics." This would not be a backdoor version of algebra, as in the Advanced Placement syllabus. Nor would it focus on equations used by scholars when they write for one another. Instead, it would familiarize students with the kinds of numbers that describe and delineate our personal and public lives.

It could, for example, teach students how the Consumer Price Index is computed, what is included and how each item in the index is weighted -- and include discussion about which items should be included and what weights they should be given.
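
As a toy version of that CPI exercise (all weights and price changes invented for illustration), the index math is just a weighted average:

```python
# A "citizen statistics" sketch: inflation as a weighted average of
# price changes, with the weights themselves open for debate.
basket = {
    # item: (weight in the index, price change vs. last year)
    "housing": (0.40, 0.03),
    "food":    (0.15, 0.05),
    "energy":  (0.10, 0.10),
    "other":   (0.35, 0.02),
}

# Each item's price change counts in proportion to its weight.
inflation = sum(weight * change for weight, change in basket.values())
print(f"{inflation:.2%}")  # → 3.65%
```

Changing a single weight (say, housing from 0.40 to 0.30) visibly moves the headline number, which is exactly the kind of discussion the proposed course would invite.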

Last Friday's Financial Times had some interesting numbers.

- **Fact 1:** According to analysts, the New York Times *only* needs to convert 1 to 10 per cent of the online visitors in order for the model to pay off.
- **Fact 2:** NY Times chief executive Janet Robinson has stated that they only expect about 15 per cent of visitors to encounter the paywall, since visitors can read 20 articles per month for free.
- **Fact 3:** Full website access and the mobile app are bundled for $15 per month. For the iPad app + web you pay $20 per month. $35 for all three.
- **Fact 4:** One analyst argues that the NY Times could earn $66m per year if it converted just 1 per cent of the visitors. This would mean they go from paying nothing, to paying (at least) $195 a year.

There is no way these numbers add up. Consider fact 1 and fact 2. First of all, converting even 1 per cent might actually not be all that easy, let alone 10 per cent. Secondly, the 1 per cent figure is misleading: since only 15 per cent of visitors ever encounter the paywall, converting 1 to 10 per cent of all visitors means converting 1 to 10 out of every 15 people who actually hit it. So they actually have to convert 6 to 66 (!) per cent of them.
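
The arithmetic behind that paragraph, spelled out as a quick sanity check using the figures quoted above:

```python
# Fact 2: only 15% of visitors ever see the paywall.
paywall_share = 0.15

# Fact 1: the model needs 1% to 10% of ALL visitors to convert. Restate that
# as the conversion rate needed among only those who actually hit the paywall.
for target in (0.01, 0.10):
    needed = target / paywall_share
    print(f"{target:.0%} of all visitors -> {needed:.1%} of paywalled visitors")
```

This prints 6.7% and 66.7%, which the post rounds down to "6 to 66 per cent."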

Next, the pricing might be too high. $15 per month is a lot for consumers who are not used to paying for news online, especially since there's no additional value, as Mike commented last week. I'm not saying nobody will pay, but dragging in 6 to 66 per cent of the visitors will be challenging, to say the least.

I cannot imagine this paywall being successful. They can probably kiss the $40m invested in its development goodbye.

This fascinates me because I was definitely taught that first method as a kid, but what really gets me is that I ended up teaching myself the second method, because it seemed like a fun trick that made it easier to multiply larger numbers in my head (shocking news: I was a bit of a nerd). But once I had taught myself the latter method, I could never figure out why that wasn't more common. Apparently, I was just ahead of my time.

**The Way We Used To Multiply**

The old way to multiply required a student to add the products of 36 x 4 and 36 x 2. The trick is to add that 0 at the end of the second product.

**How Kids Learn To Multiply Now**

These days, students add four products to get the answer.
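
The "four products" method described above, sketched in Python for the two-digit example 36 x 24 (the helper name is mine). The old way computes 144 and 720 and adds them; the new way splits each number by place value first:

```python
def partial_products(a: int, b: int):
    """Multiply two two-digit numbers via the four partial products."""
    a_tens, a_ones = divmod(a, 10)
    b_tens, b_ones = divmod(b, 10)
    products = [
        (a_tens * 10) * (b_tens * 10),  # 30 x 20
        (a_tens * 10) * b_ones,         # 30 x 4
        a_ones * (b_tens * 10),         # 6 x 20
        a_ones * b_ones,                # 6 x 4
    ]
    return products, sum(products)

products, total = partial_products(36, 24)
print(products, total)  # → [600, 120, 120, 24] 864
```

The round numbers (600, 120, 120, 24) are exactly what makes this version easier to do in your head.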

The other interesting thing that hit me was the article's explanation for why things have shifted:

So for all the times kids claim that they shouldn't need to learn mathematics because they'll never need aspects of it in real life, it's nice to see that the education system is actually adapting:

"That's largely to reflect the different needs of society," he says. "No one ever in their real life anymore needs to -- and in most cases never does -- do the calculations themselves."

Computers do arithmetic for us, Devlin says, but making computers do the things we want them to do requires algebraic thinking. For instance, take a computer spreadsheet. The computer does all the calculations for you automatically. But you have to write the macros that tell it what calculations to do -- and that is algebraic thinking.

"You cannot become good at algebra without a mastery of arithmetic," Devlin says, "but arithmetic itself is no longer the ultimate goal." Thus the emphasis in teaching mathematics today is on getting people to be sophisticated, algebraic thinkers.
