jducoeur: (Default)

Signal boost: the Republicans are once again trying to destroy Net Neutrality, with the new FCC Chair making the usual disingenuous BS arguments about it. And this time, they've made it ridiculously difficult to actually comment on it.

Fortunately, John Oliver and Last Week Tonight have jumped in to make life easier. If you go to GoFCCYourself.com, it cuts through most of the hoops -- just look for the "+ Express" link on the right-hand side, click it, and you can enter your comment.

This is important stuff: the big ISPs have shown themselves to be pretty untrustworthy, and willing to take undue advantage of their position. We need to stand up for Net Neutrality in force, immediately, if we're to have any hope of keeping it...

MaidSafe

Jul. 28th, 2014 02:09 pm
jducoeur: (Default)
Fascinating article here on TechCrunch, about a longtime startup called MaidSafe, which is starting to stick its head up after *many* years in stealth mode.

Summary: they're attempting to essentially rework the upper levels of the Internet/Web stack, replacing them with a general architecture for peer-to-peer applications. There's no such concept as a "server" in this architecture -- all apps are decentralized, with redundant data, encrypted communications, and a Bitcoin-style currency native to the network.
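To make the "no servers" idea concrete, here's a toy sketch -- in Python, and emphatically *not* MaidSafe's actual API; every name below is invented -- of content-addressed chunks stored redundantly across peers:

    import hashlib

    REPLICATION = 3  # hypothetical redundancy factor

    class ToyNetwork:
        """A handful of peers, each with its own little chunk store."""
        def __init__(self, peer_count=10):
            self.peers = [{} for _ in range(peer_count)]

        def _peers_for(self, chunk_id):
            # Deterministically pick REPLICATION peers for this chunk id.
            start = int(chunk_id, 16) % len(self.peers)
            return [(start + i) % len(self.peers) for i in range(REPLICATION)]

        def put(self, data: bytes) -> str:
            chunk_id = hashlib.sha256(data).hexdigest()  # content addressing
            for p in self._peers_for(chunk_id):
                self.peers[p][chunk_id] = data           # redundant copies
            return chunk_id

        def get(self, chunk_id: str) -> bytes:
            # Any surviving replica will do -- no central server involved.
            for p in self._peers_for(chunk_id):
                if chunk_id in self.peers[p]:
                    return self.peers[p][chunk_id]
            raise KeyError(chunk_id)

    net = ToyNetwork()
    cid = net.put(b"hello, decentralized world")
    assert net.get(cid) == b"hello, decentralized world"

The key property: a chunk's address *is* its hash, so any peer holding a copy can serve it, and nobody has to trust a central host.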

Frankly, it's a remarkable shoot-for-the-moon longshot -- one of the relatively rare startups that is *much* more ambitious than Querki. I would give them longish odds of success -- they're quite explicitly disrupting everyone's established business plans -- but their architecture sounds about right. (Indeed, it's along the lines of my idle "how *should* applications work?" musings.) It's very exciting, in a very geeky kind of way, and deserves to be taken seriously.

Worth keeping an eye on, especially for me: if this starts to go anywhere, I'm going to need to figure out how Querki would work in this model. I'm fairly sure it could be made to do so -- I've had the question "How would a decentralized version of Querki work?" in the back of my mind from the beginning -- but it would require enormous changes, both technical and business...
jducoeur: (Default)
Today's big deal seems to be a plethora of stories about new AI techniques to be applied to search. There are some good points there, about notions like using facial recognition to better understand photographs, or applying natural language techniques so that the search engines can understand real language instead of requiring "keywordese".

That said, there's still a lot of room for improvement using pure brute-force techniques, if you accept the notion of tracking previous searches. A simple case is progressive search refinement -- essentially playing Blind Man's Bluff with the search results, where you could say "cooler" and "warmer" as you wander through pages, progressively triangulating in on the more relevant-looking pages and downgrading the clusters that seem less relevant. Another is remembering my previous searches *and* feeding them into the PageRank algorithms, so that pages I previously found useful -- and pages related to them -- would get extra weight in the network. (It's possible that this latter is already being done, but I have no evidence of it.)
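For instance, a crude "warmer/cooler" step might look something like this Rocchio-style nudge (a hypothetical sketch -- the vectors stand in for whatever page representation the engine really uses):

    # Nudge the query vector toward "warmer" pages, away from "cooler" ones.
    def refine(query_vec, page_vec, warmer, alpha=0.2):
        sign = 1.0 if warmer else -1.0
        return [q + sign * alpha * p for q, p in zip(query_vec, page_vec)]

    query = [0.5, 0.1, 0.0]
    page  = [0.9, 0.0, 0.3]

    query = refine(query, page, warmer=True)   # "warmer": pull toward this page
    query = refine(query, page, warmer=False)  # "cooler": push away again

Each round of feedback reshapes the query, so the next batch of results drifts toward what the user is actually hunting for.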

Another subtle but serious improvement would be to make the Google Toolbar smarter about paying attention to how I *use* the search results. A page that I click on would be weighted higher than one I didn't. The *last* one I clicked on would be weighted higher than the rest, on the theory that this is typically the one that provided the right answer. Also, pages that I left open for a meaningful stretch of time would have their weights increased over ones that I closed quickly. This would essentially build PageRank-style ratings automatically, working around the fact that most people aren't going to go to the effort of clicking on the "good" and "bad" icons.
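Putting numbers on it, those toolbar signals might combine into an implicit score roughly like this (field names and weights invented purely for illustration):

    from dataclasses import dataclass

    @dataclass
    class ResultEvent:
        url: str
        clicked: bool
        was_last_click: bool
        dwell_seconds: float  # how long the page stayed open

    def implicit_score(ev: ResultEvent) -> float:
        score = 0.0
        if ev.clicked:
            score += 1.0                              # clicked beats not clicked
        if ev.was_last_click:
            score += 1.0                              # last click likely had the answer
        if ev.dwell_seconds >= 30:
            score += min(ev.dwell_seconds / 60, 2.0)  # real reading time counts extra
        return score

    events = [
        ResultEvent("http://a.example", clicked=True,  was_last_click=False, dwell_seconds=5),
        ResultEvent("http://b.example", clicked=True,  was_last_click=True,  dwell_seconds=180),
        ResultEvent("http://c.example", clicked=False, was_last_click=False, dwell_seconds=0),
    ]
    for ev in sorted(events, key=implicit_score, reverse=True):
        print(ev.url, implicit_score(ev))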

All of these ideas would be controversial, to be sure -- the privacy implications are quite real, especially in light of last week's AOL search-history debacle. Still, this is an area where there is at least a genuine economic tradeoff of privacy for utility, so the potential privacy loss buys something of value. (As opposed to many modern privacy intrusions, which offer no value at all to the person intruded upon.) I don't take it for granted that everyone would be repulsed by the potential privacy loss -- heck, I'm not even sure which way I'd come down on it. If a company made a real effort to improve the privacy of the history recording (say, by storing URLs as encoded hashes rather than as plaintext, so that my search history could not simply be read out later), I could easily see myself accepting the risk...
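A sketch of that hashed-history idea (the key handling here is invented -- the point is just the one-way property):

    import hashlib, hmac

    SECRET_KEY = b"per-user secret that never leaves the client"  # assumption

    def url_token(url: str) -> str:
        # Keyed hash: easy to recompute for a URL you have in hand,
        # infeasible to reverse into the URL from the stored token.
        return hmac.new(SECRET_KEY, url.encode(), hashlib.sha256).hexdigest()

    history = {url_token("http://example.com/good-answer")}

    # You can still ask "have I found this page useful before?"...
    print(url_token("http://example.com/good-answer") in history)  # True
    # ...but the stored history is just opaque digests, not readable URLs.
    print(history)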