[Mostly for the techies]
I was just reading this recent article in TechCrunch -- it's light on detail, but makes the likely-correct point that APIs are going to become ever more important in the industry in coming years. So I'm pondering: what should Querki be doing with them?
At the highest level, the notion's been around for a while -- I think Galen mentioned the idea of API integration a couple of years ago now -- but Querki's original architecture wasn't API friendly because the QL processing pipeline was too synchronous. Now that that's fixed, I don't think there's any reason we *can't* build API integration into Querki, allowing a Querki Space to call APIs of external services. I'm just not sure what it *means* yet, and my rule for Querki is that I don't build features until I have clear use cases.
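Just to make the asynchrony point concrete, here's a very rough sketch -- every name in it is invented for illustration, none of this is actual Querki code -- of what letting a Space call out to an external service might look like under the hood:

```scala
import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}
import scala.concurrent.{ExecutionContext, Future}
import scala.jdk.FutureConverters._

// Sketch only: the point is just that an external call can return a Future,
// which the (no-longer-too-synchronous) QL pipeline can compose without blocking.
object ExternalApiCall {
  private val client = HttpClient.newHttpClient()

  /** Fetch the body of an external URL asynchronously. */
  def fetchJson(url: String)(implicit ec: ExecutionContext): Future[String] = {
    val request = HttpRequest.newBuilder(URI.create(url)).GET().build()
    client
      .sendAsync(request, HttpResponse.BodyHandlers.ofString())
      .asScala               // CompletableFuture -> Scala Future
      .map(_.body())
  }
}

// A hypothetical QL function could then be implemented roughly as:
//   fetchJson("https://api.example.com/items/42").map(json => parseAndRender(json))
// where parseAndRender is a placeholder for whatever the Space wants to do with the result.
```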
(NB: I'm specifically talking about Querki Spaces calling external APIs here. Querki itself already exposes an extremely rich API -- the rewrite I've done over the past year effectively means that Querki is *entirely* API-based, and simply provides a standard Client for working with those APIs. They're nowhere near stable enough yet, and are completely lacking stuff like documentation, examples, validation suites and all those elements you need in order to make an API real, but under the hood everything is now driven by a straightforward JSON API that you can code to.)
Anyway: I'm looking for ideas. Querki by now makes it fairly easy to build Spaces for your data needs; my current Spaces include things like:
- Kate's Gallery of Cross-Stitch Projects
- LARP development and management
- The public Period Games Homepage, with links to a zillion pages on the topic, organized by Game, Source, Topic and so on
- The new (still in progress) Carolingian Cooks Guild database of recipes
- My personal Filk Songbook
- House Lochleven's Inventory Management Space
The question is, in what ways would API access make this better? What could you do more easily in this world, if you could, reasonably easily, tell Querki to go out and do *something* with a particular API from some other service?
None of this is going to happen soon, mind -- I have lots of more-critical irons in the fire at the moment. But I'd like to add some examples to Querki's Use Cases, to help understand where we should eventually be going with API access.
So the floor is open for brainstorming. Anybody got suggestions? Anything data-centric you've always wanted to build, that would work best if you could mix in specific outside data?
(no subject)
Date: 2015-10-13 01:52 pm (UTC)

But do bear in mind that anytime one product/service has the capability to call any other product's surface, the first p/s has the ability to be a launch platform against any "attackable surface" of the second.
That is to say - it might be useful, as part of that external API interface, to set up some sort of throttling or mediation.
For example, at least as of a few years ago, Twitter had a public search API that you could call. It had per-IP-address limits on the number of calls per hour, which it enforced, but it only gave vague guidelines as to the maximum number of calls per hour that could be made. Its private interface (which required a unique encryption key that one could purchase) was virtually unlimited.
If one Q-Space calls Twitter 50 times an hour - no big deal. Ramp that up to 8 Q-Spaces calling Twitter 50 times an hour, and everyone is blocked for an unspecified period of time.
Beyond that, I've obviously implied that some APIs (like Google Maps or Twitter) allow calls when a secure credential is used. You'll want to be sure that you can handle and manage those credentials, avoid caching them unnecessarily, and guard against theft.
(no subject)
Date: 2015-10-13 02:05 pm (UTC)

Mmm -- good point. I hadn't thought about that cumulative effect, but you're certainly correct. That's an interesting tragedy-of-the-commons problem; I'll have to ponder how best to handle it.
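One possible shape for that mediation, purely as a sketch (the service names and limits below are made up): a single counter per external service, shared across *all* Spaces, so Querki as a whole stays under the provider's ceiling:

```scala
import java.util.concurrent.ConcurrentHashMap
import java.util.concurrent.atomic.AtomicInteger

// Illustrative sketch, not a design. One hourly counter per external service,
// shared across every Space, so the cumulative load stays under the limit.
object ExternalCallThrottle {
  private val limitPerHour = Map("twitter" -> 100, "googlemaps" -> 500) // invented numbers
  private val counts = new ConcurrentHashMap[String, AtomicInteger]()

  /** Returns true if this Space may make the call right now; otherwise it should back off. */
  def tryAcquire(service: String): Boolean = {
    val limit   = limitPerHour.getOrElse(service, 50)
    val counter = counts.computeIfAbsent(service, _ => new AtomicInteger(0))
    counter.incrementAndGet() <= limit
  }

  /** A scheduler would call this at the top of each hour to start a fresh window. */
  def resetWindow(): Unit = counts.clear()
}
```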
> You'll want to be sure that you can handle and manage those credentials, avoid caching them unnecessarily, and guard against theft.
Oh, absolutely, and at this point I have no idea how that's likely to work. The less Querki has to hold credentials on the server, the better, but we'll see what's actually possible.
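To give a flavor of what "holding as little as possible" might mean, here's a minimal sketch -- assuming credentials get encrypted at rest and decrypted only for the duration of a single outbound call; none of these names are settled, and the key management is hand-waved:

```scala
import java.security.SecureRandom
import javax.crypto.{Cipher, KeyGenerator, SecretKey}
import javax.crypto.spec.GCMParameterSpec

// Hypothetical sketch: a user's external-service token is stored only in
// encrypted form, decrypted just long enough to make the outbound call,
// and never logged or cached in the Space itself.
object CredentialVault {
  private val random = new SecureRandom()
  // In a real system the key would come from a KMS or secure config, not be generated here.
  private val key: SecretKey = {
    val kg = KeyGenerator.getInstance("AES")
    kg.init(256)
    kg.generateKey()
  }

  /** Encrypt a token for storage; returns (iv, ciphertext). */
  def seal(token: String): (Array[Byte], Array[Byte]) = {
    val iv = new Array[Byte](12)
    random.nextBytes(iv)
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv))
    (iv, cipher.doFinal(token.getBytes("UTF-8")))
  }

  /** Decrypt just long enough to make the outbound call. */
  def open(iv: Array[Byte], ciphertext: Array[Byte]): String = {
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv))
    new String(cipher.doFinal(ciphertext), "UTF-8")
  }
}
```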
But first I'm going to worry about what folks might want, and then look into how it might work. Indeed, that's much of the motivation for this brainstorming: "API" is such a gigantic and vague concept that I can't *do* anything with it. If we can identify some relatively specific APIs that people think would be really useful, *then* I can really think about how that works technically, including what might be involved in terms of credentials, and think about how it will generalize...
(no subject)
Date: 2015-10-13 02:09 pm (UTC)

In my heart, I'm trying to imagine the possibilities of a mashup between Querki and e-commerce. The ability to dynamically manage a space, combined with the ability to buy and sell, and perhaps leavened with access to things like Facebook's Disqus - cool.
(no subject)
Date: 2015-10-13 02:21 pm (UTC)

(no subject)

Date: 2015-10-13 02:29 pm (UTC)

You'll want to let the user lead you to services and needs that he or she has, not just provide things that you anticipate. WordPress certainly has that right - a platform for writing services, where the services are themselves bought/sold/rented/traded.
You can't do "anything" there, but you can do a lot.
The trick is to design for "the next cool service" that isn't even invented yet.
(Can WordPress call into Querki, for example? When WordPress started, there was no Querki.)
(no subject)
Date: 2015-10-13 02:39 pm (UTC)

So basically, I'm trying to get even a faint sense of where we might start with this. We can't do everything, and initially we'll probably only be able to do a little, so the question is, what would be the most useful subset?
(no subject)
Date: 2015-10-16 07:20 pm (UTC)

Pushing data - a writing portfolio site using LTI/xAPI/Catalytics APIs to push to a learning object repository or analytics tool.
IFTTT and Zapier.
Aggregation and Syndication. Add something to LiveJournal, have it archived to a Querki Space and syndicated to Google+, WordPress.com, Twitter, whatnot. This is probably not something Querki should be doing, and it leads to questions about scheduling tasks.
(no subject)
Date: 2015-10-16 07:27 pm (UTC)

Yeah -- separately from this conversation, Aaron mentioned the idea of having a Space that contains the stuff he's bought from eBay, and being able to import the full info about a new acquisition simply by giving the lot number in Querki.
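Purely as illustration of that idea -- the endpoint below is made up, not eBay's real API -- the gesture would be something like "hand Querki a lot number, get the item data back":

```scala
import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

// Illustration only: the URL and its shape are invented placeholders.
object LotImport {
  private val client = HttpClient.newHttpClient()

  /** Fetch the raw JSON for one lot; a real version would parse it and fill in an Instance. */
  def fetchLot(lotNumber: String): String = {
    val request = HttpRequest
      .newBuilder(URI.create(s"https://api.example.com/lots/$lotNumber"))
      .GET()
      .build()
    client.send(request, HttpResponse.BodyHandlers.ofString()).body()
  }
}
```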
Lots of good ideas here, and I should have thought about IFTTT myself. Hadn't come across Zapier -- same basic idea, it looks like?
The plan is actually for syndication to be built into Querki -- indeed, that feature has been in the plans from the very beginning. I haven't gotten around to building it yet, but the idea of the What's New feature is that you can specify a Model within your Space, and all new Instances of the Model get formatted into RSS entries. I believe it'll be quite powerful when we get that. (You're making a broader point than that -- just mentioning that that one is specifically on the to-do list.)
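Roughly speaking -- the types and names here are stand-ins, not Querki's real internals -- the What's New idea amounts to turning each new Instance of the chosen Model into one RSS item:

```scala
import java.time.ZonedDateTime
import java.time.format.DateTimeFormatter

// Stand-in for whatever the real Instance data looks like.
case class Instance(displayName: String, url: String, created: ZonedDateTime, summary: String)

object WhatsNewFeed {
  private val rfc822 = DateTimeFormatter.RFC_1123_DATE_TIME

  /** Render the new Instances of a Model as an RSS 2.0 feed.
    * (A real version would XML-escape the text fields.) */
  def render(spaceName: String, spaceUrl: String, newInstances: Seq[Instance]): String = {
    val items = newInstances.map { inst =>
      s"""|  <item>
          |    <title>${inst.displayName}</title>
          |    <link>${inst.url}</link>
          |    <pubDate>${inst.created.format(rfc822)}</pubDate>
          |    <description>${inst.summary}</description>
          |  </item>""".stripMargin
    }.mkString("\n")

    s"""|<?xml version="1.0" encoding="UTF-8"?>
        |<rss version="2.0">
        |<channel>
        |  <title>What's New in $spaceName</title>
        |  <link>$spaceUrl</link>
        |$items
        |</channel>
        |</rss>""".stripMargin
  }
}
```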
Good feedback for me to chew on. Thanks!
(no subject)
Date: 2015-10-16 07:50 pm (UTC)

RSS is one of my principal and preferred ways of moving data into my consumption stream, so I'm glad to hear it is in the works.
(no subject)
Date: 2015-10-16 07:55 pm (UTC)

Mind, I may wind up with special modules to put these notifications directly into, e.g., the Facebook stream as well; I may have little choice, in order to get the desired behavior there. But first things first -- RSS is the *right* way to do it, so it ought to come first.