Beginning of the end for open web data APIs?

[Update: hacking the Google Search AJAX API — see below.]

[Update #2: Don Box is thinking along the same lines as I am.]

[Update #3: Rob Sayre points out that there is, in fact, a published browser-side JavaScript API underlying the AJAX widget.]

Over on O’Reilly Radar, Brady Forrest mentioned that Google is shutting down its SOAP-based search API. Another victory for REST over WS-*? Nope — Google doesn’t have a REST API to replace it. Instead, something much more important is happening, and it could be that REST, WS-*, and the whole of open web data and mash-ups all end up on the losing side.

It’s not about SOAP

Forget about the SOAP vs. REST debate for a second, since most of the world doesn’t care. Google’s search API let you send a search query to Google from your web site’s backend, get the results, then do anything you want with them: show them on your web page, mash them up with data from other sites, etc. The replacement, the Google AJAX API, forces you to hand over part of your web page to Google so that Google can display the search box and show the results the way they want (with a few token user configuration options), just as people do with Google AdSense ads or YouTube videos. Other than screen scraping, like in the bad old days, there’s no way for you to process the search results programmatically — you just have to let Google display them as a black box (so to speak) somewhere on your page.
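
To make the difference concrete, here’s a rough sketch of the two models. The endpoint, function, and option names below are hypothetical placeholders, not Google’s actual interfaces: with a data API your code receives the results and decides what to do with them; with a widget you surrender a container element and let the provider’s script render into it.

    // Hypothetical data API (placeholder URL): your code receives the results
    // as data and can render, filter, or mash them up with other sources.
    async function searchViaDataApi(query) {
      const resp = await fetch(
        "https://api.example.com/search?q=" + encodeURIComponent(query)
      );
      const results = await resp.json(); // e.g. [{ title, url, snippet }, ...]
      return results.filter(r => r.url.endsWith(".org")); // your logic, your rules
    }

    // Hypothetical widget (placeholder names): you hand over a <div> and a
    // script tag, and the provider renders the search box and results in it.
    //
    //   <div id="search-box"></div>
    //   <script src="https://widgets.example.com/search.js"></script>
    //   <script>
    //     exampleWidgets.renderSearchBox(document.getElementById("search-box"),
    //                                    { theme: "light" }); // token options only
    //   </script>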

A precedent for widgets instead of APIs

An AJAX interface like this is a great thing for a lot of users, from bloggers to small web site operators, because it allows them to add search to their sites with a few lines of JavaScript and markup and no real coding at all; however, the gate has slammed shut and the data is once again locked away outside the reach of anyone who wanted to do anything else.

Of course, there are alternatives still available, such as the Yahoo! Search API (also available in REST), but how long will they last? Yahoo! has its own restructuring coming up, and if Nelson Minar’s suggestion (via Forrest) is right — that Google is killing their search API for business rather than technical reasons — this could set a huge precedent for other companies in the new web, many of whom look to Google as a model. Most web developers will probably prefer the AJAX widgets anyway because they’re so much less work, so by switching from open APIs to AJAX widgets, you keep more users happy and keep your data more proprietary. From an investor’s or manager’s point of view, what’s not to like?

What next?

Data APIs are not going to disappear, of course. AJAX widgets don’t allow mash-ups, and some sites have user bases including many developers who rely on being able to combine data from different sources (think Craigslist). However, the fact that Google has decided there’s no value in playing in that space will matter a lot to a lot of people. If you care about open data, this would be a good time to start thinking of credible business cases for companies to (continue) offer(ing) it.

Update: Hacking the Google AJAX API (or, back to Web ’99)

The AJAX API is designed to allow interaction with JavaScript on the client browser, but not with the server; however, as Davanum Srinivas demonstrates, it’s possible to hack on the API to get programmatic access from the server backend. I’m not sure how this fits with Google’s terms of service, and obviously, they can make incompatible changes at any time to try to kill it, but at least there’s a back door for now. Thanks, Davanum.
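
For anyone curious what the back door looks like in practice, it amounts to making the same HTTP request the widget’s JavaScript makes and parsing the JSON yourself. Here’s a minimal sketch in modern JavaScript, with a made-up endpoint standing in for whatever URL the widget actually calls; see Davanum’s post for the real details.

    // Sketch only: pretend the widget fetches its results from this made-up URL.
    // The real endpoint, parameters, and response shape are whatever the widget
    // happens to use today, and Google can change them without notice.
    async function backdoorSearch(query) {
      const url =
        "https://ajax.example.com/search?v=1.0&q=" + encodeURIComponent(query);
      const resp = await fetch(url);
      if (!resp.ok) throw new Error("search failed: " + resp.status);
      const data = await resp.json();
      return data.results; // back to processing the results programmatically
    }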

Personally, I was planning to use the Yahoo (REST) search API for site search even before all this broke, because I didn’t want to waste time trying to figure out how to use SOAP in PHP. I’m glad now I didn’t waste any time on Google’s API, and I’ll just keep my fingers crossed that Yahoo’s API survives.


28 Responses to Beginning of the end for open web data APIs?

  1. Pingback: bugfox blog » Blog Archive » The End of Web 2.0?

  2. Pingback: snellspace.com » Blog Archive » Balderdash

  3. rektide says:

    hahaha. the sky is falling, indeed chicken little.

    i think the very phrase “business case” eludes the growing sense of the network that’s being built. google just took a step away from the network, just as other companies will wax and wane in this same decision to participate or not to participate in the network. these are decisions of individuals, and i’m confident that no sum of individuals will ever stop the networked access of information from being the most pertinent and defining characteristic of this era. the defining characteristic of the network is its unpredictability, its infinite recombinant forms, the shapes we can never think of. i wish the best of luck to all the fools who think they can continue to exist in stable static forms of their own choosing, not seeing, hearing or speaking with the rest of the network, not allowing themselves to be shaped by it. these are people seeking existence in a vacuum.

    as it happens, the web, as a brutally client-server architecture, is particularly poor at creating the open information spaces that the current young bloods dream deeply and wildly about. so far the web has only been a reflection of the open source computation available at any given time, and i see no reason to think that our capacity to wield information and the value of our network is determined by anything else besides this open computation.

    screen scraping is the ultimate freedom. if fools continue pressing pre-formed business cases on the network, mashups will simply revert to elaborate greasemonkey scripts that allow users to rewrite and re-edit the trash they’re being fed. screen scraping is pure, it’s the user’s action, your own manipulation of the content you’ve been handed, and entails no nasty redistribution clauses. screen scraping is the only thing decentralized about the entire so-called “web”.

  4. Pingback: Labnotes » Rounded Corners - 78

  5. Pingback: tech decentral » links for 2006-12-19

  6. jm says:

    If this is accurate, then it will be interesting to see how Google itself reacts when sites that it indexes for its own search engine business begin to “hide” (make it difficult to get at) their data behind widgets and libraries.

  7. Pingback: Rektide - Voodoo Warez Productions « BC Government 2.0

  8. Ryan Baker says:

    I would agree to a small extent with rektide that this is a rather dramatic reaction to a singular event. However, it’s saddening that it’s taken this long, or something of this nature, to enlighten the community to a certain truth that has been there all along: the web does not guarantee openness; openness is merely the standard that has been adopted.

    In truth, the desktop area is far more open because users have vastly more control over their desktops.

    It’s not exactly news to me, as I predicted about 6 months ago that interoperability on the web would start to become less pervasive (see http://ryan-technorabble.blogspot.com/2006/07/web-interoperability-and-identity.html). I doubt it’s a 180-degree turn; there’s still a lot of momentum to sustain the standard and a lot of industrious people committed to it. But I do think it will become hard to ignore that interoperability is not simply something you can take for granted, and that openness is not a feature of the web; it is a product of the community.

  9. Pingback: tecosystems / links for 2006-12-20

  10. Pingback: Ymerce » Blog Archive » Wordt het een open of gesloten web?

  11. Pingback: tijs.org » Blog Archive » Google doesn’t like API’s anymore?

  12. Pingback: kartentisch » Schnelldurchlauf: Googles APIs, Georgia, Kunst und Kartografie

  13. len says:

    First they protect the resources. Then they protect the sources for the resources.

    Ever signed an employment agreement that makes anything you invent while in their employ the IP of the employer? No? Then you never worked for a professional company of any significance.

    It is not surprising they took this long to get around to this. Dope dealers get you hooked first. It is only surprising that you are so angst-ridden about that.

    “As the twig is bent…,” David. The web changes nothing in the human equation except short-cycle behaviors. The long-cycle behaviors reemerge dominant as always. Over a longer time, you may see some subtle variations change features in certain environments, but these will come as side effects, not direct effects. Beware pleiotropy.

    len

  14. Pingback: snellspace.com » Blog Archive » Putting the Web in Web Services

  15. Pingback: C. David Gammel, High Context Consulting » Blog Archive » Google Deprecates SOAP Search API

  16. Davanum Srinivas says:

    David,

    it’s possible to programmatically access the AJAX Search API. Please see here for proof (calling from plain ol’ Java):
    http://blogs.cocoondev.org/dims/archives/004722.html

    thanks,
    dims

  17. Robert Sayre says:

    Hi Dave. The paragraph under “It’s Not About SOAP” seems completely incorrect. Google does provide readymade widgets, but the public, documented API is rather extensive underneath.

    http://www.franklinmint.fm/blog/archives/000951.html

    Also, the server-side processing argument seems to miss the point. The canonical “mashup” is a web page that pulls in JS data from Google Maps and some other source. If you want people to use Google Search results on their web pages, the wrong answer is “here is the SOAP/XML-REST/HTTP/Whatever API”. That is for the (vocal) priesthood, not web authors. Your main point seems to be that it doesn’t allow bots. *shrug*

  18. david says:

    Thanks for the JavaScript API clarification, Robert. I haven’t had time to look in detail to see if/how Google is working around the same-origin XMLHttpRequest restriction (invisible iframes?), but client-side mashups of the cute-overlay-over-Google-maps variety, while fun, have their limitations: eventually, for many genuinely useful things, you’re going to have to do data crunching on the server side.

  19. Robert Sayre says:

    Well, I think the limitation is basically the same one Google has always had: “no bots”.

    Regarding the API traffic, it looks like a variation on JSON-P. Basically, you do this:

    —————————

    /* write a callback function that will receive the JSON data */
    function myJSONCallback(aJSON) { alert(aJSON); }

    /* create a script element whose src points at the JSON-P endpoint */
    var myScript = document.createElement("script");
    myScript.src = "http://api.example.com/makeArray?elements=3&callback=myJSONCallback";

    /* append the script element to the document so the browser fetches and runs it */
    document.body.appendChild(myScript);

    —————————

    The browser sends a GET to request the script src, and this can be any host. The body of the response looks like this:

    myJSONCallback(["", "", ""]);

    The contents of the parens are a JSON literal (an array, in this example), which gets passed to the callback.

  20. Pingback: Venture Beat Contributors » The Google API kerfuffle, and what it means for start-ups

  21. Pingback: Quoderat » Yahoo stands firm behind its search API

  22. Jon says:

    Our response to this is the EvilAPI and EvilRSS.

  23. Pingback: Un peu de lecture 03 at Aurélien Pelletier’s Weblog

  24. Pingback: sockdrawer » Blog Archive » Always beta?

  25. Pingback: Ajaxian » The Business of Ajax - Google’s Ajax Search API

  26. Pingback: The Business of Ajax - Google’s Ajax Search API

  27. seo says:

    I’m really surprised there hasn’t been a bigger backlash against this. Where is the outrage?

  28. john beck says:

    Google doesn’t have a REST API to replace it. Instead, something more important is happening, and it could be that REST, WS-*, and the whole of open web data and mash-ups all end up on the losing side.

Comments are closed.