Aphro

I've been captivated by this topic and am currently in the process of building a Ruby gem to help consume HATEOAS compliant services.

I've called it Aphro for want of a better name and because I think HATEOAS isn't a very nice name.

One thing I've found in my thought experiment is that the hypermedia constraint (a better name, in my opinion, than HATEOAS) is great for discoverability, but that the discovery process comes with an obvious performance cost.

API consumers want concise API calls and wouldn't expect to browse every time from the root to find the URI and required parameters.
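To make that cost concrete, here's a minimal sketch of the discovery walk in plain Ruby with Net::HTTP and Nokogiri; the host, the link rel and the form id are all made up for illustration, not a real API.

require 'net/http'
require 'nokogiri'

# Start at the root and follow the advertised link to the tweets resource.
root = Nokogiri::HTML(Net::HTTP.get(URI('https://api.example.com/')))
tweets_uri = URI.join('https://api.example.com/', root.at_css('a[rel="tweets"]')['href'])
tweets_page = Nokogiri::HTML(Net::HTTP.get(tweets_uri))

# The form tells us where to POST and which fields are expected.
form = tweets_page.at_css('form#new_tweet')
fields = form.css('input[name]').map { |input| input['name'] }
puts "POST to #{form['action']} with fields: #{fields.join(', ')}"

That's two round trips before any data has been sent.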

Now if you re-read that complaint and imagine a human interacting with a web browser, you have all these metaphors that you can carry straight across into the API-consuming world.

"An API consumer doesn't want to make multiple requests just to find the required fields for a POST to a resource that probably hasn't changed it's URL" == "A user doesn't want to browse from the Google homepage to go to their Gmail to compose an email"

They can just save a link: https://mail.google.com/mail/#compose

Not only that, their browser could cache the page.

Well, your API consumer can do exactly the same thing. What's a good way of knowing the required fields for a POST to a resource? Save the webpage with the form on it.
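Here's a rough sketch of what that might look like, again with Net::HTTP and Nokogiri; the URI, the cache path and the status field are assumptions for illustration.

require 'net/http'
require 'nokogiri'

form_uri = URI('https://api.example.com/tweets/new')
cached_path = 'tmp/tweet_form.html'

# Discovery step: fetch and save the page containing the form.
File.write(cached_path, Net::HTTP.get(form_uri)) unless File.exist?(cached_path)

# Day-to-day step: build the POST from the cached form instead of browsing again.
form = Nokogiri::HTML(File.read(cached_path)).at_css('form')
params = form.css('input[name]').to_h { |input| [input['name'], input['value'].to_s] }
params['status'] = 'new message'

Net::HTTP.post_form(URI.join(form_uri, form['action']), params)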

More traditional APIs, with their pre-specified URLs, just don't have the discoverability part. In caching the pages you lose the guarantee that you have the correct form fields, and would need to fetch an updated version when necessary (ETags and HTTP status codes can help here too).
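As a sketch of how an ETag might keep a cached form honest, you could revalidate with a conditional GET and only re-fetch when the response isn't a 304; the URI and file paths here are placeholders.

require 'net/http'

form_uri = URI('https://api.example.com/tweets/new')
cached_etag = File.read('tmp/tweet_form.etag') rescue nil

request = Net::HTTP::Get.new(form_uri)
request['If-None-Match'] = cached_etag if cached_etag

response = Net::HTTP.start(form_uri.host, form_uri.port, use_ssl: true) { |http| http.request(request) }

case response
when Net::HTTPNotModified  # 304: the cached form fields are still valid
  puts 'cached form is still good'
when Net::HTTPOK           # 200: refresh the cached page and its ETag
  File.write('tmp/tweet_form.html', response.body)
  File.write('tmp/tweet_form.etag', response['ETag'].to_s)
end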

I'm working on being able to cache API calls (read: save webpages), so that you can have API interactions that do the discovery part (useful for initial development, and for monitoring jobs that check your producer still works as you'd expect), but then also have simpler, closer-to-traditional API calls that skip the discovery part.

So once the cached calls are saved, what would it mean for your client? You could do something like this:

# Discovery section, also used in monitoring jobs
twitter = Aphro.client 'api.twitter.com'
twitter.sign_in username, password
twitter.tweet 'new message'
twitter.cache_directory = 'tmp/cached_twitter_api/'

tweet_api = twitter.cache_page :tweet
twitter.save_session :twitter

tweet_api.tweet

# _____ New session - day-to-day client code
twitter = Aphro.open_session :twitter
twitter.tweet 'my tweet using the saved session'

What else could you do?

I'm also working on auto-generating consuming client code based on browser interactions. I'm interested in using the browser as documentation and as the API; it could also be used as a way to dynamically generate client code. I think Selenium IDE might be well positioned as a means of recording interactions and generating client code.
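To give a feel for what a recorded interaction might translate into, here's a sketch using the selenium-webdriver gem; the URL and the field names are made up.

require 'selenium-webdriver'

driver = Selenium::WebDriver.for :firefox
driver.navigate.to 'https://api.example.com/tweets/new'

# Each recorded step maps onto a line of generated client code:
# fill in the form field, then submit the form.
driver.find_element(name: 'status').send_keys 'new message'
driver.find_element(css: 'input[type="submit"]').click

driver.quit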

Do we really have to use bulky XHTML?

I think that with content negotiation and a clear, universal, standard way of expressing forms in JSON, there's no reason we can't have all this goodness with JSON too. I've not come across any clear standards just yet, so I'm beginning to think that just writing good clients to consume forms and links might be the answer.

In order for the idea to really take off, the tools need to be in place and it needs to just be easy to use.

There either needs to be an easy way to output forms in a standard way in JSON, or a really good client to simplify dealing with forms from, say, JavaScript.
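To show what I mean, here's a purely made-up shape for a form expressed in JSON, plus a tiny Ruby client that consumes it the same way it would consume an XHTML form; none of this is a standard, just an illustration.

require 'json'
require 'net/http'

form_json = <<~JSON
  {
    "form": {
      "name":   "tweet",
      "method": "POST",
      "action": "https://api.example.com/tweets",
      "fields": [{ "name": "status", "required": true }]
    }
  }
JSON

form = JSON.parse(form_json)['form']
params = { 'status' => 'new message' }

# The client only trusts what the form advertises: action, method and fields.
missing = form['fields'].select { |f| f['required'] }.map { |f| f['name'] } - params.keys
raise "missing fields: #{missing.join(', ')}" unless missing.empty?

Net::HTTP.post_form(URI(form['action']), params)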

My main reason for preferring JSON over XML is that it is so much easier for a human being to read. The fact that there are fewer characters and that it is more efficient isn't my primary motivation as a Rubyist; it's that it's easier for me, the coder, to read.

Perhaps if I worked at Google, where the cost of running the infrastructure and servers is much higher than the cost of the developers, it might make sense to optimise for the machine. But the main reason Ruby is useful to businesses is that it can reduce the cost of development by simplifying things for the human, and a lot of the time development cost is a concern for businesses.

So parsing XHTML as an overhead for the machine? I don't think that's so bad in a lot of circumstances. What do you get for the trade-off?

The app is the API, which is the documentation, which is the basis for the monitoring jobs that your API consumers run, which is the client code that your consumers run.

Still, I think a nice JSON output format (with forms) could help win people over, even if I do find web pages easier to read than JSON.