Mobile web app or web site?

If you’re creating an HTML5 mobile web application, this is the minimum you need to give your users what Google calls a “homescreen launch experience” on an iOS or Android mobile device:

<!doctype html>
<html>
 <head>
  <title>Awesome app</title>
  <meta name="viewport" content="width=device-width">
  <meta name="mobile-web-app-capable" content="yes">
  <link rel="shortcut icon" sizes="196x196" href="/icon.png">
 </head>
 <body></body>
</html>
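
Safari on iOS looks for its own Apple-prefixed versions of the last two elements, so if iOS users matter to you, it's worth adding something like the following to the same head (a sketch; the icon path is just an example):

  <meta name="apple-mobile-web-app-capable" content="yes">
  <link rel="apple-touch-icon" href="/icon.png">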

The “shortcut icon” link element is straightforward: it simply provides a pretty, app-style icon to save to the iOS homescreen or Android launcher. The “mobile-web-app-capable” meta element, on the other hand, cuts right to the core of what you’re doing: are you creating a web app, or a web site?

I’m a web app

Here’s what Chrome for Android gives me if I save a web app to the homescreen (launcher), then tap on that icon:

  • The app gets its own entry in the task switcher.
  • All the browser chrome is hidden, and there’s no way to add a new tab, create a bookmark, change options, etc.
  • If the user follows a link outside my domain, Chrome will animate a normal browser window launching, to make it clear that they’ve left the app.

I’m a web site

If I don’t include “mobile-web-app-capable,” then Chrome for Android will still let me create an icon on the homescreen, but when I tap on it, I’ll see my regular browser launch, with all its chrome, tabs, bookmarking, etc.

So which am I?

A web app is a loner. It has several advantages over a native mobile app (cross-OS/-device compatibility, avoiding app-store bottlenecks and censorship, a more-flexible security model, linkability, etc.) and several disadvantages (can’t handle an Android Intent, weaker notification support), but its plan is clearly to stand on its own, like a native mobile app. A web app is designed around actions: with a clever combination of HTML5, CSS, JavaScript, and newer stuff like AppCache and localStorage, you could create (e.g.) a Twitter web app that was very similar to the Twitter native app (with important caveats). The web app designer doesn’t want her app open in one browser tab with a different site open in another, while the user switches back and forth. Online video games, interactive editing/design tools, and anything that is supposed to feel a bit like a MacOS or Windows program installed from a CD-ROM 15 years ago are all likely apps.
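
To make the AppCache/localStorage part of that concrete, here is a minimal sketch (the file names and data are hypothetical) of the offline plumbing such an app might use: an AppCache manifest so the browser keeps the markup, code, and styles on the device, and localStorage so the most recent data survives between launches. The manifest, app.appcache, is just a plain-text list of the files to keep:

CACHE MANIFEST
# v1 (change this comment to force clients to re-download)
index.html
app.js
app.css

The page then points at the manifest and squirrels its data away:

<!doctype html>
<html manifest="app.appcache">
 <head>
  <title>Awesome app</title>
  <script>
   // Keep the last data we saw, so the app still has something to show offline.
   var timeline = [{user: "example", text: "hello"}]; // placeholder for fetched data
   localStorage.setItem("timeline", JSON.stringify(timeline));
   // On the next (possibly offline) launch, read it back.
   var cached = JSON.parse(localStorage.getItem("timeline") || "[]");
  </script>
 </head>
 <body></body>
</html>

In real use, the manifest has to be served with the text/cache-manifest MIME type, and AppCache has enough sharp edges to account for some of those important caveats.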

A web site is a social creature. It’s orthogonal to a native mobile app, because it inhabits a different universe. A web site is RESTful, designed around pages (more generally, “resources”), most of which contain either information a user wants to see, with links for navigating to related information, or a means to create/change/delete that information (there’s also typically a search feature). Unlike web apps, web sites embrace the browser chrome, which provides an important part of the user experience: because the browser lets a user search for text within a page or bookmark any page, the web site itself doesn’t have to provide that support; similarly, the browser lets a user open multiple views into the same site in different tabs by default (so that a user can view two pages at the same time), while a web app has to build that support in itself. Wikipedia, blogs, tutorials, newspapers and magazines, most online discussion forums, and my own OurAirports are all examples of things that are most naturally web sites.

Last thoughts

I’m starting the process of making OurAirports responsive and otherwise mobile-friendly, and one of the first things I tried was adding “mobile-web-app-capable” and then saving the icon to my Android homescreen. It worked exactly as advertised: it was cool having the site open up as if it were a phone app, appear in the task switcher, etc., but it took me only minutes to realise that I’d crippled it. A user launching OurAirports that way could no longer bookmark a specific airport page to come back to it later, could no longer open two airports in different tabs, and could no longer search for text in a long comment stream. I removed the meta element and let the browser chrome come back, because really, that browser chrome is a lot of what makes a web site useful.

So when you’re designing your next web thingy, ask yourself a few questions:

  • Is it valuable for users to be able to search for text within a page, bookmark pages, and open multiple tabs (site)?
  • Is it mainly about manipulating an external resource, such as a picture or diagram (app)?
  • Does it have lots of pages with relatively stable URLs (site)?
  • Can it potentially provide a reasonably-complete offline experience using only a small number of files containing markup, code, and data (app)?
  • Is it designed as part of a bigger web, linking out and receiving inbound links (site)?
  • Would browser chrome distract rather than help the user (app)?
  • Is it something that Google should index and make searchable (site)?

Over the past few years, I think, too many people have been taking things that are more naturally sites (e.g. newspapers) and trying to force them to behave like apps by bolting on AJAX, hashbang URL syntax, etc.: the result is typically impressive and flashy, but a crappy user experience. On the other hand, the greatest advance in the Web over the past ~10 years has been its growing ability to support activities, like creating spreadsheets or editing photos, that really don’t fit the REST/web-site model. So by all means, if you have an app, let it be an app, but don’t try to force everything into app world; REST is still the web’s heartbeat.

Posted in Design, Mobile, REST | Leave a comment

How to install a usable Emacs in Android (Feb 2014)

Update: these instructions no longer work for Android L (Lollipop). For a first pass at installing GNU Emacs in Lollipop, see this update.

Since I mentioned in a review that I’m using Emacs on my Android Nexus 7 tablet, I’ve received several requests for information about how I set it up. There is a prepackaged Emacs app in the Google Play store, but it does not work as distributed, because it bundles a broken terminal application. The Emacs binary itself is fine, though, and with a bit of manual work, you can install it to run inside a different terminal application (I use Terminal IDE below).

Note: Emacs needs a full-featured PC-style keyboard. I usually run Emacs on my tablet with an external Bluetooth keyboard; if you want to use a soft keyboard, consider installing the Hacker’s Keyboard, which has all of the modifier keys Emacs expects.

Request: Would anyone be willing to rebundle Emacs in an easily-installable form for Terminal IDE, and make this blog posting obsolete?

Installing Emacs for Terminal IDE

  1. Install the Terminal IDE app on your Android device.

  2. In your Android browser, go to http://emacs.zielm.com/data/ and download the files emacs.lzma, etc.tlzma, and lisp.tlzma, then copy them to your Terminal IDE home directory.

  3. Launch a shell in Terminal IDE and run the following commands:

    $ unlzma emacs.lzma
    $ chmod 755 emacs
  4. At this point, you can already try running emacs by typing

    $ ./emacs

    (though it will fail, because it can’t yet find its etc directory). If you want, put the binary somewhere on your executable path, as you would in any Linux installation.

  5. Create a directory /sdcard/emacs/ (or make it somewhere different, if you’re willing to set environment variables to tell Emacs where to look; see the sketch after this list).

  6. Copy the downloaded files etc.tlzma and lisp.tlzma to /sdcard/emacs/etc.tar.lzma and /sdcard/emacs/lisp.tar.lzma.

  7. Change to the /sdcard/emacs/ directory and run the following commands:

    $ unlzma etc.tar.lzma
    $ tar xvf etc.tar
    $ unlzma lisp.tar.lzma
    $ tar xvf lisp.tar
  8. Test that emacs starts up OK now (run from wherever you installed the binary):

    $ ~/emacs
  9. If all is well, you can optionally delete the tar files to save space:

    $ rm /sdcard/emacs/etc.tar /sdcard/emacs/lisp.tar
  10. Enjoy Emacs in Android! You might consider doing a C-u 0 M-x byte-recompile-directory on /sdcard/emacs/lisp/ (and any other lisp directories) to make sure the byte-compiled files are up to date.
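
If you chose a different directory in step 5, you can point Emacs at it with its standard environment variables instead. A minimal sketch, assuming (hypothetically) that you unpacked etc/ and lisp/ under /data/local/emacs:

    $ export EMACSDATA=/data/local/emacs/etc
    $ export EMACSLOADPATH=/data/local/emacs/lisp
    $ export EMACSDOC=/data/local/emacs/etc
    $ ./emacs

EMACSDATA and EMACSDOC tell Emacs where to find the files normally kept in etc/, and EMACSLOADPATH is a colon-separated list of Lisp directories.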

Posted in General | 30 Comments

Why did I never get into Evernote?

I just noticed that the Evernote app updated itself on my phone, and that got me thinking about how I just don’t use it much. I made a serious effort a year or two ago, in the middle of my last big project, dutifully taking pictures of whiteboards, making notes during meetings, keeping TODO checklists, etc.; however, I almost never went back and reread the notes, reviewed the snapshots, or worked through the TODO lists, and somehow, the project still chugged along just fine.

The surprising part (for me) is that Evernote is exactly what I would have designed if you’d asked me to design a note-taking/idea-organising app. From my POV, the app did everything right:

  • Search works
  • You can group notes by subject or by tags
  • You can share notes
  • The smartphone app works offline and syncs when you’re back online
  • It organises media like pictures the same way it organises text notes
  • The UI is simple and uncluttered (at least, it was back when I was using it)

Maybe the problem isn’t Evernote, but the idea of taking notes at meetings. I have piles of old engineering notebooks from past consulting gigs, and I doubt I ever went back and reviewed 0.5% of them after I wrote them.

I guess this is a good lesson for information architecture and tech design. I started with a process that didn’t work for me (paper-based note taking) and assumed that if I added some tech pixie dust to it — web apps, search, tags, smartphone app, etc. — it would suddenly change and become functional. That didn’t happen, because it almost never happens, and it’s not Evernote’s fault.

Posted in General | 5 Comments

Can open data route around damage?

Today, the US government’s data.gov temporarily went dark, and along with it, what is likely the world’s most important collection of open data sets:

The US government’s data.gov home page on 1 October 2013.

You are welcome to use this as a chance to rail against the juvenile hijinks in the US Congress, but I think there’s a far more important lesson: if you depend on any centralised data source, even one run by the world’s richest and most-powerful government, it can fail and leave you cut off.

Nuclear bombs and censorship

There is a proverbial story that ARPAnet, which later grew into the Internet, was designed to route around failed nodes so that it could keep functioning after a nuclear attack. Even if that story is not strictly true (the design had more to do with the unreliable networking hardware of the time), the actual networking layers of the Internet are highly failure-tolerant.

Information-freedom activist John Gilmore took that ARPAnet creation myth a step further, and argued that

The Net interprets censorship as damage and routes around it.

Just as the Internet (as a network) can route around damage, “The Net” (as a culture) can route around censorship using Internet-the-network as a tool. History has proven Mr. Gilmore right: the entertainment industry, for example, has entirely failed to control and restrict the distribution of movies and music online, and the US government — which could reduce dozens of other countries to ash with the push of a button — could do nothing to prevent the spread of the (unauthorized) WikiLeaks data.

Replication, not routing

The Internet consists of a large collection of specifications and standards that define and enable its ability to route around damage; there’s no similar set of standards for getting information around censorship barriers (whether related to intellectual property or restriction of basic human rights). So how does it work? Why can’t the music industry, for example, take a song offline once people have started sharing it? How does “The Net” route so-called “pirated” content around huge, angry corporations spending millions of dollars hiring lawyers and lobbying legislators?

The trick with content seems not to be routing, but replication. To survive online, a piece of information simply has to be copied faster than its opponents can take it offline. Because it’s possible to make perfect, lossless copies of digital content, it becomes irrelevant whether a copy is first generation or 10th generation. For example, if five people make copies of content, then five more people make copies of each of those copies, etc., by the 10th generation you have 5¹⁰ — or nearly 10 million — perfect copies spread out around the world, and with extremely-popular content, that process can take place in minutes.

Could we rebuild data.gov?

It’s likely that most Americans won’t suffer any real harm from today’s shutdown of data.gov: open data is still in its infancy. However, if we in the open-data community realize our hopes and succeed at making open data a critical part of how the world works, then the next shutdown could be far more harmful. Companies that rely on open data might have to close their doors and furlough employees; emergency responders in the field might have trouble helping victims of a flood or earthquake; maps or navigation systems might stop working; and so on. The more-successful open data becomes, the higher the cost of having it fail.

It would not be a complete disaster, however. A lot of the open data on data.gov exists in copies elsewhere, and if the site were to disappear, we could probably find copies of individual datasets on hard drives scattered around the world, and reproduce most of the data that was on it on 30 September. It would take time, and we wouldn’t know if the data was corrupt or fraudulent, but in most cases, it would probably be OK. As the world moved further and further beyond 30 September 2013, we’d also have to figure out how to get new data from the departments, offices, and organisations who had previously centralised their datasets in data.gov.

Learning from the pirates …

How can we make this recovery process easier? Let’s imitate the people who have already solved this problem: the so-called content “pirates.” We expect centralised open-data sites like data.gov to be available all of the time; the pirates expect their sources to vanish at any moment. We expect data providers to help and encourage us to use their data; the pirates expect legal action trying to shut them down. We get funding; they get fines or even sometimes go to jail. Yet they flourish, while we’re vulnerable to any government’s or organisation’s internal financial squabbling.

The answer is to copy, copy, copy, and copy. Make copies of all the open data you can find, share your copies with as many people as you can, and keep the copies somewhere safe, just as you would with MP3s of your favourite artist. Open-data sites that discourage bulk downloading need to rethink their priorities, but if they don’t, find a way around any barriers that they throw up. We need 1,000 sites providing the data.gov data, spread around the world, some publicly-funded, and some private. In a sense, a litigious recording company and a government-funded open-data site present exactly the same risks to their users, and we have to learn not to trust the availability of any single site.

… but sailing under true colours

But still, we’re not pirates. Unlike content piracy, open-data sharing can stay out in the daylight. Ministers and heads of state support us, international organisations and foundations fund us, and the media praise us. That means that we have the opportunity to get together and come up with real standards or specs for keeping open data available, just as the ARPAnet founders did for network resiliency. These processes are hard, they will take time, and most ideas will go nowhere, but eventually, we could come up with something as useful as the collection of Internet standards and less-formal, ad-hoc practices that allow you to get to this blog even around a broken router.

Working in the open also allows us to address issues of trust that are difficult to deal with in the piracy world. If you download an unauthorised copy of Microsoft Word, for example, how do you know that it’s authentic? Is it going to introduce malicious software onto your computer? If you download an unauthorised Disney movie, how do you know that it won’t suddenly flash a Goatse on the screen at minute 51?

In the open, we can talk about how to sign digital content and build a web of trust, so that you can rely on a US government dataset even if you loaded it from a Russian web site. We can talk about standardizing how open-data sites notify other systems about new or updated datasets (e.g. using RSS or Atom), so that sites can easily and automatically mirror one another. And we can talk about discarding — in our field — broken concepts like the Creative Commons attribution licenses, which actually discourage sharing and using open data.
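
To make the signing idea concrete, here is a minimal sketch using ordinary GnuPG, with a hypothetical dataset file named airports.csv: the publisher creates a detached signature, and any user or mirror can check it.

    $ gpg --armor --detach-sign airports.csv
    $ gpg --verify airports.csv.asc airports.csv

Because the signature travels alongside the data, it doesn’t matter which mirror you fetched the file from, as long as you trust the publisher’s public key.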

If we get this stuff right, we’ll be ready the next time data.gov goes down, when open data really matters to the world. And maybe we’ll see the pirates starting to imitate us.

Posted in General | 3 Comments

A technology architect’s disclaimer

I’m working on a large application architecture for an international organisation right now, and am including this disclaimer in the introduction:

1.1. Limitations

Like all technology-related architectural designs, this is a vision of how XXX could work, not a blueprint for how XXX will work — we will change much or all of this architecture during the implementation, sometimes in minor ways and sometimes in major ways, as we discover more about the problem space, change our priorities, and work around the strengths and limitations of the people and technologies involved. The architecture in this report points the way for the implementation team at the beginning, and captures our initial understanding and goals. The XXX team will endeavour to keep the specification up to date as the actual architecture emerges during implementation.

Further reading

Neal Ford, “Evolutionary architecture and emergent design: Investigating architecture and design.” IBM developerWorks, 24 February 2009.

Martin Fowler, “Who Needs an Architect?” IEEE Software, July/August 2003.

Eric S. Raymond, “The Cathedral and the Bazaar.” Version 3.0, 2000.

Posted in General | 1 Comment

The diffusion of innovations, or of products?

Rogers Bell Curve

If you work anywhere near IT, you’ve probably been confronted with Everett Rogers’ Bell Curve on more than one eager PowerPoint slide deck, explaining how the presenter’s new standard, product, or initiative will move through the five audiences of Innovators, Early Adopters, Early Majority, Late Majority, and Laggards. There’s even a book, Geoffrey Moore’s 1991 Crossing the Chasm: Marketing and Selling High-Tech Products to Mainstream Customers, about how to move your product from the Early Adopter to the Early Majority (mainstream market) stage. After a (very) small amount of background reading, though, I’m starting to wonder if people are misusing Mr. Rogers’ curve, confusing the ideas of “innovation” and “product.”

You can’t invest in an idea

In 1962, Prof. Rogers, along with two other original collaborators, was studying how new practices spread among farmers in the US Midwest, not how, say, a new model of harvester found its market. In other words, they were studying what people do, not what people buy.

Does it matter? Consider the 1995 Netscape IPO, which started the dot-com bubble. The World Wide Web was a genuine innovation — a new way of doing things (building on existing technology) — and by 1995, it was clear that the Web was going to be important. However, people confused Netscape’s browser (a specific product) with the Web itself (an innovation), and assumed that by investing in the company that made the browser, they were investing in the Web. As it turned out, the two were quite separable: Netscape’s browser tanked, while the idea of the Web grew stronger and stronger. Later, investors would pump up Apple’s and Facebook’s stock in a similar way, assuming that by doing so, they were investing in the innovations of mobile computing and social networking. Apple in particular has been able to deliver revenue to justify some of its stock price, but as it loses more and more of the smartphone and tablet market to Android manufacturers, the difference between innovation (mobile computing) and product (iPhone/iPad) becomes very clear.

Despite all that, most of the time I see people using Rogers’ curve, they’re using it for a specific product, project, or initiative, not for new ideas or practices. Does it make sense in that context, or are we being as lazy and confused as the 1995 Netscape investors? Rogers based his 1962 book on over 500 studies of how ideas spread; it would be interesting to see if there’s similar research to back the claims of over-eager sales VPs and project managers.

Posted in General | 2 Comments

Open source’s new frontier

Today, I explored the bleeding edge of open source and mobile, to see what might be normal in 5+ years.

The open-source Android application Andor’s Trail is a top-down role-playing game, sort of like Colossal Cave or Zork with pictures (for my fellow old-timers). I tried it out on my Nexus 7 tablet and enjoyed it — the story line isn’t finished yet, but over two years, the community has built a strong and engaging game. But that’s not the point of this post (we already know how open source works).

This morning, I was in a coffee shop with just my phone, and did the following:

  • Found Andor’s Trail on GitHub
  • Cloned it using AIDE (an Android-based IDE)
  • Tapped “Open Project”
  • Tapped “Run”
  • Opened the compiled app

Building open-source apps never used to be this easy. In Linux, you generally install a long list of library dependencies first, then run some kind of configuration, then do a make, figure out what went wrong or what’s missing, do another make, go on a mailing list to find out what went wrong, download a different version of some of the build tools or libraries, try again, etc., until eventually (maybe a few hours or days later) you raise your arms in triumph and shout that you’ve succeeded in building the app. After that, you’ve earned the right to be smug on mailing lists and tell the noobs to RTFM when they have the same problems you did.

Is this a glimpse at the future of open source? Cloning, building, and installing a dev build (7.1dev) was so easy that a non-developer could do it; on the difficulty scale, I’d say that it’s easier than figuring out how to upload a video or picture to Facebook.

Right now — despite the fact that Android itself is released under the Apache 2.0 license — open source in the mobile world is still in its infancy. We do have the F-Droid app store for Android, stocking only open-source apps. However, what if mobile itself became the predominant open-source development platform, with new types of tools, collaboration, and social cultures? Keep your eyes on this one.

Posted in General | Leave a comment