Here’s what I did:
Create a DNS entry for dev.ourairports.com pointing to 192.168.0.5 (an address on my home LAN).
In my laptop’s /etc/hosts file, hard-code dev.ourairports.com to point to a 127.0.0.* IP address.
With these steps, the domain dev.ourairports.com will always work from my laptop, wherever it’s connected, and it will work from other devices when I am on my home WiFi (anywhere else, it will probably bring up something strange, like a router or printer login page).
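For illustration only, the hard-coded /etc/hosts entry might look something like this (the exact 127.0.0.* address is an arbitrary choice here):
127.0.0.2    dev.ourairports.com
Any address in the 127.0.0.0/8 block loops back to the local machine, so the name resolves even when the laptop is offline.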
I’ve played around with Docker a fair bit, but I’m still happy just running a bunch of VHOSTs for different webdev projects, and so far, it hasn’t caused me any grief.
Does anyone have better suggestions? Any gotchas I’ve missed?
Time: 1980s
Place: TV
Time: 2010s
Place: the Web
(To be continued …)
Update from Dustin DeWeese: “A working Emacs 25 package is available in Termux: https://termux.com
$ apt install emacs” — this is probably a better solution than the outdated distro below.
In February 2014, I posted instructions for installing GNU Emacs in older versions of Android. These instructions no longer work in Android L (Lollipop), because of a new requirement that all executables be compiled as position-independent executables (PIE). I have managed to create a new binary from Michał Zieliński’s admittedly now out-of-date patched Emacs version, and have made it available on GitHub.
Since Terminal IDE also fails to run under Lollipop (lacking PIE), the instructions for installation are now somewhat different.
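(Aside: if you want to check whether a particular binary was built as PIE, you can inspect its ELF header on a desktop Linux machine; this is a generic illustration, not part of the install steps.)
readelf -h emacs.bin | grep 'Type:'
A PIE binary reports its type as DYN, while an old-style executable reports EXEC.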
You will need a directory under /data/, where you can make a file executable and run it (you cannot normally make a file executable under /sdcard/).
Create a directory /sdcard/emacs/ to hold your non-executable files, such as eLISP files (you may choose a different location, but you’ll need to set some extra environment variables so that Emacs can find your files; that’s outside the scope of this posting).
Download the emacs.bin binary and copy it somewhere you can run it (your terminal emulator’s “home” directory if not rooted, or somewhere like /data/local/bin/ if rooted), then make it executable:
chmod 755 emacs.bin
Unpack the etc.tlzma and lisp.tlzma archives into the /sdcard/emacs/ directory — you may need BusyBox installed to get the unlzma and tar utilities required for unpacking, e.g.
unlzma -c lisp.tlzma | tar xvf -
Create a small wrapper script (called emacs here) that sets HOME and TERMINFO and launches the binary:
#!/system/bin/sh
HOME=/sdcard TERMINFO=/sdcard/emacs/terminfo /data/local/bin/emacs.bin "$@"
If your chosen terminal emulator already has TERMINFO installed, then don’t set it in the script. You may choose any home directory you want, but it must be writeable (Emacs will want to create a $HOME/.emacs.d/ directory to save settings).
Make the wrapper script executable:
chmod 755 emacs
Note that the dates on the *.elc files end up earlier than the dates on the *.el files after running Michał’s build scripts; you should probably fix that with something like this (otherwise Emacs will warn that the source files are newer than their byte-compiled versions):
find /sdcard/emacs/lisp -name '*.elc' -print0 | xargs -0 touch
Any volunteers to package all this up neatly, perhaps with some terminal emulator, for distribution in the Google Play Store?
(From The Far Side, 1980s IIRC.)
In my 16 years working with data standards, I’ve found that standards almost always ask for too much and end up getting little or nothing. If we asked for less, might we get more? Do data standards have to be tightly-managed and dirigiste, or could we learn from the success of hashtags and other simple, collaborative approaches?
The blog post linked above describes the approach we’re taking in the multi-agency Humanitarian Exchange Language (HXL) initiative to help improve data-sharing during humanitarian crises — please take a look and let us know what you think. You can also visit our HXL Showcase site to see interactive examples of how you can analyse and visualise examples of real humanitarian datasets with HXL tags added (the public-domain source code is available on GitHub).
Since I mentioned in a review that I’m using Emacs on my Android Nexus 7 tablet, I’ve received several requests for information about how I set it up. There is a prepackaged Emacs app in the Google Play store, but it does not work as distributed, because it bundles a broken terminal application. The Emacs binary itself is fine, though, and with a bit of manual work, you can install it to run inside a different terminal application (I use Terminal IDE below).
Note: Emacs needs a full-featured PC-style keyboard. I usually run Emacs on my tablet with an external Bluetooth keyboard; if you want to use a soft keyboard, consider installing the Hacker’s Keyboard, which has all of the modifier keys Emacs expects.
Request: Would anyone be willing to rebundle Emacs in an easily-installable form for Terminal IDE, and make this blog posting obsolete?
Install the Terminal IDE app on your Android device.
In your Android browser, go to http://emacs.zielm.com/data/ and download the files emacs.lzma, etc.tlzma, and lisp.tlzma, then copy them to your Terminal IDE home directory.
Launch a shell in Terminal IDE and run the following commands:
$ unlzma emacs.lzma
$ chmod 755 emacs
At this point, you should already be able to run emacs by typing
$ ./emacs
(though it will fail because it can’t find its etc directory). If you want, just put it somewhere on your executable path, like you would in any Linux installation.
Create a directory /sdcard/emacs/ (or make it somewhere different, if you’re willing to set environment variables to tell Emacs to look somewhere else).
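For example, from a Terminal IDE shell (any shell that can write to /sdcard/ will do):
$ mkdir /sdcard/emacs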
Copy the downloaded files etc.tlzma and lisp.tlzma to /sdcard/emacs/etc.tar.lzma and /sdcard/emacs/lisp.tar.lzma.
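For example, assuming the downloaded files are still sitting in your Terminal IDE home directory:
$ cp etc.tlzma /sdcard/emacs/etc.tar.lzma
$ cp lisp.tlzma /sdcard/emacs/lisp.tar.lzma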
Change to the /sdcard/emacs/ directory and run the following commands:
$ unlzma etc.tar.lzma
$ tar xvf etc.tar
$ unlzma lisp.tar.lzma
$ tar xvf lisp.tar
Test that emacs starts up OK now (run from wherever you installed the binary):
$ ~/emacs
If all is well, you can optionally delete the tar files to save space:
$ rm /sdcard/emacs/etc.tar /sdcard/emacs/lisp.tar
Enjoy Emacs in Android! You might consider doing a C-u 0 M-x byte-recompile-directory on /sdcard/emacs/lisp/ (and any other lisp directories) to make sure your byte-compiled files are up to date.
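If you’d rather do that non-interactively, a batch invocation along these lines should also work (untested on this particular build; run it from wherever you installed the binary):
$ ~/emacs -batch -eval '(byte-recompile-directory "/sdcard/emacs/lisp/" 0)'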
The surprising part (for me) is that Evernote is exactly what I would have designed if you’d asked me to design a note-taking/idea-organising app. From my POV, the app did everything right:
Maybe the problem isn’t Evernote, but the idea of taking notes at meetings. I have piles of old engineering notebooks from past consulting gigs, and I doubt I ever went back and reviewed 0.5% of them after I wrote them.
I guess this is a good lesson for information architecture and tech design. I started with a process that didn’t work for me (paper-based note taking) and assumed that if I added some tech pixie dust to it — web apps, search, tags, smartphone app, etc. — it would suddenly change and become functional. That didn’t happen, because it almost never happens, and it’s not Evernote’s fault.
You are welcome to use this as a chance to rail against the juvenile hijinks in the US Congress, but I think there’s a far more important lesson: if you depend on any centralised data source, even one run by the world’s richest and most-powerful government, it can fail and leave you cut off.
There is a proverbial story that ARPAnet, which later grew into the Internet, was designed to route around failed nodes so that it could keep functioning after a nuclear attack. Even if that story is not strictly true (the design had more to do with the unreliable networking hardware of the time), the actual networking layers of the Internet are highly failure-tolerant.
Information-freedom activist John Gilmore took that ARPAnet creation myth a step further, and argued that
The Net interprets censorship as damage and routes around it.
Just as the Internet (as a network) can route around damage, “The Net” (as a culture) can route around censorship using Internet-the-network as a tool. History has proven Mr. Gilmore right: the entertainment industry, for example, has entirely failed to control and restrict the distribution of movies and music online, and the US government — which could reduce dozens of other countries to ash with the push of a button — could do nothing to prevent the spread of the (unauthorized) WikiLeaks data.
The Internet consists of a large collection of specifications and standards that define and enable its ability to route around damage; there’s no similar set of standards for getting information around censorship barriers (whether related to intellectual property or restriction of basic human rights). So how does it work? Why can’t the music industry, for example, take a song offline once people have started sharing it? How does “The Net” route so-called “pirated” content around huge, angry corporations spending millions of dollars hiring lawyers and lobbying legislators?
The trick with content seems not to be routing, but replication. To survive online, a piece of information simply has to be copied faster than its opponents can take it offline. Because it’s possible to make perfect, lossless copies of digital content, it becomes irrelevant whether a copy is first generation or 10th generation. For example, if five people make copies of content, then five more people make copies of each of those copies, etc., by the 10th generation you have 5^10 — or nearly 10 million — perfect copies spread out around the world, and with extremely-popular content, that process can take place in minutes.
It’s likely that most Americans won’t suffer any real harm from today’s shutdown of data.gov: open data is still in its infancy. However, if we in the open-data community realize our hopes and succeed at making open data a critical part of how the world works, then the next shutdown could be far more harmful. Companies that rely on open data might have to close their doors and furlough employees; emergency responders in the field might have trouble helping victims of a flood or earthquake; maps or navigation systems might stop working; and so on. The more-successful open data becomes, the higher the cost of having it fail.
It would not be a complete disaster, however. A lot of the open data on data.gov exists in copies elsewhere, and if the site were to disappear, we could probably find copies of individual datasets on hard drives scattered around the world, and reproduce most of the data that was on it on 30 September. It would take time, and we wouldn’t know if the data was corrupt or fraudulent, but in most cases, it would probably be OK. As the world moved further and further beyond 30 September 2013, we’d also have to figure out how to get new data from the departments, offices, and organisations who had previously centralised their datasets in data.gov.
How can we make this recovery process easier? Let’s imitate the people who have already solved this problem: the so-called content “pirates.” We expect centralized open-data sites like data.gov to be available all of the time; the pirates expect their sources to vanish at any moment. We expect data providers to help and encourage us to use their data; the pirates expect legal action trying to shut them down. We get funding; they get fines or even sometimes go to jail. Yet they flourish, while we’re vulnerable to any government’s or organisation’s internal financial squabbling.
The answer is to copy, copy, copy, and copy. Make copies of all the open data you can find, share your copies with as many people as you can, and keep the copies somewhere safe, just as you would with MP3s of your favourite artist. Open-data sites that discourage bulk downloading need to rethink their priorities, but if they don’t, find a way around any barriers that they throw up. We need 1,000 sites providing the data.gov data, spread around the world, some publicly-funded, and some private. In a sense, a litigious recording company and a government-funded open-data site present exactly the same risks to their users, and we have to learn not to trust the availability of any single site.
But still, we’re not pirates. Unlike content piracy, open-data sharing can stay out in the daylight. Ministers and heads of state support us, international organisations and foundations fund us, and the media praise us. That means that we have the opportunity to get together and come up with real standards or specs for keeping open data available, just as the ARPANet founders did for network resiliency. These processes are hard, they will take time, and most ideas will go nowhere, but eventually, we could come up with something as useful as the collection of Internet standards and less-formal, ad-hoc conventions that allow you to get to this blog even around a broken router.
Working in the open also allows us to address issues of trust that are difficult to deal with in the piracy world. If you download an unauthorised copy of Microsoft Word, for example, how do you know that it’s authentic? Is it going to introduce malicious software onto your computer? If you download an unauthorized Disney movie, how do you know that it won’t suddenly flash a Goatse on the screen at minute 51?
In the open, we can talk about how to sign digital content and build a web of trust, so that you can rely on a US government dataset even if you loaded it from a Russian web site. We can talk about standardizing how open-data sites notify other systems about new or updated datasets (e.g. using RSS or Atom), so that sites can easily and automatically mirror one another. And we can talk about discarding — in our field — broken concepts like the Creative Commons attribution licenses, which actually discourage sharing and using open data.
If we get this stuff right, we’ll be ready the next time data.gov goes down, when open data really matters to the world. And maybe we’ll see the pirates starting to imitate us.
1.1. Limitations
Like all technology-related architectural designs, this is a vision of how XXX could work, not a blueprint for how XXX will work — we will change much or all of this architecture during the implementation, some in minor ways, and some in major ways, as we discover more about the problem space, change our priorities, and work around the strengths and limitations of the people and technologies involved. The architecture in this report points the way for the implementation team at the beginning, and captures our initial understanding and goals. The XXX team will endeavour to keep the specification up to date as the actual architecture emerges during implementation.
Neal Ford, “Evolutionary architecture and emergent design: Investigating architecture and design.” IBM developerWorks, 24 February 2009.
Martin Fowler, “Who Needs an Architect.” IEEE Software, July/August 2003.
Eric S. Raymond. “The Cathedral and the Bazaar.” Version 3.0, 2000.
If you work anywhere near IT, you’ve probably been confronted with Everett Rogers’ Bell Curve on more than one eager PowerPoint slide deck, explaining how the presenter’s new standard, product, or initiative will move through the five audiences of Innovators, Early Adopters, Early Majority, Late Majority, and Laggards. There’s even a book, Geoffrey Moore’s 1991 Crossing the Chasm: Marketing and Selling High-Tech Products to Mainstream Customers, about how to move your product from the Early Adopter to the Early Majority (mainstream market) stage. After a (very) small amount of background reading, though, I’m starting to wonder if people are misusing Mr. Rogers’ curve, confusing the ideas of “innovation” and “product.”
In 1962, Prof. Rogers, along with two other original collaborators, was studying how new practices spread among farmers in the US Midwest, not how, say, a new model of harvester found its market. In other words, they were studying what people do, not what people buy.
Does it matter? Consider the 1995 Netscape IPO, which started the dot.com bubble. The World Wide Web was a genuine innovation — a new way of doing things (building on existing technology) — and by 1995, it was clear that the Web was going to be important. However, people confused Netscape’s browser (a specific product) with the Web itself (an innovation), and assumed that by investing in the company that made the browser, they were investing in the Web. The two turned out to be quite separable — Netscape’s browser tanked, while the idea of the Web grew stronger and stronger. Later, investors would pump up Apple’s and Facebook’s stock in a similar way, assuming that by doing so, they were investing in the innovations of mobile computing and social networking. Apple in particular has been able to deliver revenue to justify some of its stock price, but as it loses more and more of the smartphone and tablet market to Android manufacturers, the difference between innovation (mobile computing) and product (iPhone/iPad) becomes very clear.
Despite all that, most of the time I see people using Rogers’ curve, they’re using it for a specific product, project, or initiative, not for new ideas or practices. Does it make sense in that context, or are we being as lazy and confused as the 1995 Netscape investors? Rogers based his 1962 book on over 500 studies of how ideas spread; it would be interesting to see if there’s similar research to back the claims of over-eager sales VPs and project managers.
The open-source Android application Andor’s Trail is a top-down role-playing game, sort of like Colossal Cave or Zork with pictures (for my fellow old-timers). I tried it out on my Nexus 7 tablet and enjoyed it — the story line isn’t finished yet, but over two years, the community has built a strong and engaging game. But that’s not the point of this post (we already know how open source works).
This morning, I was in a coffee shop with just my phone, and decided to do the following:
Building open-source apps didn’t used to be this easy. In Linux, you generally install a long list of library dependencies first, then run some kind of configuration, then do a make, figure out what went wrong/is missing, do another make, go on a mailing list to find out what went wrong, download a different version of some of the build tools or libraries, try again, etc. until eventually (maybe a few hours or days later) you raise your arms in triumph and shout that you’ve succeeded in building the app. After that, you’ve earned the right to be smug on mailing lists and tell the noobs to RTFM when they have the same problems you did.
Is this a glimpse at the future of open source? Cloning, building, and installing a dev build (7.1dev) was so easy that a non-developer could do it; on the difficulty scale, I’d say that it’s easier than figuring out how to upload a video or picture to Facebook.
Right now — despite the fact that Android itself is released under the Apache 2.0 license — open source in the mobile world is still in its infancy. We do have the F-Droid app store for Android, stocking only open-source apps. However, what if mobile itself became the predominant open-source development platform, with new types of tools, collaboration, and social cultures? Keep your eyes on this one.
Maybe it’s time to start thinking like a business, and figure out how to turn those liabilities into assets. If PRISM brought real benefit to citizens’ lives, would they look on it more favourably? Would they, perhaps, even be willing to give money to the program?
With that in mind, I’d like to propose a new, pay-for-use government web site named prism-suggests.gov. The web site will have a natural-language interface where people can ask questions about themselves or their friends and family, and get suggestions back. Here are some examples:
Q: What book should I read next?
PRISM: You bought Infinite Jest three months ago, but I’ve noticed that you always fall asleep when you try to read it on the subway on your way home. Perhaps something lighter, like Fifty Shades of Grey, would be better, especially since you go out of your way to read Facebook comments about it.
Q: What kinds of clothes should I buy Maria for her birthday?
PRISM: Maria’s credit-card and Google-search history suggest that it’s 87.3% likely she has a bladder control problem, so it would be best to buy things that fit loosely around the hips, to leave room for an adult diaper.
Q: Is Phillip going to ask me out on a date?
PRISM: Unless Phillip has been doing a “research project” for the last 3 years, the type of porn he views on his smartphone suggests that yours is not the gender that interests him.
Q: How can I get a promotion at work?
PRISM: Casually mention the phrases “St Lucia,” “offshore investment,” and “IRS” loudly in hearing range of your boss, then come back a week later and ask for the promotion.
If the site charged $5 for each question, imagine how fast the US federal deficit would shrink! And, of course, all those people whining about “privacy rights” and the so-called “Constitution” would be drowned out by billions of satisfied customers.
The Economist wonders why English has borrowed so few new words from China over the past 30 years, despite the country’s rise as a major world economic power. The Hermit Kingdom isn’t a hermit any more — Chinese government and business are everywhere, not only trading money for influence and business opportunities in the developing world, but also buying up American companies in sectors like general aviation.
Unlike the European colonial powers of 150 years ago, however, China does not seem interested in exporting its cultural values; for example, Beijing may donate a fleet of Chinese-made cars to a Latin-American government, but it doesn’t also set up missionary schools to try to teach the country’s elite to speak Mandarin and follow Confucian principles.
Can Chinese language and culture spread on its own, without a deliberate attempt to promote it, the way Europeans once promoted English, Spanish, Dutch, French, or Portuguese language and culture in the Americas, Africa, and Asia? Would China even want its culture to spread (beyond just cuisine)?
One interesting model is Japan, which has also made little attempt to promote its language outside its borders. For decades, while Japan rose as a world economic power, little happened. Then, suddenly, Japanese culture exploded on the rest of the world. American students, for example, not only eat sushi, but read manga, sleep on futons, and use Japanese terms like “panchira” to classify their pornography. And they’re often prone to take Japanese language classes, despite the complete lack of Japanese-sponsored missionary schools.
Now, we’re starting to see the same thing happen with South Korean culture. We might not be borrowing many new Mandarin words into English right now, but will that still be true in 20 years?
Even though big web sites use lots of so-called “enterprise” technology, and big companies and government departments create browser-based applications, there’s still a huge chasm between web developers and enterprise developers.
We’ve been flooded with clichés and stereotypes about both sides — enterprise developers do everything the hard way, web developers don’t understand security and reliability, etc. — but it’s best not to take that too seriously. A lot of my consulting work and personal projects straddle the line between Enterprise and Web, so I’ve had a decade and a half to observe people and processes on both sides.
I’ll post more about this later, but here are two differences that strike me right away:
Enterprise IT projects are always about integration. We’re not talking about fun integration with a REST API on the web, but nasty, ugly integration with legacy systems as old as your parents, using custom data formats and unpronounceable character encodings out of the Mad Men era, like EBCDIC (if you’re lucky).
In web dev, whether you’re using SQL or a noSQL approach, you almost certainly own and manage your application’s data (unless you’re building one of those doomed Twitter or Facebook mashups). In an enterprise project, most of your data is coming from somewhere else (the 1970s mainframe at the Oakland data centre, the 1995 PowerBuilder app used by 350 analysts in Hong Kong, etc.). It comes veeeeery slooooowly, and it’s unreliable, and it’s almost guaranteed to be out of sync with the data you’re receiving from other sources (so forget about strict referential integrity). There’s nothing you can do about that, because huge parts of the enterprise are based around those legacy systems, and there’s no one left alive who knows how to change them anyway. Your whole $50M system might depend on data sent as a CSV email attachment every Tuesday night, and rejected 55% of the time because it’s malformed.
There’s a lot of snake oil out there that promises to “fix” this problem — ESB products, ETL products, WS-* products, etc. — but these all address the easy parts, near the middle, not the hard parts, at the edges (and sometimes they make even the middle more difficult than it needs to be).
The benefit of all this mess, though, is that an enterprise application designer is always thinking about distributed data, something that web developers talk about but don’t always really get. It’s hard to imagine a CMS like Drupal or WordPress — that naively assumes it can keep all the information that it presents in its own (preferably MySQL) database — coming out of the enterprise.
Really, it’s true. Developers working for government, or Fortune 500 companies, on average, aren’t very good. Of course, there must be some real talent hiding here and there, but on balance, coding for most enterprise employees (as opposed to outside contractors) is a 9–5 drudge job that they’re happy to leave at the end of the day. They’re nice people, but they’re not passionate about IT the way you and I are, and they’re not interested in becoming so.
This talent deficit has pretty serious implications for building projects in-house and for maintaining projects from any source — it means that enterprises micro-manage their developers in a way that a hotshot web developer would never tolerate. Part of that is just the overhead of working in a big team — even web companies do code reviews and write detailed requirements when they get big enough — but a lot of it is just a matter of not trusting developers to do the right thing on their own. There are huge numbers of tools out there to count, manage, audit, poke, prod, and otherwise abuse enterprise developers, and those tools are more widely-available in Java than in any other environment, hence the enterprise’s love of Java.
It’s hard to know where the fault is here: would good developers work for enterprise if the working conditions were better, or would they still run off to small startups or consulting for the variety and adrenaline rush? Would bad enterprise developers grow into average or even good ones if they were given more trust and autonomy? In any case, if you’re designing an application for enterprise, don’t expect things that seem trivially simple to you to seem simple to the developers.
The result of all this is that, even if you have a hotshot team of consultants and developers initially building an enterprise system, you have to design it so that mediocre technicians and developers can maintain it for the 10-30 years after you all leave. The enterprise has to be able to hire people with (generally useless) certifications as “Sharepoint specialist,” “Oracle DBA”, or whatever, and the system has to contain few surprises for them. Nothing cutting-edge, please, because they probably didn’t cover it in their certification courses.
This post isn’t really about Feedly, though, but about open specs and standards. The reason I can still keep reading all the same blogs, newspaper headlines, and status updates is that Google Reader didn’t control them — they’re publicly available on dozens or hundreds of independent web sites, all following the same set of simple, free syndication standards. Even the way Feedly imported my list of feeds is standards-based.
If I’d been reading all that information on Twitter, Facebook, Google+, Tumblr, or any other similar proprietary, centrally-controlled service, I wouldn’t be able to rebound so easily; even worse, if I’d been relying on one of those locked-in services to publish my content, it would have stolen my audience as well. Think of it as the equivalent of your ex knowing your bank card PIN, and emptying your accounts before running off.
I’ll stick with Feedly as long as we’re both having fun, but as soon as the relationship gets stale, we can part ways as friends and move on. While there may be only one Twitter or Facebook, there are lots of RSS/Atom feed readers out there.
Think – how often do you hear of someone dropping a laptop computer into a toilet or leaving it in a restaurant? How often do you see spider-web cracks on a laptop screen from a fall? Small, hand-sized computing devices are much more likely to be involved in accidents than larger ones, because we carry them around with us all the time, in pockets, purses, glove compartments, etc.
Reading this list (via John Hardy on G+), I think the same kind of distinction applies to long firearms vs handguns – handguns seem to have a lot more stupid/accidental discharges than long firearms like shotguns and rifles, because they’re the smartphones of the gun world. The problem is that the consequences can be much more serious than having to leave a smartphone in a bowl of rice overnight to dry out.
«Pure science research drops sharply at National Research Council» (Ottawa Citizen, 2013-05-08)
We’re learning the danger in Canada of having let too many sectors become dependent on federal money. When the arts, NGOs, scientific research, many kinds of R&D business, and even our main national broadcaster all rely on government subsidies or funding, they’re all immediately vulnerable to any ideological shift in the government that doles out their money.
The Liberals created this dependency to ensure that the sectors would reflect their (central-Canadian, urban, moderate left) values in the 1990s, 70s, 60s, and earlier, but in doing so, they’ve handed a powerful weapon to their (western, suburban/rural, moderate right) successors and ideological opponents.
The US system has many problems of its own, but its relatively-independent arts sector — especially film, TV, and music — and the widespread existence of private universities and foundations, ensures that their government can’t entirely turn off the tap for arts, research, or NGO work that it doesn’t like.
The question came up after this Twitter exchange with Walter Robinson, an intelligent and moderate small-c conservative blogger and tweeter who later became a lobbyist for the pharmaceutical industry:
The irony, of course, is that Mr. Robinson himself was accusing the Ottawa Citizen of not including enough context in what it wrote.
I believe that it’s right that someone should include a disclaimer when there’s a clear conflict of interest (e.g. you’re being paid specifically to defend an industry), but to be fair to Mr. Robinson, how, exactly, can you do that in 140 characters? “Disclaimer: I am a paid lobbyist for the pharmaceutical industry” would use almost half of the space available in any tweet.
Mr. Robinson suggested in his reply above that there’s no hidden conflict, because someone can visit his web site or read old tweets and see who he works for; but if someone retweets his tweet, etc., they don’t have that context, and it looks like independent evidence from an informed pundit who wanted to weigh in on the discussion.
Given the limitations of Twitter, I think the right solution is to flag your tweet with a disclaimer hashtag that indicates that it is not independent (that doesn’t mean that it’s dishonest or misleading; just that you’re in a potentially ethically-compromised position). The hashtag wouldn’t tell anyone exactly what the conflict is, but it would at least tell them that they should go looking. Then, a person could (for example) look at Mr. Robinson’s Twitter profile, visit his personal site, and see that the pharmaceutical industry pays him.
Do any such tags already exist? They need to be short, and not conflict with an existing popular tag. Here are some suggestions: “#COI” (conflict of interest), “#conflict”, “#disclaimer”, “#disclaim”, or even “#paid”.
Suggestions?
(For the record, I believe that what Mr. Robinson wrote in his original Tweet is true, but I still think it’s important to disclose a major conflict in the tweet itself.)
This is an important feature for any web repository that’s shared among multiple users or client systems. For example, let’s say that User A wants to upload a file named funny-cat-pic.jpg and User B also wants to upload a file named funny-cat-pic.jpg.
Using HTTP PUT, they can both specify that it belongs in /pix/funny-cat-pic.jpg, but then the second one will simply overwrite the first one.
Using HTTP POST, on the other hand, they could each upload their files, and then receive HTTP 201 (Created) responses providing the URLs that the WebDAV server chose, e.g. /pix/funny-cat-pic-001.jpg and /pix/funny-cat-pic-002.jpg. The server could guarantee that a POSTed file never overwrote an existing one.
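As an illustration only (no WebDAV server I know of actually accepts POST this way, and the host below is hypothetical), the upload could then be as simple as a single curl call, with the response’s Location header reporting whatever URL the server chose:
$ curl -i -X POST -H 'Content-Type: image/jpeg' --data-binary @funny-cat-pic.jpg https://webdav.example.org/pix/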
I know I can just write a short Python or PHP script to do the posting, but before I do that, am I missing anything? Supporting POST seems like it should have been an obvious choice from the start.
Update: Thanks to Tim Bray for the HTTP status correction. Tim also mentioned that AtomPub supports POST, but not, unfortunately, hierarchical organisation of resources (e.g. directories).