Heard this first applied to theories in Physics. Any not-over-complicated theory that effectively explained the facts was – as I understood – considered an “elegant” theory. (My original interest – and college degree – was in Physics. I wanted to build starships.)
Came up, long ago, with rules for “elegant” distributed applications. The first significant distributed application I spent time with was the early FileNet system, well over twenty years ago. A strong interest in scaling to large (by the standards of the time) deployments forced careful thought about performance. Pretty much everything I have worked on before and since has had a network in the middle, so I’ve had time and reason to think about the subject. Changing technology does not change the inherent nature of distributed systems, so the relevant set of notions will pretty much always apply.
(What continues to surprise me is that individuals and outfits still get these same bits wrong!)
In the interest of hitting the usual points once….
When building distributed applications, there are some basic principles you should always keep in mind.
- Minimize the amount of data crossing the network.
  - The capacity of the network is always limited. Capacity is large in the usual development setup, when both server and client are on the same segment. In real use the available capacity is almost always less, and sometimes much less.
- Minimize the number of round-trips across the network.
  - Networks always have latency. This has nothing to do with the speed of a network (in terms of bits/second). I have written about this before. For an interactive application the ideal is one round-trip per user action.
- Shift computation from the server to the clients, where practical.
  - There are almost always more clients than servers. If you can shift computation to the clients, you will get better overall throughput.
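The round-trip rule can be sketched in client-side JavaScript. This is a hypothetical illustration, not any particular library: `makeBatcher` and `batchFetch` are made-up names, and the batched call stands in for a real network request. The idea is that several lookups issued in the same tick collapse into one round-trip.

```javascript
// Hypothetical sketch: instead of one request per id (N round-trips),
// collect ids briefly and issue one batched request.
function makeBatcher(batchFetch, delayMs = 10) {
  let pending = [];
  let timer = null;

  return function request(id) {
    return new Promise((resolve) => {
      pending.push({ id, resolve });
      if (!timer) {
        timer = setTimeout(async () => {
          const batch = pending;
          pending = [];
          timer = null;
          // One round-trip for the whole batch. Error handling omitted
          // for brevity in this sketch.
          const results = await batchFetch(batch.map((p) => p.id));
          for (const p of batch) p.resolve(results[p.id]);
        }, delayMs);
      }
    });
  };
}

// Usage (batchFetch would be a real network call in practice):
// const getUser = makeBatcher(fetchUsersByIds);
// const [a, b] = await Promise.all([getUser(1), getUser(2)]); // one trip
```

The short delay window trades a few milliseconds of extra latency for fewer trips across the wire; whether that trade is worth it depends on the application.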
From the above principles you can derive further guidelines for building web applications.
- Code complexity belongs on the server. Code in the client should be simple.
- Large iterations belong mainly on the server, not the client.
  - Compiled code is usually much more efficient than interpreted code. The server can (or should) use compiled code. In the case of web applications, the client code is interpreted code.
- Large data belongs on the server, not the client.
  - The (sometimes) narrow network channel, and the relative efficiency of compiled code, both argue for keeping large data on the server.
- Use the strengths of the web browser.
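One way to keep large data on the server is to ship only a page (or a summary) of it. Here is a minimal sketch with made-up names (`pageOf` is hypothetical); in a real application the slicing would happen in a server-side handler or a database query, not in memory.

```javascript
// Hypothetical server-side helper: the full list stays on the server;
// only one page, plus a total the client needs for its paging controls,
// crosses the network.
function pageOf(items, page, pageSize) {
  const start = page * pageSize;
  return {
    total: items.length, // client can render "page X of Y" from this
    page,
    items: items.slice(start, start + pageSize),
  };
}
```

The payload size is now bounded by `pageSize` regardless of how large the dataset grows, which serves both the narrow-channel and the large-data guidelines at once.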
Some quotes, with names omitted to protect the guilty.
> Arrays have no semantics. They are not first-class collections. Do not use them in any public API, regardless of the language you use. Wrap arrays with a public type that exposes semantics.
The semantics of arrays as collections and iterations are simple and perfectly suited for small scripts. For the most part, you do not need anything more. The domain for scripting is small, concise, and hideously flexible code. More elaborate solutions might make sense in large server-side code. Client-side script or structures shipped between client and server should only be as elaborate as is needed – and no more.
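As a small illustration of that point, a plain JavaScript array already carries the collection semantics a small script needs, with no wrapper type in sight:

```javascript
// Plain arrays suffice for small client-side scripts: filtering,
// mapping, and iteration are built in.
const prices = [5, 12, 8, 20];
const affordable = prices.filter((p) => p <= 10); // [5, 8]
const labels = affordable.map((p) => `$${p}`);    // ["$5", "$8"]
```

Wrapping this in a "first-class collection" type would add code to write, read, and ship to the client, for no gain at this scale.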
> You’re right in the sense that JS is “good enough” for most of basic usages, but almost useless for writing bigger software. It’s the reason why there’s been recently a lot of higher level languages that generates JS code. Either Java (GWT) or haXe (http://haxe.org)