The name of the game: reduce cognitive burden. The method, in a nutshell:
"Function orientation" is a style of imperative programming that reduces side effects through heavy use of pure functions. The goal of FOP is to use only local side effects - local to a function, that is. This also means treating data as immutable, except locally. The following simple function illustrates the point:
lengths strs =
  lens = []
  for index i in strs
    lens[i] = strs[i].length
  lens
In a functional language, we would implement this with recursion; here we use a side-effecting for-loop. The lengths function is written in an imperative style, but to its caller it is a pure function: it does not alter its input or its environment, and it does not depend on any element of the environment that might change.
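The same function in plain JavaScript (a sketch; the function name matches the pseudocode above):

```javascript
// Pure to the caller: the only mutation is of the local `lens` array,
// which never escapes until it is complete.
function lengths(strs) {
  const lens = [];
  for (let i = 0; i < strs.length; i++) {
    lens[i] = strs[i].length; // local side effect
  }
  return lens;
}
```

The input array is never touched, so callers may treat lengths as pure even though its body is an ordinary imperative loop.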
Point x y =
  this.x := x
  this.y := y

Point.prototype.length = \ -> Math.sqrt # x*x + y*y

Point.prototype.setX v = new Point v this.y
Point.prototype.setY v = new Point this.x v
None of the prototype methods of Point alter the object's data. Instead, we return a new object with the desired properties. Nothing prevents us from defining all our objects in this manner, and little is lost in doing so. The one genuine concern is performance, but there should be little to worry about unless you have a large object (one with many members) that you are updating very frequently. Such a design sounds side-effect-ridden, and should probably be refactored. At worst, you isolate your side-effecting code as much as possible, and manage the effects with code conventions.
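In plain JavaScript, the Point above might look like this (a sketch; the method names follow the pseudocode):

```javascript
// A Point whose "setters" never mutate: each returns a fresh Point.
function Point(x, y) {
  this.x = x;
  this.y = y;
}
Point.prototype.length = function () {
  return Math.sqrt(this.x * this.x + this.y * this.y);
};
Point.prototype.setX = function (v) { return new Point(v, this.y); };
Point.prototype.setY = function (v) { return new Point(this.x, v); };
```

With this design, `const p = new Point(3, 4)` gives `p.length() === 5`, and `p.setX(0)` leaves p itself untouched.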
In imperative languages, we give names to containers for data. Writing var x means "x is a container," and x = 5 means "put the value '5' into x." Strictly speaking, then, imperative languages do not let us change data; they let us change which data a container holds.
In a purely functional language, we give names to the data
itself. In Haskell,
x = 5 means "x is the value '5'". If we then write x = 6, we get an error, since we just said x is 5, and 5 is not 6!
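JavaScript can illustrate both views, with const as a rough analogue of naming the value itself (for primitives, at least):

```javascript
// Container view: x names a box whose contents may change.
let x = 5;
x = 6; // fine: we put a new value into the box

// Value-naming view: y simply *is* 5; rebinding it is an error.
const y = 5;
// y = 6; // TypeError: Assignment to constant variable.
```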
What we really do in an imperative language is move our labels around. If I call obj.setName("Bob"), the name obj no longer represents the object it previously did; it now represents a similar object whose name is "Bob". In a functional context, where we name the data rather than the container, we would define setName to return a new object rather than alter its host. Then we write bob = obj.setName("Bob"), and we have two different names for two different pieces of data.
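A sketch in JavaScript (Person and its field are my invention; the original does not say what obj is):

```javascript
// setName returns a new object rather than altering its host.
function Person(name) {
  this.name = name;
}
Person.prototype.setName = function (name) {
  return new Person(name);
};

const obj = new Person("Alice");
const bob = obj.setName("Bob");
// obj still names the "Alice" data; bob names the "Bob" data.
```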
Pure functions are better than functions with side effects for the same reason local variables are better than global ones: they make code easier to think about. If you pass an object as an argument to a function, and the function alters that object, you get no notice; there is no formal indication that anything changed at all. That's fine, as long as you remember that that's what the function does. But as an application grows, the cognitive burden caused by its side effects grows with it. Pretty soon somebody new comes onto the project, and they not only have to learn the code base, they have to learn these unwritten logical relationships, too. No matter how many unit tests you write, this is painful. Fortunately, it is also unnecessary.
Yes. Function orientation isn't about having no side effects; it's about keeping side effects under control by reducing and localizing them. That said, it is possible to express the effects of I/O as the inputs and outputs of functions, rather than as "magic" effects that happen at arbitrary points in your code. See customized function chaining for more details.
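One common shape for this idea (a sketch, not the chaining technique the author refers to): keep the transformation pure, and confine actual I/O to the program's boundary.

```javascript
// Pure core: a function from input text to output text.
function shout(text) {
  return text.trim().toUpperCase() + "!";
}

// Impure shell: the only place I/O actually happens (Node.js assumed).
function main() {
  process.stdout.write(shout("hello") + "\n");
}
```

Everything worth testing lives in the pure core; the effectful shell stays thin and obvious.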
It is often claimed that coding-by-convention is unreliable and "dirty": if a particular usage is expected, then it should be enforced by the interface, or so the claim goes.
But conventions are essential to writing software, and nothing can truly be enforced. Nothing prevents me from feeding your code to a compiler or interpreter that treats private members like public ones, or doing any other thing I choose. It is only by convention that we use standard interpreters, or, in fact, the language's standard syntax! Languages themselves are merely collections of conventions.
Moreover, every project I've worked on has conventions specific to it, whose use is mandatory. These conventions are rarely machine-checked, instead being checked by humans during code reviews.
Haskell programmers tend to enforce upon themselves a strict, concise, highly abstract style that is all but incomprehensible to those of us who are not researchers. The workaday programmer cannot afford to adopt such a style.
In any case, you should always indicate the mutability of a variable with a coding convention.
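For instance (the mut prefix is just one convention, invented here for illustration):

```javascript
// Convention: the rare mutable binding gets a "mut" prefix, so every
// mutation site stands out during code review.
let mutCount = 0;   // mutable by convention: expect reassignment
const limit = 3;    // everything unprefixed is treated as immutable
while (mutCount < limit) {
  mutCount += 1;
}
```

The compiler enforces none of this; reviewers do, which is exactly the point of the preceding paragraphs.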