
Wednesday, April 23, 2014

Conceptual Integrity Part 1 ... or Why Committees Can't Do Doodly

Last night at DevTalk LA (that Geeky local book club), having finished our book, we read two articles. We do this in-between books, to allow members a little extra time to get the book in-hand. Next week, we'll start the eagerly anticipated RESTful Web APIs by Richardson & Amundsen.


This book appears to be a complete rewrite of their earlier RESTful Web Services (2007). That one was, by universal acclamation of DevTalk LA, one of the better books we've recently read, and advance opinion holds that the rewrite takes things to the next level and clears up all our questions.

The two articles can be (and should be!) read on-line here:

We spent most of the time on Paul Graham's essay.

If you haven't read Paul Graham, you need to stop reading here and get on with it. Buy his book. Read his essays. Just do it. The short version of why is this: Paul Graham is a twice-successful startup entrepreneur who built his success upon technical excellence. He's a big proponent of the rapid development he found possible through the use of Lisp, of which he is an advocate; he is presently designing and promoting Arc, a Lisp derivative of his own. But there are two additional factors that make him a compelling figure.

First, he's done something other than technology. He's also a painter. And the interplay of what he has learned from both shows clearly in his very insightful and reflective essays. Secondly, he understands, in a very no-nonsense way, the value and importance of passion. From last night's essay, for instance, came this timeless quote:
To make something good, you have to be thinking, "wow, this is really great," not "what a piece of shit; those fools will love it." 
Which brings us to the point of today's essay.

Conceptual Integrity.

It's important.

Really important.

Paul Graham gets it, too.

As he puts it, (highlights mine)
Notice all this time I've been talking about "the designer." Design usually has to be under the control of a single person to be any good. And yet it seems to be possible for several people to collaborate on a research project. This seems to me one of the most interesting differences between research and design. 
There have been famous instances of collaboration in the arts, but most of them seem to have been cases of molecular bonding rather than nuclear fusion. In an opera it's common for one person to write the libretto and another to write the music. And during the Renaissance, journeymen from northern Europe were often employed to do the landscapes in the backgrounds of Italian paintings. But these aren't true collaborations. They're more like examples of Robert Frost's "good fences make good neighbors." You can stick instances of good design together, but within each individual project, one person has to be in control. 
I'm not saying that good design requires that one person think of everything. There's nothing more valuable than the advice of someone whose judgement you trust. But after the talking is done, the decision about what to do has to rest with one person.
Why is it that research can be done by collaborators and design can't? This is an interesting question. I don't know the answer. Perhaps, if design and research converge, the best research is also good design, and in fact can't be done by collaborators. A lot of the most famous scientists seem to have worked alone. [ed. note: see my "Never Hire The Greatest Scientist The World Has Ever Known"] But I don't know enough to say whether there is a pattern here. It could be simply that many famous scientists worked when collaboration was less common. 
Whatever the story is in the sciences, true collaboration seems to be vanishingly rare in the arts. Design by committee is a synonym for bad design. Why is that so? Is there some way to beat this limitation? 
I'm inclined to think there isn't-- that good design requires a dictator. One reason is that good design has to be all of a piece. Design is not just for humans, but for individual humans. If a design represents an idea that fits in one person's head, then the idea will fit in the user's head too.

I wanted to collect these ideas into one place, and last night was a wonderful impetus to do so. But this posting is already too long, so it looks like we have to kick off a series. And what a wonderful topic for a series! Conceptual Integrity. And I'm very happy to have one of my heroes, Paul Graham, give as forceful and thoughtful a kickoff as could be imagined.

I Remain,

TheHackerCIO

Tuesday, October 29, 2013

New Wine or Technology In Old Bottles!

"Neither do men put new wine into old bottles: else the bottles break, and the wine runneth out, and the bottles perish: but they put new wine into new bottles, and both are preserved." [ref]

New technology used in old ways also breaks the bottles and leads to pathological systems. The best summary and critique of this pathology, as it exists in Architecture (not Software or Enterprise Architecture, but real, putting-up-buildings Architecture), is this passage about the Parthenon, from Ayn Rand's The Fountainhead:
"The famous flutings on the famous columns---what are they there for? To hide the joints in wood--when columns were made of wood, only these aren't, they're marble. The triglyphs, what are they? Wood. Wooden beams, the way they had to be laid when people began to build wooden shacks. Your Greeks took marble and they made copies of their wooden structures out of it, because others had done it that way. Then your masters of the Renaissance came along and made copies in plaster of copies in marble of copies in wood. Now here we are making copies in steel and concrete of copies in plaster of copies in marble of copies in wood. Why?"
Unfortunately, this happens all too often in our world of technology. Consider, as an example, one great Bloated Behemoth Enterprise, whose technology needs were well in place and large-scale even thirty or forty years ago, in the days when Mainframes bearing Tape-drives ruled the earth. 

A computer-historical perspective is helpful here, and luckily TheHackerCIO has spent some time both talking to the old-timers (one goes to our local Users Group) and reading about the bad old days. In those ancient times, programs were structured around the tape-based file system. A typical program would read, as input, the Customer Master tape, which contained an entry for every customer, and a second tape input -- let's say a New Orders tape -- containing a row for each new order needing processing, already sorted by customer ID.

The program would then read a customer order, advance the Customer Master file until it located that customer, pull out the data it needed to complete the order, and record the finalized purchases on yet another tape. Note that when building a new system in this kind of ecosystem, one must always source things from existing files, stored on tapes. At the end of the job-run, a new tape has been produced, which is input to the next job. And these jobs are all run on a particular schedule, carefully contrived, and supervised by the operators.
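For readers who never lived through that era, here is a minimal sketch of the pattern in Python. The file names, column names, and record layout are my own hypothetical stand-ins; the point is the forward-only, merge-style pass over two sorted "tapes":

    import csv

    # Hypothetical sketch of a classic master/transaction tape run.
    # Both inputs must be sorted by customer_id (assumed zero-padded so
    # string comparison matches sort order) -- a tape can only be read
    # forward, and the whole design flows from that constraint.
    def run_order_job(master_path, orders_path, output_path):
        with open(master_path, newline="") as master_f, \
             open(orders_path, newline="") as orders_f, \
             open(output_path, "w", newline="") as out_f:
            master = csv.DictReader(master_f)   # the Customer Master "tape"
            orders = csv.DictReader(orders_f)   # the New Orders "tape"
            writer = csv.DictWriter(out_f, fieldnames=["customer_id", "name", "item"])
            writer.writeheader()

            customer = next(master, None)
            for order in orders:
                # Advance the master "tape" until we reach this order's customer.
                while customer is not None and customer["customer_id"] < order["customer_id"]:
                    customer = next(master, None)
                if customer is not None and customer["customer_id"] == order["customer_id"]:
                    # Record the finalized purchase on the output "tape".
                    writer.writerow({"customer_id": customer["customer_id"],
                                     "name": customer["name"],
                                     "item": order["item"]})

One pass, strictly in sorted order, producing a new file that feeds the next job in the schedule: that constraint is the old bottle.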

Now enter the Relational Database. The point of this technological innovation, and the internal genius of its principles, was to have a Master Database for the enterprise. Instead of files, all data would be stored in tables. Now updates could be made transactionally to the system of record in real time, at the same time as others were querying that same data to determine how things were changing. To take our example, as new orders were placed, it was now possible to obtain the necessary data from the CUSTOMER table and create a row in a NEW_ORDERS table to handle it. As the RDBMS evangelists put things, this tool allowed for:

  • reduced data redundancy
  • increased data availability
  • increased data security
And there was a whole methodology for properly "normalizing" the data, thereby eliminating the double-update problem, and eliminating any concern over determining the system of record by replacing the many systems with one universal one.
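To make the contrast with the tape style concrete, here is a minimal sketch of what the evangelists had in mind, using Python's built-in sqlite3 module. The table and column names echo the example above and are assumptions of mine, not anyone's production schema:

    import sqlite3

    conn = sqlite3.connect("enterprise.db")  # one hypothetical system of record
    conn.execute("CREATE TABLE IF NOT EXISTS CUSTOMER ("
                 "customer_id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("CREATE TABLE IF NOT EXISTS NEW_ORDERS ("
                 "order_id INTEGER PRIMARY KEY AUTOINCREMENT, "
                 "customer_id INTEGER REFERENCES CUSTOMER(customer_id), "
                 "item TEXT)")

    def place_order(customer_id, item):
        # One real-time transaction against the live system of record:
        # no unload, no sort, no batch window. Readers can query the
        # same data while this runs.
        with conn:  # commits on success, rolls back on error
            row = conn.execute("SELECT name FROM CUSTOMER WHERE customer_id = ?",
                               (customer_id,)).fetchone()
            if row is None:
                raise ValueError("unknown customer")
            conn.execute("INSERT INTO NEW_ORDERS (customer_id, item) VALUES (?, ?)",
                         (customer_id, item))

No intermediate files, no job schedule: the order is visible to every other program the moment the transaction commits.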

Unfortunately, the Bloated Behemoth Enterprise dealt with Databases differently. They saw them as Yet-Another-Tape-Based-File System. 

And so, each new development project, at its commencement, began life by creating its own database, just as it would once have defined a new Tape-File layout. Projects sourced their databases by creating unload jobs, just as they would once have unloaded one tape and loaded the data into a new Tape-File layout. They put new wine into old bottles.

And after thirty years of such accretions, they now have tens of thousands of databases, in every dialect of SQL possible, scattered across myriad platforms, with a tangled web of sourcing one database from the unloaded output of another, all kept in several "AutoSys"-style batch-job schedulers, so that the proper and necessary order of loading can take place.

Which is to say -- for those unacquainted with this kind of BBE -- that at a particular time in the evening (typically midnight) the online systems are brought down, the databases are all quiesced, backups are taken, then crucial batch jobs run, many of which consist of unload jobs to extract data from one database, just as if it were a Tape, and load up another. 

As time has progressed, this batch window grows longer and longer, to progressively consume the evening -- not to mention the disk space available. 

This is a perfect example of the pathology of putting new technology into old bottles. 

And, en passant, it is an example of why Architecture must never adopt the "timeless (and thoughtless) way of building" that merely tinkers with using new things in the same old way, without spending the time to ensure that a proper understanding of the new way is adopted and promulgated.

Consider this a Cautionary Tale! Always seek to know and find the inner logic of a new technology. Always seek to ensure that new wine gets the proper new bottle it needs. Otherwise, you'll want to get drunk when you see the results.

I Remain,

TheHackerCIO



Monday, October 28, 2013

Software Architecture

Today, even the notion that there should be software architecture is up for debate! Regardless of what verdict one arrives at, it is a good thing to examine whether something should exist prior to investing time studying it.

[editorial note: this post commences an occasional series on Software and Enterprise Architecture, in which I hope to collect all the principles into one convenient place. We begin at the beginning: should there even BE a discipline of software architecture? ]

There are a number of "nay-sayers." But I think they can all be classified as variants on the notion of "evolutionary design," which contrasts strongly with the notion of engineering, design, and deliberative architecture. The idea is that design is an emergent property that will manifest itself as the work unfolds. The Wikipedia article puts it this way:

Emergent design in agile software development

Emergent design is a consistent topic in agile software development, as a result of the methodology's focus on delivering small pieces of working code with business value. With emergent design, a development organization starts delivering functionality and lets the design emerge. Development will take a piece of functionality A and implement it using best practices and proper test coverage and then move on to delivering functionality B. Once B is built, or while it is being built, the organization will look at what A and B have in common and refactor out the commonality, allowing the design to emerge. This process continues as the organization continually delivers functionality. At the end of an agile release cycle, development is left with the smallest set of the design needed, as opposed to the design that could have been anticipated in advance. The end result is a smaller code base, which naturally has less room for defects and a lower cost of maintenance.[1]
As emergent design is heavily dependent upon refactoring, practicing emergent design without a comfortable set of unit tests is considered an irresponsible practice.[citation needed]
I have to laugh at the last line: "practicing emergent design without ... [snip] ... is considered an irresponsible practice." It reminds me of the Molière comedy, Le Bourgeois Gentilhomme, where the man finds he has been speaking prose all his life and didn't even know it. Likewise, many a team has been practicing "emergent design" for decades and thought they just hadn't used an architect.
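To make the Wikipedia description concrete, here is a hypothetical before-and-after in Python; feature A and feature B are invented stand-ins, and the "design" that emerges is nothing grander than a shared helper:

    # Before: features A and B were delivered independently, each with
    # its own copy of the validation logic.
    def ship_order(order):          # feature A
        if not order.get("customer_id"):
            raise ValueError("missing customer_id")
        # ... shipping logic ...

    def refund_order(order):        # feature B
        if not order.get("customer_id"):
            raise ValueError("missing customer_id")
        # ... refund logic ...

    # After: with both features in hand, the commonality is refactored
    # out, and the design "emerges" as a shared helper.
    def require_customer(order):
        if not order.get("customer_id"):
            raise ValueError("missing customer_id")

    def ship_order(order):          # feature A, refactored
        require_customer(order)
        # ... shipping logic ...

    def refund_order(order):        # feature B, refactored
        require_customer(order)
        # ... refund logic ...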

There are a lot of parallels here with methodology pathology. Just adopting an agile or iterative methodology is not a silver bullet. Process does not automate success! [The Rothering Principle] TheHackerCIO has seen plenty of projects with churning iterations that never seem to get anywhere, just as he has seen Waterfall projects that can't get out of one phase and into another, or that allow phase leakage. Process alone cannot ensure the absence of pathology. The Capability Maturity Model of Carnegie Mellon seemed like a good thing, and I was a proponent of it, until I worked with outsourcing companies in a particular subcontinent (which will remain unnamed), all of whom were CMM Level 5 certified, and none of whom could deliver one usable module of code to our project. I'd love to have seen a post-mortem analysis of how their coding failures made it back into the "feedback loop" of their CMM program!

One manifestation of this attack is the TDD methodology. Taken rigorously, it directly assails the need for architecture: you are to take your assigned User Story from your Product Backlog, and after coding your first test case -- mind you, not one line of actual code has yet been written!!! -- after coding that test case, you now enter the "red" state, and are to determine ... what?
  • The best way to structure your code to allow for future eventualities? No! 
  • The best approach to code this method, so that you can reuse existing code? No!
  • The most elegant set of methods that will allow for maximal flexibility? No! 
  • A higher degree of generality to accommodate use cases that are going to come? No!
The principle adhered to here is YAGNI!

In contrast to attempting to architect, engineer, or design, one is supposed to take the most direct, clearest, fastest path to implementing the user story, at least a portion of the user story, and nothing but the user story. You are to "just do it," as the Nike slogan goes, and when your test case passes, you enter the "green" state. At this point, you are to consider refactoring the code you just wrote, to achieve whatever an Architect or designer might have attempted upfront.
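Here is a minimal sketch of that rhythm, assuming pytest and an invented user story ("invoice totals include tax"); the file and function names are mine, not from any canonical TDD text:

    # test_invoice.py -- written FIRST. Running it now fails ("red"),
    # because invoice.py does not exist yet.
    import pytest
    from invoice import total_with_tax

    def test_total_includes_tax():
        assert total_with_tax(100.00, tax_rate=0.10) == pytest.approx(110.00)

    # invoice.py -- the most direct code that turns the bar "green".
    # No generality, no anticipation of future use cases: YAGNI.
    def total_with_tax(subtotal, tax_rate):
        return subtotal * (1 + tax_rate)

Only once the test passes does the refactoring step arrive, and with it whatever small-scale design the method is ever going to get.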

This is a direct assault on the notion of architecture, or on the need for it.  There are others.

Next up: Patterns and Pattern Languages

I Remain,

TheHackerCIO