C. Keith Ray

C. Keith Ray writes about and develops software on multiple platforms and in multiple languages, including iOS® and Macintosh®.
Keith's Résumé (pdf)

Monday, December 30, 2013

Proposed Assembly Language Instructions (mid-1980's humor)

Imagine the computer room of old: non-removable disk drives the size and shape of dishwashers. Tape drives spinning like in those old black-and-white movies. Punch card input. Paper tape for input/output. Form-feed printers with paper 17 inches wide. And the computer operator in a white lab coat who controls access to the room-sized computer and loads tapes for you. In that world, these assembly-language instructions are funny.

I'm not quite old enough to have experienced that world directly. I've never actually seen a paper-tape IO device, and I never actually wrote code that used those big magnetic tapes for input/output. (I did use cassette-tape storage with a ZX-81. That's very different.)

BH Branch and Hang
TDB Transfer and Drop Bits
DO Divide and Overflow
IIB Ignore Inquiry and Branch
SRZ Subtract and Reset to Zero
PI Punch Invalid
FSRA Forms Skip and Run Away
SRSD Seek Record and Scar Disk
BST Backspace and Stretch Tape
RIRG Read Inter-Record Gap
UER Update and Erase Record
SPSW Scramble Program Status Word
EIOC Execute Invalid OpCode
EROS Erase Read-Only Storage
PBC Print and Break Chain
MLR Move and Lose Record
DMPK Destroy Memory-Protect Key
DC Divide and Conquer
EPI Execute Programmer Immediate
LCC Load and Clear Core
HCF Halt and Catch Fire
BBI Break on Blinking Indicator
BPO Branch on Power Off
AI Add Improper
ARZ Add and Reset to Zero
RSD Read and Scramble Data
RI Read Invalid
RP Read Printer
BSP Backspace Printer
MPB Move and Pitch Bits
RNR Read Noise Record
WWLR Write Wrong Length Record
RBT Rewind and Break Tape
ED Eject Disk
RW Rewind Disk
RDS Reverse Disk Spin
BD Backspace Disk
RTM Read Tape Mark
DTA Disconnect Telecommunication Adapter
STR Store Random
BKO Branch and Kill Operator
CRN Convert to Roman Numerals
FS Fire Supervisor
BRI Branch to Random Instruction
PDR Play Disk Record
POS Purge Operating System
USO Unwind Spooled Output
EPSW Erase Program Status Word
PMT Punch Magnetic Tape
AAIE Accept Apology and Ignore Errors

Laws of Computing (circa mid-1980's)

First Law of the Computer: I am a computer. I am dumber than a human, and smarter than a programmer.

Lloyde's First Law: every program contains [at least] one bug.

Eggleston's Extension Principle: Programming errors which would normally take one day to find will take five days to find if the programmer is in a hurry.

Gumperson's Lemma: The probability of a given event happening is inversely proportional to its desirability.

Weirstack's Well-Ordering principle: the data needed for yesterday's debug shot must be requested no later than noon tomorrow.

Proudfoot's Law of the Good Bet: if someone claims that you can assume the input data to be correct, ask them to promise you a dollar for every input error.

Fenster's Law of Frustration: if you write a program with no error-stops or diagnostics, you will get random numbers for your output. (This can, incidentally, be used to an advantage.) However, if you write a program with 500 error-stops or diagnostic messages, they will all occur.

The Law of the Solid Goof: In any program, the part that is most obviously beyond all need of changing is the part that is totally wrong.
Corollary A: No one you ask will see it either.
Corollary B: Anyone who stops by with unsought advice will see it immediately.

Wyllie's Law: Let n be the number of the last category-1 job run at the computer center; then the number of your job is either n+1 or n+900.

O'hane's Rule: The number of cards in your deck is inversely proportional to the amount of output your deck produces. [FYI: it was one line of code per card in ye olde days of programming.]

Mashey's First Law: if you lie to the assembler, it will get you.

Mashey's Second Law: if you have debugging statements in your program, the bugs will be scared away and it will work fine, but as soon as you take away the debugging statements, the bugs will come back.

The Law of Dependent Independence: It is foolhardy to assume that jiggling k will not diddle y, however unlikely.

The Law of Logical Incompatibility: all assumptions are false. This is especially true of obvious assumptions.

Velonis's First Law: the question is always more important than the answer.

Velonis's Second Law: when everything possible has gone wrong, things will probably get worse.

Velonis's Third Law: the necessity for providing an answer varies inversely with the amount of time the question can be evaded.


Tuesday, December 3, 2013

Diana Larsen on Changing Your Organization


(Originally posted 2003.Apr.30 Wed; links may have expired.)

Diana Larsen's article on change and learning (and XP) http://www.cutter.com/itjournal/change.html 12 pages (PDF).

She quotes Beckhard's formula for change (my paraphrasing): if dissatisfaction with the status quo, plus the desirability of change, plus a clear definition of what to do and how to do it, outweighs the resistance to change, then you can achieve the desired change.

She says to encourage change, market it by increasing awareness of the problems with the status quo (I see a risk of being called names like "negative" or "not a team player") and by communicating the desirability of getting to a better situation. "When you are implementing change, there is no such thing as too much communication."

Some of this runs counter to Jerry Weinberg and to another book whose name I've forgotten. Jerry says "don't promise more than a 10% improvement." A manager doesn't want to admit that more improvement is possible, because then they would have to admit that they were not doing a "good job" before. The forgotten book pointed out that too clear a picture of the future can be paralyzing, because people can see the perceived drawbacks of that situation too visibly while not appreciating the benefits.

She writes "XP has the advantage over many change efforts in that fast iterations build in the feedback loop for short-term success. While floundering through the chaos, nothing bolsters the participants in a change effort like the sense of progress from a quick 'win.'"

Larsen recommends Chartering to start a project, and agrees with Lindstrom and Beck on "Hold two-day professionally facilitated retrospectives each quarter." (And at project end.)

Change takes time. "Putnam points out the need for patience with change efforts as he maps out six months' worth of defect tracking and shows its consistency with Satir's [change] model. He notes that if you had an evaluation of success or failure after three months, you might have come to an erroneous conclusion."

Also check out Rick Brenner's "Fifteen Tips for Change Leaders" here: http://www.chacocanyon.com/essays/tipsforchange.shtml



Keith Ray is developing new applications for iOS® and Macintosh®. Go to Sizeography and Upstart Technology to join our mailing lists and see more about our products.

Thursday, November 28, 2013

Relationships, Traditional vs Lean Training


(Originally posted 2003.Apr.29 Tue; links may have expired.)


    Very funny web page: "Things my girlfriend and I have argued about", http://homepage.ntlworld.com/mil.millington/things.html. This is one bit my wife and I found funny: "Just for reference; if Margret returns from having her hair cut and says, 'What do you think?' and you reply, 'I'd love you whatever your hair was like,' well, that's very much The Wrong Answer, OK?"

    Russ Rufer, on the IXP mailing list, mentioned two lists comparing traditional and lean project manager training that were in a draft of Lean Software Development, but which did not make it into the final version:

Traditional Project Manager Training
  • Software Development Life Cycle (SDLC)
  • PMI Knowledge Areas
  • Schedule and Cost Estimating
  • Critical Path Analysis, PERT/Gantt Charts
  • Using Project Management Software
  • Project Scope and Change Control
  • Project Tracking and Schedule Control
  • Testing and Quality Assurance
  • Deployment and Support
  • Risk Management
  • Resource Management
  • Contract/Vendor Management

Lean Project Manager Training
  • Seeing Waste
  • Value Stream Mapping
  • Feedback
  • Iterations
  • Synchronization
  • Emergence
  • Options Thinking
  • Last Responsible Moment
  • Set-Based Development
  • Pull Systems
  • Queuing Theory
  • Cost of Delay
  • Self Determination
  • Motivation
  • Leadership
  • Expertise
  • Perceived Integrity
  • Conceptual Integrity
  • Testing
  • Refactoring
  • Contracts

    On the XP mailing list, there has been some unhappiness at the name "Industrial XP", fearing that it will divide the XP community, and perhaps weaken attempts to "sell" XP into companies.

    The IXP web page says "Industrial XP is tuned to handle the needs of large scale, mission critical and enterprise applications" which could be taken to imply that "Classic" XP (I might get some hate-mail for that name, which I didn't make up) hasn't had success in mission critical and enterprise applications (which would be wrong). I think the emphasis I heard at the BayXP presentation, that IXP is tuned for "highly political organizations", is actually the correct differentiator between IXP and "Classic" XP, but that doesn't make the best advertising copy.

    Ron Jeffries would like the IXP web page to say something like this:

    IXP is Extreme Programming.

    Extreme Programming, like any good software development method, is always adapted to the context. As a project gets more connections into the enterprise, it needs different practices, and for best results, these need to be consistent both with the enterprise needs and the principles and values of Extreme Programming.

    XP and Agile software leaders, including Industrial Logic, have been applying Extreme Programming to larger scale, mission-critical, distributed, and highly-coordinated projects for some time now. We offer here a summary of the approaches and practices that we have used, and that our colleagues have used, in adapting XP to larger-scale situations.

    And, Mark Simmonds wants us to know that DSDM 4.2 (not yet released), which blends DSDM and XP, is not the same as EnterpriseXP, which is supposed to be a web-portal to discuss ways to make XP more commercially appealing. Mark also says "One other point I'd like to clarify is that when using DSDM and XP together we do not advocate getting rid of the planning game, far from it. In fact I was delighted to see how closely the Planning Game matched the Timebox planning process I have used in DSDM projects for a number of years."




    Tuesday, November 26, 2013

    Quotes and Evolutionary Design Practices of Industrial XP


    (Originally posted 2003.Apr.27 Sun; links may have expired.)

    David Schmaltz said on the IXP mailing list: "Change never rests on the permission of the willing, but in the hearts of the brave and foolhardy." Let's hope we have supporting practices to not be too foolhardy. On the XP mailing list, Joshua wrote: "...most people react to change as if they are losing something. It's wired in to our human nature. I introduce XP into environments all the time. People think they're gonna lose something rather than gaining something with XP. I help them learn that they will be gaining a great deal."

    On the IXP mailing list, Russ Rufer has provided the list of practices of IXP's Evolutionary Design:

  • Rapid Return on Investment
  • Risk Reduction
  • Backtracking
  • Selective Automation
  • Team Intelligence
  • Walkthrough
  • Spanning System
  • Small Iterations
  • Multiplicity & Selection
  • Dead Reckoning


    I snipped quotations from this report (http://www.sdmagazine.com/documents/s=7928/sdmsdw3d/sdmsdw3d.html) onto the IXP mailing list, and no one contradicted me, so here are some stabs at defining what some of the practices may be:
    Rapid Return on Investment - Developing only what needs to be done at the moment, leaving the rest to be filled in later, allowing early releases that can prove themselves quickly.

    Risk Reduction - Striving for design simplicity is a factor for reducing risk.

    Backtracking - Stepping back to find a simpler solution to a problem. "Backtracking not only helps you to consider other alternatives, it allows you to rewrite, aggressively refactor and prune any dead code."

    Selective Automation - "Quantity bows to quality: It's not about writing tests; it's about writing good tests"

    Team Intelligence - "Developers should devote maximum attention to improving the code."

    Walkthrough - "Studying, living and breathing code is at the heart of evolutionary design"

    Spanning System - "Evolving the code from a rudimentary system that, though primitive, provides end-to-end functionality. This simple working application is a thin, vertical slice of the project that offers insight into both essential and unnecessary features. Illustrated with a simple blackjack problem. To span the system, they chose just one case with two known hands and incrementally built the system to accommodate the full deck. "

    Small Iterations - "To implement a hotel reservation system, you might first implement a program that reserves just one room before developing the whole system. These small iterations can be viewed as embryonic versions of the system, and can be taken to the customer for feedback...this is the antithesis of RAD—instead of throwing your code away, you evolve it."

    Multiplicity & Selection - "Consider a multiplicity of design and selection, like the photographer who takes 10 rolls of film to find the perfect shot. Survival of the fittest."

    Dead Reckoning - "Navigating without explicit instructions, by heading in roughly the right direction, and using feedback to make adjustments and to motivate backtracking."



    Thursday, November 21, 2013

    What is Industrial Extreme Programming?


    (Originally posted 2003.Apr.24 Thu; links may have expired.)

    At the BayXP meeting last night, Joshua Kerievsky, Russ Rufer, Somik Raha, and Tracy Bialik of Industrial Logic gave a presentation on their version of XP that they have developed over the last several years. They named it "Industrial Extreme Programming" (IXP). What follows here is taken from my notes. Any errors are my own.

    IXP is what Industrial Logic has been doing the past few years as they work with their clients in training and coaching XP projects. Joshua said he was concerned with recent "blendings" of XP and other methods (DSDM, FDD, Scrum, Crystal) because some of those blendings were throwing away XP's planning practices (one of the most valuable aspects of XP). Many of these blendings were for the most part untried and unproven, as well, though the unblended methods have records of success.
    IXP doesn't remove any of the core practices of XP (except Metaphor, and few teams have really felt like they successfully used XP's Metaphor practice). IXP builds on XP, adapting it for survival in larger companies, highly political companies, and large teams.

    Kent Beck defined four values of Extreme Programming, values he felt were essential... other values were good, but he wanted to emphasize four in particular. XP's values are Communication, Courage, Feedback, and Simplicity. Agile Modeling adopted those four and added Humility.

    Joshua and his team have chosen five values, which they not only want to emphasize, but insist that the absence of these values in the project or company will cause failure and unhappiness. The IXP values are: Communication, Simplicity, Learning, Quality, and Enjoyment.

    The value of Enjoyment is sometimes deemed controversial. Joshua considered Fun, and probably felt Enjoyment sounded better. People who enjoy their work are more likely to want to learn (I've always said that XP requires a Learning Organization). People who enjoy their work and enjoy working together are more likely to have the teamwork that XP requires.

    Quality is "we know it when we see it." Quality products, quality code, a quality process, quality people.

    These are the original XP practices that IXP includes (more or less), sometimes with modified names and meanings: [names in brackets are the original XP names, or the names I prefer over Kent's names.]

  • Sustainable Pace
  • Planning Game [Release Planning, Iteration Planning]
  • Frequent Releases [Small Releases]
  • Refactoring [Merciless Refactoring]
  • Story Test-Driven Development [Programmer Tests, Acceptance Tests, TDD]
  • Continuous Integration
  • Pairing [Pair Programming]
  • Collective [Code] Ownership
  • Coding Standard
  • Domain-Driven Design [replaces Metaphor]
  • Evolutionary Design [replaces Simple Design?]


    The name changes are for clarity and to expand things beyond just coding -- people can pair on other things besides code, collective ownership can extend beyond code.

    The new practices are:

  • Readiness Assessment
  • Viability Assessment
  • Project Community
  • Project Chartering
  • Test-Driven Management
  • Storytelling
  • Storytesting
  • Small Teams
  • Sitting Together
  • Continuous Learning
  • Iterative Usability
  • Retrospectives


    Readiness Assessment answers the question "Are they able to transition to IXP?" See http://www.industriallogic.com/xp/assessment.html.

    Viability Assessment answers the question "Is the project idea viable? Profitable? Feasible? Does the project have the necessary resources?"

    Project Community expands on Kent Beck's "Whole Team" concept. "People who are affected by the project and who effect it." (Hope I got that quote right.) This includes QA staff, middle and upper level managers, tech support staff, programmers, DBAs, customers, end-users, and probably marketing and sales. (Reference to David Schmaltz / True North Consulting's Project Community Forum.)

    Project Chartering provides the Vision and Mission, as well as the definition of who is in the Project Community. A light-weight exercise that seems to be necessary for clarifying the project's goals.

    Test-Driven Management requires objective measures be defined for the success of the project. External results like "support 100 users by December 2003." The Whole Team cooperates to achieve this goal. Also defines return on investment.

    Sustainable Pace. They considered renaming this to "Slack" (see the book by Tom DeMarco). An example of the value of slack is that it can provide the time for someone to write the tool needed to increase development speed -- too much focus on getting stories implemented quickly can be sub-optimal.

    Storytelling. I think Joshua separated this out from Planning Game in order to emphasize that story-telling is a natural way to get requirements (sometimes after a bit of coaxing). IXP stories are not necessarily "user-centered" stories, since they may address concerns of administrators, maintainers, etc. "A story is an excuse to have a conversation." Conversation is required to understand some stories -- a story that can't be understood can't be implemented. Five words for a story title was also mentioned.

    Storytesting. One word, to parallel Storytelling. This is defining the acceptance tests, but not writing them. IXP coaches help their clients in both Storytelling and Storytesting. Ideally, you do want "executable documentation" and they talked up Fit by Ward Cunningham - a framework that allows anyone using any editor capable of creating HTML tables to be able to specify acceptance tests. (Programmer help is still required to plug an application into Fit's acceptance test framework.)

    Planning Game. Joshua says that it is very weird that some of the hybrid methods are throwing away the planning game. This practice is so useful that many of Industrial Logic's clients, who did not adopt all of XP, did adopt the Planning Game. Still, the concept of "velocity" (work done per iteration) seems to elude some clients.

    Frequent Releases - frequent end-user releases -- same as XP's practice. Enables rapid return on investment. Releasing to end-users provides opportunity for feedback, to find issues in deployment, issues raised by real live users. "Without learning, feedback does no good".

    Small Teams -- for large projects, set up networks of small teams, with their own code-bases and coding rooms. A 30-person project might consist of teams as large as ten people and as small as three. Sometimes there might be a testing team and/or refactoring team that joins each of the other teams at various times and then moves on. Industrial Logic practices Pair Coaching, which does not require that both coaches be together at all times. Pair Coaching does enable coaching larger projects than a single coach could cope with.

    Sitting Together -- Joshua says that the term "Open Workspace" turns some people off, but it is the same concept. He has seen a 40-person XP team in one very large room, but that's unusual. He has also seen one or more people give up the office they worked hard to get, because pairing in the same room as other people let them focus better and learn more. Sitting together / pair-programming can be done via internet collaboration, so it isn't limited to open workspaces. They gave an example of a team split across two time zones, who decided to synchronize their hours to allow more "virtual pairing".

    Continuous Learning. I've always said that XP requires a Learning Organization, and this practice makes it explicit. Examples... Study groups who are not just allowed, but encouraged, to get together for three hours a week, during office hours, because they know this helps them advance in their careers. XP Coaches who assign practice drills to the programmers or QA testers. "Lunch break" learning groups show that management doesn't care enough about their employees' learning. An XP coach in Italy spends an hour a day teaching his junior programmers -- whose skills are rapidly advancing. I think a member of the audience said "If everybody isn't learning, then learning becomes a subversive activity." Joshua also said that "resume-driven design" tends to happen because programmers are starving to learn, but are not given opportunities to do so.

    Iterative Usability. The UI must be usable and tested regularly. Management-Tests should be tied into Iterative Usability. Redesign the UI as soon as feedback shows its flaws. Paper-based GUI design was also mentioned.

    Time was running out, so the remaining practices were discussed quickly...

    Evolutionary Design. Drives all design. Their tutorial has ten practices for this. (http://www.sdmagazine.com/documents/s=7928/sdmsdw3d/sdmsdw3d.html.)

    Coding Standard. Have one.

    Pairing. As per XP, but not just programmers.

    Collective Ownership. As per XP, supported by tests, pairing, etc.

    Retrospectives are a critical practice. Some clients are reluctant to get 40 people together for 2 or 3 days for a full project retrospective, but they should do it for the unexpected learnings that come from it. Also do mini-retrospectives each iteration.

    Refactoring. Early and often as per XP. Don't let "refactoring debt" accumulate.

    Domain Driven Design. Even though never officially a part of XP, it has been done by every good XP programmer that Joshua knows. The Model objects are kept separate from the rest of the code (GUI, etc.). The acceptance tests normally operate on the model objects, skipping the GUI. See the book on this subject at http://domainlanguage.com/. See also Eric Evans's "Ubiquitous Language".

    Story-Test-Driven-Development. First write a failing acceptance test. Then use the TDD cycle (failing programmer test, code to make the programmer test pass, refactor) until the acceptance test passes. This is "top-down" TDD, and it best avoids writing unnecessary code.
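    To make that outer/inner cycle concrete, here is a minimal C++ sketch using plain asserts. The Account class and the test names are my own invention for illustration only; the real practice uses customer-readable acceptance tests (Fit tables, for example) rather than hand-written asserts.

    #include <cassert>

    class Account {
    public:
        explicit Account(int openingCents) : balanceCents(openingCents) {}
        void deposit(int cents) { balanceCents += cents; }
        int balance() const { return balanceCents; }
    private:
        int balanceCents;
    };

    // Inner loop: a programmer test for one small behavior
    // (write it first, watch it fail, make it pass, refactor).
    void testDepositIncreasesBalance() {
        Account account(0);
        account.deposit(500);
        assert(account.balance() == 500);
    }

    // Outer loop: the story (acceptance) test, written first and failing
    // until enough programmer-test-driven code exists to satisfy it.
    void storyTestCustomerDepositsPaycheck() {
        Account account(10000);
        account.deposit(250000);
        assert(account.balance() == 260000);
    }

    int main() {
        testDepositIncreasesBalance();
        storyTestCustomerDepositsPaycheck();
        return 0;
    }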

    Continuous Integration. As per XP.

    See http://www.industrialxp.org/ for more information. Check out these papers, too: http://industriallogic.com/papers/index.html




    Tuesday, November 19, 2013

    Against Command And Control


    (Originally posted 2003.Apr.23 Wed; links may have expired.)

    Dale writes in "Dale Emery, Bureaucrat" that his department was being changed from serving others to ruling others.

    I think this increase in command and control is a recent trend in the industry, a fear reaction to the current economic climate. But remember: "The more you tighten your grip, the more star systems will slip through your fingers." -- Princess Leia, Star Wars.

    Hmm. I suppose a quote from a fictional character isn't the most effective. How about this: "If you are distressed by anything external, the pain is not due to the thing itself but to your own estimate of it; and this you have the power to revoke at any moment." -- Marcus Aelius Aurelius (121-180 AD), Roman emperor. And this: "An intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius -- and a lot of courage -- to move in the opposite direction" -- Hoshang N. Akhtar

    I'm going to have to read more about Deming. These are his "14 points":


  • 1. Create constancy toward improvement
  • 2. Adopt new philosophy
  • 3. Cease dependence on inspection for quality
  • 4. Minimize cost
  • 5. Improve constantly
  • 6. Institute job training
  • 7. Institute leadership
  • 8. Drive out fear
  • 9. Break down barriers
  • 10. Eliminate slogans
  • 11. Eliminate management by objective
  • 12. Right to pride of workmanship
  • 13. Institute self improvement program
  • 14. Accomplish transformation



    Sunday, November 17, 2013

    More on Reset, Encapsulation, Value and Immutable Objects

    (Originally posted 2003.Apr.19 Sat; links may have expired.)


    A reader suggests that I could use reference counted smart pointers to avoid problems I described previously.

    That would not fix the problem of violating encapsulation -- retaining one object's member data in multiple other objects. In fact, in this application, if we were using boost::shared_ptr or our own reference counted smart pointer, and did the 'delete'/'new' approach, the result would be multiple "platform independent document objects" in a program designed to have only one document object. The various distinct views of the document would get out of synch. (I do use boost::shared_ptr in my application, to enable passing around large image objects among image processing functions as if they were Value objects -- I don't have to worry about premature deallocation.)
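    Roughly, that image-passing idiom looks like the following sketch (illustration only; the Image struct and Invert function are invented here, not our actual code). Only the small smart pointer is copied when passing "by value", never the pixel data, and the Image is deallocated when the last holder lets go, so there is no premature deallocation.

    #include <boost/shared_ptr.hpp>
    #include <vector>

    // Illustrative stand-in for a large pixel buffer we don't want to copy.
    struct Image {
        std::vector<unsigned char> pixels;
    };

    typedef boost::shared_ptr<Image> ImageRef;

    // Processing functions take and return ImageRef as if it were a value.
    ImageRef Invert(ImageRef source) {
        ImageRef result(new Image(*source));  // copy pixels only because we mutate them
        for (std::size_t i = 0; i < result->pixels.size(); ++i)
            result->pixels[i] = static_cast<unsigned char>(255 - result->pixels[i]);
        return result;
    }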

    The same reader suggests that a Reset method isn't that bad... He writes "Functional requirements for cleaning self up logically belong in the object, not in delete/new."

    I would say that in C++, the requirement for an object cleaning self up belongs in the destructor, by definition of "destructor". Whether the coder does the same cleaning up in Reset is up to the coder.
    Probably the real reason for my dislike of Reset is that some coders using it seem to have confused "variables" with "objects". You reset a variable. You create and delete objects. In the application I was talking about, the object has effectively become a global variable, with all the problems that globals have, even though only member variables are being used.

    In some ways it is even worse, because these variables are actually pointers to a global object: those pointers can become dangling pointers if the object is deleted by what is supposed to be its sole owner. Using Reset hides the fact that this is a global... better to make it a real global variable, to avoid the dangling pointer problem, or not pass it around at all (which is what LoD recommends). The application I was describing is a single-document application, so the MFC document object is effectively a global variable/global object.

    My other point about Reset is that Value objects don't need it, and Immutable objects can't have it.
    Imagine a Dimension object. In Java or Smalltalk, you might want to make it immutable, so you can safely return Dimension member values without making copies. This assumes that you don't do lots of math on Dimension objects -- because that would require making copies. It is a choice of which is more efficient, and/or safer.

    In C++, I would implement Dimension as a Value object - one that implements the copy constructor and the assignment operator (and default constructor for STL compatibility). Returning this kind of object "by value" automatically makes a copy. If you want to reset a Dimension variable, just assign Dimension(0,0) to that variable. You minimize the number of methods to write, and you make it very clear what you're doing.
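    A bare-bones sketch of such a Dimension value class (illustration only, not production code):

    class Dimension {
    public:
        Dimension() : width(0), height(0) {}                 // default ctor for STL containers
        Dimension(int w, int h) : width(w), height(h) {}
        Dimension(const Dimension& other)                    // copy constructor
            : width(other.width), height(other.height) {}
        Dimension& operator=(const Dimension& other) {       // assignment operator
            width = other.width;
            height = other.height;
            return *this;
        }
        int Width() const  { return width; }
        int Height() const { return height; }
    private:
        int width;
        int height;
    };

    // "Resetting" is just assignment -- no Reset() member needed:
    //     size = Dimension(0, 0);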



    Thursday, November 14, 2013

    Law of Demeter and Encapsulation


    (Originally posted 2003.Apr.18 Fri; links may have expired.)

    The Law of Demeter (LoD) is a heuristic for good object encapsulation. Ignore the Law of Demeter (and other advice on encapsulation), and you'll find yourself in a debugging and refactoring hell. Applied too strictly, LoD forbids container objects and external iterators (there may be a loophole for that, which I'll get back to later).

    Yesterday, I wanted to eliminate a "reset" function in one object, and instead have its owning object delete and re-create that object. That should be straightforward, but it didn't work because encapsulation and the Law of Demeter were being violated.

    The problem here was that a pointer to a member variable of this "reset-able" object was being passed around and retained by other objects, whose lifetimes are longer than that member variable's would be if I implemented the delete-and-re-create strategy.

    I've trained myself to never give out a member variable's address, just like a TV network never gives out a TV star's address. It's dangerous. It causes the program to be more brittle. It violates encapsulation. You never know what those obsessive fans or programmers might do. Other objects become too dependent on the internal state of another object -- changing the internals of an object becomes difficult. Polymorphism is overly restricted because any "replacement" class must also give out a member variable's address, and that variable's type must be compatible. As I said, I trained myself, but sometimes I forget, and I'm working with other people who sometimes also forget about this danger.

    In C++, an alternative to returning a pointer to a member object is returning a reference to a member object, but that turns out to be just as bad. I'll illustrate with some code.

    
    
    // assume obj is a pointer to a heap-allocated object
    Member* ptr = obj->GetMemberAddress();
    delete obj;
    // ptr is now "dangling" - pointing to deleted memory.
    
    Member& ref = obj->GetMemberReference();
    delete obj;
    // ref is now "dangling" - it also refers to deleted memory.
    


    No one would write code that obviously bad, but add an event loop, lots of intervening functions and other objects, and maybe some threading, and the same thing can happen without anyone realizing it until the crash occurs.

    You could make a rule to never retain such a pointer or reference... that is an improvement (as long as no one breaks the rule), but it is awkward. It also requires that you never pass the pointer or reference into functions. It's too easy to forget where the pointer came from, and create a persistent object holding that pointer. And that dangerous "persistence" includes multi-threading as well as objects -- a thread's lifetime is even less predictable than an object's, and it becomes much harder to diagnose dangling-pointer problems in multi-threaded programs.

    So LoD forbids this:

    
    
    memberPtr = obj.GetMemberAddress();
    memberPtr->DoSomething(); // potentially changing obj's member state.
    


    It also forbids this:

    
    
    obj.GetMemberAddress()->DoSomething();
    


    What you should do is either return a copy of the member, so that the copy's lifetime is no longer under the control of "obj", or incorporate DoSomething() into the API of "obj".

    So we can write:

    
    
    memberValue = obj.GetMemberValue(); // returns copy
    memberValue.DoSomething(); // doesn't affect obj's original member state.
    


    We can even write:

    
    
    obj.GetMemberValue().DoSomething();
    


    because DoSomething is operating on a copy. NOTE: if you write this sequence of calls more than once, XP requires that you remove this duplication, most likely by incorporating DoSomething into the API of "obj".

    Returning a copy is particularly useful for 'basic' types like String, Date, and so on. The safe programmer will return a copy of a string or date member variable, so that callers cannot change the state of the member variable "behind the owner's back".
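    For example (a sketch; the EmployeeData class here is just a stand-in):

    #include <string>

    class EmployeeData {
    public:
        explicit EmployeeData(const std::string& aName) : name(aName) {}

        // Return a copy, not a pointer or reference to the member,
        // so callers can't change it behind the owner's back.
        std::string GetName() const { return name; }

        void Rename(const std::string& aName) { name = aName; }

    private:
        std::string name;
    };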

    Some of the more rabid fans of the Law of Demeter say that even operating on a copy like this is too fragile, and you really should do this:

    
    
    obj.DoSomething();
    


    The danger of over-applying this idea is that your object interfaces get really fat. You really don't want to re-implement all of the member functions of String for each of the String members in your EmployeeData class just because you think LoD tells you to. Because of this, I think of the "Law" as more of a "Recommendation".

    I assert that immutable objects are an exception to the Law of Demeter.

    Java's String class is immutable (once the object is created, it can't be changed), so Java programmers don't have to make copies of String member variables in their accessor functions.

    Some people have recommended declaring mutable and immutable interfaces, declaring the mutable object to implement both of those interfaces, and declaring this "accessor" function's return type to be just the immutable interface, so that you can return a mutable object through that immutable interface. Of course, a programmer could "down-cast" back to the mutable type, but then all sorts of bad things can be done if you work at it. Probably better to create a copy of the object to avoid the down-casting trick. And, in languages like Smalltalk and Python, you don't have variable and function type declarations to make this immutable-interface idea work (though you could create and return an Immutable Adapter to enclose your mutable member).
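    In C++, the same idea can be sketched with an abstract read-only interface (my illustration; the names are invented):

    #include <string>

    class ReadableName {                         // the "immutable" interface
    public:
        virtual ~ReadableName() {}
        virtual std::string Get() const = 0;
    };

    class WritableName : public ReadableName {   // the mutable interface
    public:
        virtual void Set(const std::string& value) = 0;
    };

    class Name : public WritableName {           // the real object implements both
    public:
        virtual std::string Get() const { return value; }
        virtual void Set(const std::string& v) { value = v; }
    private:
        std::string value;
    };

    // An accessor can then expose only the read-only view:
    //     const ReadableName& GetName() const { return name; }
    // though, as noted above, a determined caller can still down-cast.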

    And what about that loophole for containers and external iterators?

    LoD says you can't return references or pointers to your object's own member data, but containers are given data to hold, and so they can return that data.

    External Iterators are new objects, not member objects, created when you call a function that returns the iterator.

    So what am I going to do about my hard-to-modify program?

    Well, changing it to conform to LoD is going to be at least a day's worth of work. And we violated LoD on purpose, though now I regret that decision. We have an MFC Document object that owns a platform-independent "document" object. We pass the MFC Document object to other MFC classes, and pass the platform-independent "document" object around to the rest of our code. But we're not consistent about that.

    To make this conform to LoD, the platform-independent object must never be passed around to other objects at all -- everywhere we currently do this, we should be passing around the MFC Document object instead. That means that the complete interface of the platform-independent "document" object must be implemented in the MFC Document object, delegating to the member object. However, the "type" we pass into various parts of our program doesn't have to always be the MFC Document type; it can be a base-class type -- the platform-independent "document" interface -- we just have to declare the MFC Document type to subclass from that interface.
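    In outline, the delegation looks something like this (a much-simplified sketch; AppDocument stands in for our MFC Document class, which really derives from CDocument, and the method names are invented):

    class IDocument {                           // the platform-independent interface
    public:
        virtual ~IDocument() {}
        virtual void AddPage() = 0;
        virtual int  PageCount() const = 0;
    };

    class CoreDocument : public IDocument {     // platform-independent implementation
    public:
        CoreDocument() : pages(0) {}
        virtual void AddPage() { ++pages; }
        virtual int  PageCount() const { return pages; }
    private:
        int pages;
    };

    class AppDocument : public IDocument {      // the platform-specific wrapper
    public:
        virtual void AddPage() { core.AddPage(); }           // delegate...
        virtual int  PageCount() const { return core.PageCount(); }
    private:
        CoreDocument core;   // ...and never hand out a pointer or reference to it
    };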

    Then, and only then, could I have the MFC Document object have full control over the lifetime of its member objects (and even then, I have to be careful about threads - I can't delete an object if another thread is still using it.)

    And why would I want to eliminate a "reset" member function and instead delete/re-create the object? Because it's too easy for a reset member function to forget to clean up all of its state. I object to "reset" functions generally, for both small objects and large objects like documents. For small objects, I prefer immutable objects that I can easily recreate on demand, because then copies don't have to be made in accessor methods, and immutable objects are more easily made multi-thread safe.

    For future thinking: how can LoD work with threads? Can we think of the thread as an object? Some platforms do.

    Here's a "formal" version of LoD: A method "M" of an object "O" should invoke only the methods of the following kinds of objects:

  • itself
  • its direct member objects
  • its parameters
  • any objects it creates/instantiates
  • [my extension] any objects created but not retained by a member object or parameter


    Here's the informal version attributed to Peter Van Rooijen:

  • You can play with yourself.
  • You can play with your own toys (but you can't take them apart).
  • You can play with toys that were given to you.
  • And you can play with toys you've made yourself.


    See http://c2.com/cgi/wiki?LawOfDemeter for more discussion on this topic.

    See http://www.ccs.neu.edu/home/lieber/LoD.html for a list of LoD links.




    Tuesday, November 12, 2013

    Frequent Releases to End User

    (Originally posted 2003.Apr.14 Mon; links may have expired.)


    One of the best books on Extreme Programming is Planning Extreme Programming by Kent Beck and Martin Fowler. It doesn't tell the customer how to gather requirements, but it gives lots of advice on how to do XP from the customer side... particularly Release Planning and Iteration Planning, and why, of the four variables (Time, Scope, Resources, and Quality), they suggest only letting Scope vary.


    Kent Beck's prose is highly factored -- he tends to say things once and only once -- so you have to read carefully. He doesn't explicitly define what a "release" is, but there are clues that he really does mean releasing to the end-user... when he talks about releasing to internal testing, or to the Customer (standing in for the end-user), he uses the phrase "interim, internal release". Here are a few quotes from the book about how often to release:

    Often the dates for a project come from outside the company:
    * The date on the contract.
    * COMDEX.
    * When the VC money runs out.
    Even if the date of the next release is internally generated, it will be set for business reasons. You want to release often to stay ahead of your competition, but if you release too often, you won't ever have enough new functionality to merit a press release, a new round of sales calls, or champagne for the programmers. (page 40)

    Short Releases
    Sometimes you can release much more often, maybe every iteration[....] However there is danger to never having "a release". The customer may lose strategic vision of where the software needs to go. (page 79)

    Long Releases
    What happens if you can only release once a year? [...] Another case is shrink-wrap software [...] look for a way to send intermediate releases to those customers that may be more interested in these versions. Call them service packs or something[....] Frequent releases are good, but if you can't release frequently you don't have to abandon XP completely. You may need to create interim releases that are only available internally. (page 80) [Bold emphasis is mine.]





    Thursday, November 7, 2013

    Proposal: an Information Radiator for Feature Requests

    (Originally posted 2003.Apr.13 Sun)


    Consider tracking feature requests using a cork board... (one cork board per product). Each time a feature request comes in, compare it to what's already on the cork board. If it's not there, find a place on the cork board for it, write it on a sticky note or index card, and post it. If it's already there, add a hash mark to indicate multiple requests for that feature.

    At some point, start grouping the feature requests that are on the cork board... this set of 9 feature requests can be grouped under "scriptability", this set can be grouped under "import/export", this group can be grouped under XX or YY.

    Imagine this cork board visible to everyone at the company... other people could add cards to the cork board because they see a missing area. The VP could add a comment to the XX grouping, saying that these features are not valuable to the more profitable customer base they want to target, only to those laggards that haven't upgraded to the latest product version. Customer Support representatives could add to the cork board those off-hand comments from end-users that they didn't think important enough to file as formal feature requests.

    Imagine being able to see patterns and order emerge out of the chaos of individual feature requests... someone sees this and says "I have an idea for a new product line!"



    Tuesday, November 5, 2013

    Bits and Pieces

    (Originally posted 2003.Apr.12 Sat; links may have expired.)


    Yesterday afternoon, the Bay Area weather was so nice that it took some effort to drag myself to work. I had previously signed up for and worked on a story that I knew I could finish off by the end of the day, and I didn't want to leave it unfinished and lose the three story points from my team's velocity. So I did the hard thing and got the work done. And today, Saturday, it is raining. Bleh.

    I'm sure that the weekly traffic patterns affect the weather -- all that automobile pollution keeps the area warmer than normal, and then on Saturday the cold and rain comes in because there's little traffic. Or something like that. I've read that California gets some of the pollution generated by China, so maybe Beijing's weekly traffic/pollution patterns are at fault.

    Another bit on agile requirements... XP doesn't say how to gather requirements. RUP does. I attended one of the presentations by Dan Rawsthorne of Net Objectives, a presentation called Comparing RUP, XP, and Scrum: Mixing a Process Cocktail for Your Team (check that link for their slides in PDF form). Dan recommends combining RUP's Use Case creation techniques with Net Objectives' own "Ever Unfolding Story" technique to create stories for XP's release plan/iteration plan. He says that a typical Use Case can create up to forty XP stories. Maybe ten of the stories are the "core" of the Use Case and can be delivered in one or two iterations. Another ten may be important for the final release but of lower priority. The other twenty stories may not need to be implemented at all -- but a typical heavyweight RUP project probably would have implemented them. If you have questions about this or the "Ever Unfolding Story", please talk to Dan at www.netobjectives.com... I'm just reporting here.
    I do want Dan to correct one of the slides... the one about XP and the "business levels". The slide about RUP shows RUP "touching" the business level at "kickoff", "delivery 1.0", "delivery 2.0", and so on... Dan asserted that XP doesn't "touch" the business level, and drew the slide on page 22 of the PDF file that way. That's WRONG. XP's "Small Releases" correspond exactly to the RUP process slide's "delivery 1.0" and "delivery 2.0". Those releases every three or six months are at the business level, to get feedback from actual users.

    I also attended a one-day class on Project Management recently. The simulated project was waterfall, but the word "waterfall" was never mentioned. There was no mention of the possibilities for iteration, incremental development, or even feedback. I don't think any of those terms were in the vocabulary taught as part of class. The instructor had never even heard of Rational Unified Process ("Rational Who?"), much less Extreme Programming, Scrum, Feature Driven Development, etc. That's scary.

    So what did I learn in this project management class? Well, the project process has Initiating, Planning, Executing, Controlling, and Closing. No mention of maintenance. Not much about people issues.

    Planning included the "Work Breakdown Structure" (a rather unfortunate, overly task-oriented breakdown of features that does have some resemblance to XP's "Release Plan", sorta). We saw a "Responsibility Matrix" correlating tasks with "do, review, and approve" columns. We learned about finding the Critical Path on a Gantt Chart. A little bit of Risk Management and mitigation planning. And of course, "resource" allocation and management. [I'm having a vision a la Soylent Green: "Resources are people!"]

    The "trade-off triangle" of Time, Cost, and Scope/Requirements was mentioned. No mention of the zeroth law of software engineering: "If you do not care about quality, you can meet any other requirement."

    We learned a lot of vocabulary, with emphasis on "crashing" (adding resources to decrease the project schedule -- hmm -- like getting nine women to have a baby in one month? No mention of Brooks's law) and "fast-tracking" (defined as "compressing the schedule by overlapping activities normally performed in sequence").

    My hand-out from this one-day class says "the PMBOK is the standard knowledge source for project management," and lists a bunch of books. Some of the books I've heard of -- Critical Chain by Goldratt; some of them I've read -- The Deadline by DeMarco (I wish he could go back and write an "agile" second edition or sequel); some of them sound good -- Customer-Driven Project Management by Barkley & Saylor; and some of them are just scary -- The Complete Idiot's Guide to Project Management.




    Thursday, October 31, 2013

    Agile Writing


    (Originally posted 2003.Apr.07 Mon; links may have expired.)

    A writing collaboration is taking place in the Agile Modeling mailing list. Scott Ambler posted a draft of a paper he's writing, about the rights and responsibilities of stakeholders and developers.

    Scott: So, I'd appreciate any feedback that you might have. Thanks in advance.

    Ron Jeffries: Here it is. It was wise to thank me in advance, because in a few minutes you may no longer feel that way.

    Anne & Larry Brunelle's idea of no longer separating developers from the other stakeholders has resulted in a new paradigm. Ron's combined-rights draft has phrases like:



  • You have the right and responsibility to show and to observe progress in a running system, proven to work by repeatable tests specified by customers and programmers alike.



  • You have the right and responsibility to know what is needed, with clear declarations of priority. You have the right and responsibility to contribute as much as you can to determining needs and priorities.



  • You have the right and responsibility to ask for and receive help from everyone on the project, peers, superiors, subordinates, programmers, or customers. You have the responsibility to give help when it is asked for.



    Tuesday, October 29, 2013

    Incremental Requirements

    (Originally posted 2003.Apr.05 Sat; links may have expired.)


    The great thing about agile projects is that you don't have to have ALL the requirements up-front, just enough to get started. Once you see the software working (minimally), you can change your mind about the requirements that are not yet implemented or that have already been implemented.

    Consider a non-XP project, one that is not incrementally implementing features. If the customer decides in the middle of the project to change some requirements, the project may have to throw away lots of partially-completed code.

    In the middle of an XP project, a requirement is either implemented or not -- the only time a requirement may be partially implemented is during the two-week iteration it was scheduled for. Any requirements that haven't been implemented yet can be changed at zero cost. A change to requirements that have already been implemented is essentially the same as adding a new requirement -- it will cost something to re-implement, and an equivalent amount of other non-implemented requirements should be dropped to keep the project on schedule.

    Developers are the genie in the magic lamp - delivering any features that the customer wishes - so long as the customer is willing to pay the price (time and money). With this power comes rights and responsibilities.

    In Extreme Programming, rights and responsibilities are divided between "business" and "developers". The "business" side has the business analysts, the QA testers, domain experts, users, stakeholders, and so on, all wrapped up in the single word "Customer". They are all involved in defining, specifying, and creating tests for the requirements. If you have business analysts on your team, it is their job to help the stakeholders come up with requirements.

    The "developer" side has the programmers, DBAs, and so on. They translate requirements into code, databases, web servers, and so on. User interface experts may be on either side, depending on whether they do programming as well as user interface design.

    The business side is responsible for the requirements, the wishes they make of the genie, because they are responsible for running their business. To avoid this responsibility is to let the developers run your business. Unless the business is making tools for other software developers, it's unlikely that the developers have the expertise to satisfy your customers and make a profit.

    The responsibilities of the two sides are clearly divided to reduce risks: if the business guys (who are not programmers) start telling the developers how to do programming, they risk technical failure. If the programmers tell the business guys what features should be implemented, there is a risk that project will not meet business needs.

    When talking about what to ask the genie in the magic lamp, I've never seen anyone with a lack of ideas. They may not know HOW to get what they want, but they always know enough of what they want to start a conversation. Like the following:


  • "Wouldn't it be great if we could do x?"
  • "Could you make it so that y happens when I do z?"
  • "I want this..." (drawing on whiteboard).
  • "Our customers are doing this manually, and we want to sell them software that does it for them."
  • "This is what I do all day, I want it simpler and faster."
  • "Our competitor has product x with feature y, we need something like that."

    In XP a requirement is expressed as one or more stories. I've heard that one UML Use Case, RUP style, can translate into 40 XP stories, of which 20 stories could be optional. An XP story is a promise for a future conversation about the details of that requirement. You don't have to think them up all at once, you don't have to have all the details up-front, and you don't have to write up a thick document. You do talk with the developers, other stakeholders, and business analysts if you have them.

    Recommended reading:

    Exploring Requirements: Quality Before Design by Gause and Weinberg

    Planning Extreme Programming by Beck and Fowler




    Thursday, October 24, 2013

    Leadership is not Herding


    (Originally posted 2003.Apr.04 Fri; links may have expired.)

    I don't like hearing people refer to managing programmers as "herding cats". Probably because I am a programmer and I have cats (for a short time, living in a really nice house in a suburb in Texas, my wife and I had five cats).

    It is easy to lead cats... in fact, when I'm trying to take a picture of one, it's hard to keep him from following me around. Trying to make cats go somewhere, without leading them, can be very hard, if you don't consider what motivates them.

    So when I hear someone comparing managing programmers to "herding cats", it makes me think that they're trying to make them go somewhere, not leading them.

    A particularly good book on project leadership is Powerful Project Leadership by Wayne Strider. (Order direct or via Amazon.) It's good enough that I just bought a second copy for lending to others. This book's first lessons in leadership are to help you become aware of yourself, others, and your shared context. In fact, the book is divided into three divisions: leading yourself, leading others, and shaping your project's context.

    Check out a sample of Wayne's writing here: Leading Projects in Stressful and Chaotic Situations.




    Tuesday, October 22, 2013

    Summary of today's news from Apple


    New versions of iPhoto, iMovie, GarageBand, Pages, Keynote, and Numbers are free with new hardware. Also new collaborative remote "cloud" editing in some (or all?) of these apps.

    "If you’ve recently purchased a Mac that did not include the latest versions of Pages, Numbers, and Keynote, you may be eligible to download these apps for free."

    New MacBook Pro with Retina displays

    • 2560 x 1600 pixels 13-inch $1299 
    • 2880 x 1800 pixels 15-inch $1999
    • 2.4 GHz dual core processors up to 2.3 GHz quad core processors
    • 4 GB RAM up to 16 GB RAM

    "flash" storage (solid-state pseudo hard disk) 128 GB, 256 GB, 512 GB
    says "configurable" to 512 GB or 1 TB "flash storage" (3rd party upgrades?)

    Non-Retina iPads (large and mini) = 1024 x 768 pixels
    iPad retina (large size and mini size) = 2048 x 1536 pixels

    Original iPad mini

    • 7.9-inch display, 32-bit cpu
    • original iPad mini with wifi at $299,
    • original iPad mini with cellular-data & wifi at $429,

    iPad mini Retina

    • 7.9-inch display, 64-bit cpu, 
    • iPad mini Retina at $399 with wifi,
    • iPad mini Retina with cellular-data & wifi at $529,
    • iPad mini Retina memory available: 16 GB, 32 GB, 64 GB, 128 GB
    • maxed out 128 GB cell + wifi = $829

    Original iPad 2

    • 9.7-inch display, 32-bit cpu.
    • original iPad 2 with wifi at $399,
    • iPad 2 with cellular-data & wifi at $529 (this is the maxed-out model)
    • iPad 2 memory available: ONLY 16 GB 

    iPad Air

    • 9.7-inch display, 64-bit cpu.
    • iPad Air has a Retina display, weighs one pound, and has a smaller bezel, a 64-bit processor, and faster wifi.
    • iPad Air with wifi at $499
    • iPad Air with cellular-data & wifi at $629
    • iPad Air memory available: 16 GB, 32 GB, 64 GB, 128 GB.
    • maxed-out 128 GB cell + wifi iPad Air = $929


    Training? We don't need no stinking training!

    (Originally posted 2003.Apr.03 Thu)


    A manager told me the other day that his annual budget for training was $800. For him and the four or five people under him.

    With most classes and conferences costing over $1200, there's no way that he or anyone working for him will get ANY training this year.

    What's up with that?



    Thursday, October 17, 2013

    Reuse and Modularity

    (Originally posted 2003.Apr.02 Wed; links may have expired.)


    TWELVE|71 wrote a bit about reuse and modularity. "Modularity is the idea of having a tire that can be swapped on and off without affecting the car. Reuse is the idea that we can take a tire from an old car and use it without thought on a new car."

    Off the top of my head, I think software reuse projects often fail because (1) the needs of all the client projects are not considered when making the 'reusable' module, (2) the module is not documented well enough, (3) people on the client projects are motivated to not reuse modules, perhaps because they are rewarded for hours worked or lines of code written (or some other counter-productive measure), or (4) they are under time pressure, and don't have time to rewrite their app to use a module that they seem to be able to do without.

    Take the reverse approach. Instead of creating a module and telling projects to use it, create a small "Extreme Reuse" team: one to four people who join projects to (1) help them get things done, (2) look for code that could be extracted for use by other teams, and (3) refactor project code to create and reuse shared parts. The Extreme Reuse team needs to join several projects before extracting code from any of them, in order to know all of their needs. How can a few people join a team and be immediately helpful and productive? Pair programming.

    Bryan Dollery has described this and other aspects of how to reuse parts here.

    I don't know anyone trying a "Reuse Team"; please let me know if you are. My small team works on several projects. We have code common to these projects in some separate directories, and it is mostly unit-tested. (Yes, we're not 100% pure XP.) We make minor changes to this common code, and avoid breaking the clients by continuing to pass the unit tests, and by building and testing the client projects.
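
    As a sketch of what I mean (the function and test below are invented for illustration; our real shared code is different), a shared utility and its plain assert-based test might look like this in C++:

        // Hypothetical shared utility; stands in for the common code our projects share.
        #include <cassert>

        inline int clampToRange(int value, int low, int high)
        {
            if (value < low)  return low;
            if (value > high) return high;
            return value;
        }

        // Plain assert-based unit test. Any change to the shared code has to keep this
        // passing before we rebuild and re-test the client projects that use it.
        int main()
        {
            assert(clampToRange(5, 0, 10) == 5);    // already in range
            assert(clampToRange(-3, 0, 10) == 0);   // clamped to the lower bound
            assert(clampToRange(42, 0, 10) == 10);  // clamped to the upper bound
            return 0;
        }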


    Tuesday, October 15, 2013

    Speaking of Objects as cooperating, independent, agents...

    (Originally posted 2003.Apr.01 Tue; links may have expired.)


    Chris Uppal, on the Dolphin Smalltalk newsgroup, writes about his "Great Leap Forward from Java to Dolphin Smalltalk." He describes the big difference...

    If you are like me, then you are currently thinking of a big difference between Smalltalk and Java being that Java stores code in files, whereas Smalltalk keeps it in the image. That's sort of true, and I'll get back to it, but, for a minute, just forget about code, it's not important (really!). What matters is objects. 
    The image is the place where the objects live. Technically, the image is a garbage-collected heap that can be saved to file, and later restored, thus saving and resuming the state of a computation. Technically that's true, but it isn't at all a helpful way to think about it. A more organic metaphor works much better. I think of the image as a deep, murky, pond where objects move around in the depths like fish. It's an important part of the metaphor that the objects are independent of me. Even if I designed and wrote the classes, once an object has been created it has an independent existence. I can talk to it, I can ask it to perform operations, but it is separate from me. In a sense it is "my" object, but it is only "mine" in the same way that a pet, or a rose bush, or a table, could be "mine".
    The image is where the objects live. Not the code, the objects. We'll get back to the code in due course, but not yet. The Smalltalk environment is just a place where you can talk to objects; no more, no less. Oh, sure its got class browsers, debuggers, editors, etc, but that's all tinsel. What matters is that it is a place where you can interact with the objects.
    I'll get back to the "tinsel" later too, but for now, I want to talk about the one part of the environment that isn't just a productivity aid: the workspaces. Workspaces are the medium through which you talk to objects. You can describe workspaces as "containing snippets of code" which you execute, but IMO that's exactly the wrong way to think of it. A better picture (slightly tongue-in-cheek) is as a kind of singles bar, where you can meet objects, talk to them, check them out, get to know them. Each workspace has a number of objects that are (temporarily) living there; they are the values of the variables in the workspace. In most cases they'll die when you close the workspace, but until you do they'll survive and you can talk to them. I keep some workspaces hanging around for days if they contain objects that are important for what I'm doing. The way you "talk" is by sending messages written in the Smalltalk programming language, but that's almost incidental. The important thing is that you are communicating with them using an interactive text-based medium, like using an IRC [chat] channel. [...]
    Another way of interacting with objects is to use the Inspector(s). They give you a much more nuts-and-bolts, low-level, view of the object -- a more intimate view, if you like. I, personally, don't think that the Smalltalk world has yet woken up to what inspectors could be, but the current implementations (like "flipper" in Dolphin) do at least allow you to see inside the objects.
    I wish C++ had inspectors... but C++ throws away a lot of information when you compile the code. In the crappy debuggers that C++ programmers have to live with, we often can't view the run-time contents of an object properly and easily. The VC++ debugger for example, should be able to know that the object pointed to by a base-class pointer is actually an instance of a derived class, but it doesn't display the member variables that belong to the derived class -- just the member variables of the base class.
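
    To make that concrete, here's a tiny C++ sketch (the class names are invented): the pointer's static type is Shape*, so a debugger that only looks at the static type shows just the Shape members, even though the object on the heap is really a Circle.

        #include <iostream>

        struct Shape {
            int x = 0;
            int y = 0;
            virtual ~Shape() {}
        };

        struct Circle : Shape {
            int radius = 7;  // a debugger keyed to the static type Shape* may not show this field
        };

        int main()
        {
            Shape* s = new Circle;  // static type Shape*, dynamic type Circle
            // The information is there at run time (via the vtable); dynamic_cast can recover it:
            if (Circle* c = dynamic_cast<Circle*>(s))
                std::cout << "really a Circle with radius " << c->radius << "\n";
            delete s;
            return 0;
        }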


    An image will contain many objects, some long lived (living, perhaps, for decades), most very short lived indeed. Some will be simple or trivial, like Strings and Points. Others will have complicated internal structures, and/or complicated behaviour. But they are all objects, and they all live in the image, and you talk to them in workspaces.


    A Squeak Smalltalk image contains objects that have been "alive" since 1984 or earlier, because Squeak was derived from a Xerox/Apple implementation of Smalltalk-80. The objects in the image "sleep" when saved to disk, and awaken when restored from disk. This means that some objects in Squeak have been alive longer than some programmers have been.



    Classes are one particularly interesting kind of object. Remember I'm still not talking about code (that comes later), I'm talking about the objects called classes. Just like any other objects, you can invite them to join you in a workspace:
    [I modified the code slightly here...]


    aclass := String.  
    and then you can use the magical Smalltalk object-oriented IRC to talk to them:
    aclass name. "--> #String" aclass allSubclasses size. "--> 3"
    and so on. So classes are objects, and they live in the image.

    [...]
    Code is how we tell objects how to behave. It's text in the Smalltalk programming language. We're programmers so we care about code; when we wrote the tools for looking at objects, we naturally designed the tools so that we could also see the associated source code. For instance our special tool for looking at classes (the class hierarchy browser) allows us to see the source of the methods, to change the source and recompile, etc. That's natural for us as programmers. If we weren't programmers then we'd want different tools, and we'd be interested in talking to different objects. Such systems, built for non-programmers, are called "applications", but they are still just Smalltalk -- tools for talking to objects that live in an image. (A big difference is that the "image" of an application is typically not persistent, unlike the image of the IDE).
    Back to code. Granted that the most important thing is the objects and how they behave, we still do care about the code. We want to organise it, back it up, put it under source code control, etc. A class is an object that lives in the image, but the source code for that class is something else. For all sorts of reasons, we want to keep that outside the image. The way that Dolphin organises source-code is via Packages. A package is a collection of the source code for classes and methods (and a few other things too, which don't matter here) that is kept outside the image in one or more files. You can load the package into the image, which will create an actual Package object, and class objects corresponding to the source-code. Or you can "uninstall" the package, which really means killing the Package object and the Class objects.
    So a package is just a way of collecting related source-code together. [...] The package mechanism is relatively simple; it could be improved, but I find it adequate for my needs. Package files are text files, you can edit them with vi, or notepad, or whatever. Occasionally I do that if I want to make particularly sweeping changes to the source. Of course, if you do that then you have to install the changed version into the image before it'll do anything useful.
    Notice how very different this way of thinking is from the way that even the best Java IDEs encourage you to think. When I started out in Smalltalk I was thinking of the IDE as if it was a Java IDE. I though of it as a tool that allowed me to write code, and had features to allow me to browse and test the code. After a year or so I realised that I'd turned the picture upside down completely, and in the process had revised my conception of what Object-Oriented programming is all about. As a Java (or C++) programmer I had pretty much thought my .java (and .cpp) files were the classes, and I thought that creating classes was what programming was about. I now think of the objects as being the important thing, and the classes as very secondary, hardly more than an implementation detail.
    I feel that that has made me a better programmer. Of course it's not possible to know for sure, but if it has, then it all comes down to Smalltalk's workspaces...

    The original message can be found here.


    Thursday, October 10, 2013

    Chaos, Order, and Software Development

    (Originally posted 2003.Mar.31 Mon; links may have expired.)

    Kevin Kelly published Out Of Control: The New Biology of Machines, Social Systems, and the Economic World in 1994.

    Jim Highsmith (James A. Highsmith III) wrote Adaptive Software Development: A Collaborative Approach to Managing Complex Systems, published in 2000, before he read Extreme Programming Explained: Embrace Change by Kent Beck, published in 1999.

    The book on Scrum was published in 2001, and Highsmith's Agile Software Development Ecosystems was published in 2002.

    What do these books have in common? Order (or the semblance of purpose) emerging from independent agents in situations that one might expect to be purely chaotic.

    Kevin Kelly's book covers the most ground, of course, from bee hives and ant colonies, boot-strapping ecosystems, competing/cooperating agents within our brains, distributed control within robots, evolution and genetics, genetic algorithms, and so on. The theme is that order can arise "by itself" (it emerges, rather than being designed up-front).

    The agile software authors are saying that a good software product can arise with a minimum of planning up-front. However, creating this order is not 'random'; it arises from the constant thinking and re-thinking of the people involved throughout the life of the project.

    Some people have objected to a lack of up-front planning or designing, making the analogy to a "hill-climbing" algorithm that gets stuck on a local maximum (getting stuck on a small hill when the goal is a larger hill, a valley away). The difference, of course, is that the hill-climbing algorithm is stupid, whereas many people are smart. People can see the big picture, and can do a little planning of refactorings to get from the current design to the desired design, even though the refactorings may temporarily go through a poor design along the way (but with all tests still passing!)
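
    For illustration, here is a toy hill-climbing sketch in C++ (the landscape and numbers are made up): the greedy climber stops on the small hill with height 3 and never reaches the taller hill with height 9 across the valley.

        #include <cstddef>
        #include <iostream>
        #include <vector>

        // Toy greedy hill-climber on a 1-D landscape with two hills.
        int main()
        {
            std::vector<int> height = { 1, 2, 3, 2, 1, 0, 4, 7, 9, 8 };
            std::size_t pos = 1;  // start on the slope of the small hill

            while (true) {
                std::size_t next = pos;
                if (pos > 0 && height[pos - 1] > height[next]) next = pos - 1;
                if (pos + 1 < height.size() && height[pos + 1] > height[next]) next = pos + 1;
                if (next == pos) break;  // no neighbor is higher: stuck
                pos = next;
            }
            std::cout << "stuck at position " << pos
                      << " with height " << height[pos] << "\n";  // height 3, not 9
            return 0;
        }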

    In a recent web-search, I came upon a paper written by a member of a group researching independent software agents. The paper was about how Extreme Programming is helping them write software successfully, allowing new programmers to become productive members of the team quickly, and how the process allows them to increase software re-use. That paper is here: Using Extreme Programming for Knowledge Transfer. Using XP to continue research like that described by Kevin Kelly... I don't have my copy of XP Explained or Adaptive Software Development handy, but I would expect Kelly's book, or books he references, to be referenced in the bibliographies of Kent's and Highsmith's books. I enjoy feedback loops like that.

    Tuesday, October 8, 2013

    Daily Standup Meetings


    (Originally posted 2003.Mar.29 Sat; links may have expired.)

    Laurent Bossavit writes: I regularly hear from people who have experimented with daily meetings such as those that Scrum ("Daily Scrum") or XP ("Stand-up meeting") recommend. With no exceptions, everyone says that such meetings are incredibly effective in getting issues solved quickly, gathering momentum within the team, etc. My experience is about the same, although I prefer brief, informal "huddles" to formal meetings.

    With a three person team, I tried daily standup meetings, but didn't find them that useful, since we were co-located anyway. An attempt at doing combined standup meetings with two unrelated teams was even less useful. It seems that having a common goal and related work makes daily meetings productive. That's probably why weekly "status meetings" of unrelated teams reporting to one manager are very unproductive. Like Laurent, I prefer brief huddles whenever information needs to be shared.

    Thursday, October 3, 2013

    Do the Hard Thing

    (Originally posted 2003.Mar.28 Fri; links may have expired.)


    On the TV show Boston Public, Principal Harper was telling a student something like "when you have a choice, pick the hard choice. Nine times out of ten, it will be the right one." That rings true.

    In developing software, writing tests is hard, so XP does it all the time. Design is hard to get right, so XP does it all the time. Communicating requirements is hard, so XP does it all the time (by talking to the person playing the role of Customer.) By doing these things all the time, we make them easy.

    In corporations, telling someone the truth can be hard; standing up against peer and management pressure to avoid reality can be hard. Doing the right thing, tactfully, is hard, but more rewarding than living in the hell caused by ignoring reality.

    Tuesday, October 1, 2013

    Please Ignore the Elephant in Your Living Room

    (Originally posted 2003.Mar.27 Thu; links may have expired.)

    The subject of legacy code appears on the Test Driven Development mailing list periodically. My advice is to use test driven development for writing new code or bug fixes, and leave the rest of the legacy code alone, writing tests for old code only when you need the support for refactoring.


    David Brady, on that mailing list, writes: "being behind schedule with 250,000 lines of monolithic, untested, difficult-to-test code is the PERFECT time to start learning how to test. You just have to ease into it one step at a time and be prepared for a long journey." My emphasis on the "one step at a time."
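
    For concreteness, here's a minimal C++ sketch of that first step (the function and its behavior are invented): a "characterization" test written just before touching old code, pinning down what the code does today, right or wrong, so a later refactoring can't silently change it.

        #include <cassert>
        #include <cstdio>
        #include <string>

        // Stand-in for a routine buried somewhere in the legacy code base; in real life
        // we would leave it alone and only observe what it returns today.
        std::string formatAccountCode(int region, int serial)
        {
            char buffer[16];
            std::snprintf(buffer, sizeof(buffer), "%02d-%05d", region, serial);
            return std::string(buffer);
        }

        // Characterization tests: they record the current behavior (correct or not),
        // giving us the support we need before refactoring this corner of the code.
        int main()
        {
            assert(formatAccountCode(3, 42) == "03-00042");  // observed output today
            assert(formatAccountCode(0, 0)  == "00-00000");  // observed output today
            return 0;
        }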

    Thursday, September 26, 2013

    Technical Reviews, More on Test Driven Design

    (Originally posted 2003.Mar.26 Wed; links may have expired.)


    Scott's essay on TDD is up here and Ron Jeffries's critique is there. I'll have more to say about it after reading it a couple of times.

    That recent issue of STQE Magazine also has a great short essay by Jerry Weinberg on technical reviews being a learning accelerator. One thing I want to point out is that junior programmers should be reviewing the work of master programmers, not necessarily to find errors, but to learn from the master - and of course, master programmers can make mistakes too, which are often visible to junior programmers as well as other master programmers. If the master programmer is humble, he/she can learn from a junior programmer, too.

    I've been reading Weinberg and Freedman's book on the subject of technical reviews, Handbook of Walkthroughs, Inspections and Technical Reviews (which was written in FAQ style - question/answer), and was a bit surprised by their recommendation that when people are being trained on how to conduct technical (code) reviews, they should have some practice at conducting a review in the presence of hidden agendas.

    Some examples of hidden agendas in code reviews: Person A wants to impress person B. Person B wants to make person C look bad. Person C needs to go to the restroom, but doesn't want to say so. Person D is distracted by the illness of his/her spouse.

    Only Weinberg would write about hidden agendas in code reviews - too many writers and books on software development practices seem to assume that people act like machines.

    On Weinberg's SHAPE forum, Charlie Adams wrote: "When people are getting tense about their software being reviewed, use Jerry's phrase, 'Yes, I trust your honesty, but I don't trust your infallibility. I don't trust anyone's infallibility.' (QSM 4: page 220) In my experience this has always calmed the atmosphere and allowed us to examine the code rather than the developer."

    While I have done code reviews, both informal and formal, I prefer pair programming. It combines reviews with collaborative design, testing, and coding. Rather than go into all the reasons why pair programming is good, I'll point you to www.pairprogramming.com and Pair Programming Illuminated.


    Tuesday, September 24, 2013

    Test Driven Development is about Designing, Not Testing

    (Originally posted 2003.Mar.25 Tue; links may have expired.)


    In a recent issue of STQE Magazine, Joel Spolsky wrote that Test Driven Development (TDD) doesn't substitute for "normal" testing. It seems like he doesn't understand that test driven development is about low-level design, not testing. Programmer Tests are a happy (and intentional) side-effect of the design and refactoring process. It is to avoid this misunderstanding that I prefer to call TDD "Test Driven Design".

    Ron Jeffries and Scott Ambler had a little spat on the Agile Modeling Mailing List about TDD, not about whether it constitutes "design", but on how much design "up-front" it entails. Scott started it by writing here "An important observation is that both TDD and AMDD [Agile Model-Driven Development] are based on the idea that you should think through your design before you code. With TDD you do so by writing tests whereas with AMDD you do so by creating diagrams or other types of models such as Class Responsibility Collaborator (CRC) cards."

    Ron replied "Does TDD suggest that you "think through your design before you code"? I see no such thing in TDD. In TDD we write ONE test, then make it work, then write another." [He's leaving out the refactoring step here, which is another area of design in TDD.]


    Maybe Ron doesn't think writing each test is "thinking" or "designing", but I do. At the risk of being snide, I assert that each test represents more thinking than a lot of programmers do when they write code without tests. Perhaps Ron's extensive experience has made his designing unconscious.

    When writing the test, you think about the API, the goal of the API, and how to verify the goal is met. That's design. Before you start writing the tests, you think about whether to extend an existing class (and its tests) or to start a new class and new tests. That's higher level design. After you write a test and make it pass, then you look to see if there is duplication or other design smells to be refactored away. Still more design. Perhaps Ron thinks this refactoring step is the only design step in TDD.
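
    To make that concrete, here is a minimal C++ sketch of the one-test-at-a-time rhythm (the names are invented). Writing the test first forces small design decisions: the function's name, its parameters, and what it returns.

        #include <cassert>
        #include <string>

        // Production code that grew out of the tests below; in strict TDD the first test
        // is written first, fails, and then this minimal implementation makes it pass.
        std::string pluralize(const std::string& noun, int count)
        {
            if (count == 1)
                return noun;
            return noun + "s";  // simplest thing that passes; irregular nouns arrive with later tests
        }

        // The tests: each one is a small, recorded design decision about the API.
        int main()
        {
            assert(pluralize("test", 1) == "test");
            assert(pluralize("test", 2) == "tests");
            return 0;
        }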

    Check out Kent Beck's book: Test-Driven Development: By Example for an introduction to TDD. Unfortunately, only a very experienced (zen-master-level) programmer like Kent Beck can take the refactoring step of TDD (remove duplication) and derive all the other good design principles from that. So read Robert Martin's book Agile Software Development: Principles, Patterns, and Practices, which not only uses TDD extensively in its copious examples, but also documents design principles that every programmer should know.