Things may happen by chance but won’t last without a reason. Given that most projects outlast their initial motives, some management is needed to keep expectations and commitments in line until completion.
When systems engineering is considered, different time-frames, set for different reasons, have to be taken into account:
- Business time-frames rule over requirements and are set by enterprise strategy and market opportunities.
- System time-frames are set by the organization and the technological environment.
- Application time-frames are set by development planning and resource availability.
Whereas those time-scales are to be synchronized (that’s what milestones are about), there is no reason they would be congruent, given that they are set within different architectures: enterprise, functional, and technical. Whether and how those time-frames are to be consolidated is the primary concern of project management.
Milestones are introduced to coordinate phased tasks carried out by different teams in different time-frames; for instance, when business and systems architectures are managed independently, some documentation must be introduced as a means of communication between organizational units over time.
Since milestones can be seen as frozen expectations and commitments, setting them prematurely may bear the risk of unrealistic assessments followed by defensive behaviors. Nonetheless, the management of shared engineering assets (requirements, models, programs, …) may be a necessity for projects set across architectures:
- Enterprise: locations, organization, business objects and processes.
- Systems: domains and services.
- Platforms: resources and software components.
As a corollary, managing differentiated artifacts to be used by different activities will usually entail differentiated work units.
Depending on organizational and technical constraints, project organization is to be set somewhere between two archetypes:
- Agile: The same set of operations is repeatedly performed on seamless flows of artifacts by teams with combined skills.
- Phased: Differentiated sets of operations are separately performed on discrete flows of artifacts by teams with specific skills.
Being less constraining, the agile development model is to be the default option, provided shared ownership and continuous delivery can be attained. When that’s not the case, work units should be defined according to organizational and technical constraints, and that is to be achieved bottom-up from development flows, as opposed to top-down from predefined activities.
The flow-based approach presents a double advantage: being directly defined with regard to developed artifacts, work units can be more easily automated; and the corresponding roles can be defined objectively, that is, independently of organizations and methodologies. As a consequence, flow-based work units can be used with both phased and iterative development models.
Whereas terminology may vary across organizations and methods, core skills and responsibilities don’t; the only difference lies in the way they are carried out: separately (phased) or in collaboration (agile).
- With regard to skills, the primary distinction is between business (enterprise level) and software engineering (systems level).
- With regard to responsibilities, the divide is between the business value associated with business processes and applications on the one hand, and the assets represented by organization and systems architecture on the other.
Once clearly positioned within that simple matrix, the roles of participants will be governed by development models:
- Since phased processes come with fixed scope and schedules, roles will be defined with regard to administration (of architectures and assets) and processing (of development flows).
- Since iterative processes make room for exploration and sequencing, the focus will be put on responsibilities (as set by the matrix), collaboration (for development flows), and scheduling (for backlogs and releases).
A sanguine understanding of continuous releases may assume that planning is no longer relevant, and that deployment can be carried out “on-the-fly”. But that would assume that stakeholders and product owners are ready to put aside roadmaps, overlook milestones, and force business, engineering, and operational concerns into a single time-frame. Given the importance of expectations for project management, a more realistic approach is required:
- Product roadmaps are set in business time-frames and determine development and deployment time-frames.
- Development time is set by product roadmaps and runs clockwise from project inception to software releases.
- Deployment time is also set relative to product roadmaps but it runs counterclockwise, from product deployment as planned by business back to released software components.
Just-in-time delivery implies some decoupling between engineering and business time-scales which can be achieved by a buffer between development and deployment. This buffer, combined with dynamic programming, plays a critical role in the cutback of intermediate documents and models (aka development inventories).
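As a minimal sketch of that decoupling (component names and dates are hypothetical), the buffer can be modeled as a queue that development fills as components are released and deployment drains just-in-time, according to roadmap dates:

```python
from collections import deque

class ReleaseBuffer:
    """Hypothetical buffer decoupling development from deployment:
    development pushes components as soon as they are released,
    deployment pulls them just-in-time for roadmap dates."""
    def __init__(self):
        self._queue = deque()

    def release(self, component, ready_date):
        # development side: push on completion, regardless of deployment plans
        self._queue.append((component, ready_date))

    def deploy(self, target_date):
        # deployment side: pull every component ready by the target date
        ready = [c for c, d in self._queue if d <= target_date]
        self._queue = deque((c, d) for c, d in self._queue if d > target_date)
        return ready

buf = ReleaseBuffer()
buf.release("billing", ready_date=3)
buf.release("reporting", ready_date=7)
print(buf.deploy(target_date=5))   # only components ready by date 5
```

Development and deployment never call each other directly; each side works against its own time-scale, with the buffer absorbing the difference.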
Management needs measurements, and in the case of systems engineering they have to address four different purposes:
- Business value of new applications.
- Functionalities of supporting systems.
- Size and complexity of software products.
- Development costs.
Whereas the first set of metrics clearly depends on the idiosyncrasies of business models and enterprise organization, the other three deal with shared software engineering concerns and should be defined consistently:
- With regard to business requirements, domains can be assessed using the number of entities weighted by complexity (taxonomies and dependencies), and associated processes by the number of transactions weighted by the complexity of execution paths (extension points and dependencies).
- Metrics of system functionalities are set along two axes: the first defined by the business footprint, i.e., the business entities and transactions supported by the system; the second defined by the nature of system boundaries and the requirements attributed to users, events, or devices.
- Functional measurements must be adjusted for transverse (aka non-functional) requirements, i.e., the operational or regulatory constraints that cannot be allocated to any specific business requirement.
Estimates for development costs will take into account technical and human resources as well as organizational contexts and constraints. Engineering costs can then be compared to business benefits in order to compute the return on investment (ROI).
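As a hedged illustration of how such measurements could feed a ROI computation (the weighting scheme, cost rate, and figures are purely illustrative assumptions, not a standard method):

```python
def domain_size(entities, transactions):
    """Illustrative functional metric: entities weighted by complexity,
    transactions weighted by the complexity of their execution paths."""
    return (sum(e["complexity"] for e in entities)
            + sum(t["paths"] for t in transactions))

def roi(benefits, costs):
    """Return on investment as the ratio of net benefits to costs."""
    return (benefits - costs) / costs

size = domain_size(
    entities=[{"complexity": 3}, {"complexity": 5}],
    transactions=[{"paths": 2}, {"paths": 4}],
)
costs = size * 1_000             # assumed cost per functional size unit
print(round(roi(benefits=20_000, costs=costs), 2))   # → 0.43
```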
Not surprisingly, this differentiated approach to metrics better befits phased development models. Since iterative development clearly hampers the practicality of function points, measurements can only be built and refined progressively; e.g., agile approaches will put the focus on development effort based on:
- Story points, which mix business and functional requirements as understood from user stories.
- Actual development outcome, updated dynamically according to metrics computed on iterations and backlog, e.g burndown and velocity.
Although traditional (function points) and agile (story points) metrics are built on opposed principles, their accuracy and effectiveness essentially depend on heuristics and experience.
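The dynamic updating of agile metrics mentioned above can be sketched as follows (figures are illustrative; real tools would track points per backlog item, and the forecast is deliberately naive):

```python
def velocity(completed):
    """Average story points completed per iteration."""
    return sum(completed) / len(completed)

def burndown(backlog, completed):
    """Points remaining in the backlog after each iteration."""
    remaining, curve = backlog, []
    for done in completed:
        remaining -= done
        curve.append(remaining)
    return curve

history = [8, 13, 10]                       # points completed per past iteration
curve = burndown(100, history)              # [92, 79, 69]
forecast = curve[-1] / velocity(history)    # naive count of iterations left
print(curve, round(forecast, 1))
```

Both numbers are refined after every iteration, which is precisely why agile measurements can only be built progressively rather than fixed up-front.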
As far as software engineering is concerned, quality is a quantity: it’s the probability that something will go amiss, whether because of faulty design and implementation, or due to misguided requirements. That understanding sets two primary objectives for quality management: the first is the alignment of commitments with expectations, already mentioned for project management; the second is concomitant with the first, namely traceability.
Traceability is arguably the cornerstone of project management as it is a prerequisite for any effective management of products and activities:
- To begin with, requirements must be justified and traced back to business goals and stakeholders.
- Then, project planning will use dependencies to sequence work units, set development schedules, and allocate resources.
- One step further, quality planning will need traceability to design and plan tests, and eliminate regression.
- Finally, traceability will be necessary to support maintenance and change management.
Given the exponential growth of dependencies, the only way to avoid the “spaghetti” syndrome is to use built-in mechanisms and differentiate dependencies depending on their nature: decisions, artifacts, or operations.
- First and foremost, there can be no management without full and reliable traceability of decisions with causes and consequences, both within and across architecture layers.
- Next, engineering traceability is to deal with the actual realization of decisions and the consistency of outcomes.
- Finally, operational traceability is to keep track of developed artifacts and the way they are used.
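A minimal sketch of such differentiated traceability, assuming a simple in-memory store with hypothetical artifact names (real repositories would persist and layer this by architecture level):

```python
from collections import defaultdict

class TraceGraph:
    """Hypothetical traceability store: dependencies are differentiated
    by kind (decision, engineering, operation) so each can be queried
    without wading through the others -- avoiding the spaghetti syndrome."""
    def __init__(self):
        self._edges = defaultdict(list)   # (source, kind) -> targets

    def link(self, source, target, kind):
        assert kind in {"decision", "engineering", "operation"}
        self._edges[(source, kind)].append(target)

    def trace(self, source, kind):
        """Transitive closure of one kind of dependency from source."""
        seen, stack = [], [source]
        while stack:
            node = stack.pop()
            for target in self._edges[(node, kind)]:
                if target not in seen:
                    seen.append(target)
                    stack.append(target)
        return seen

g = TraceGraph()
g.link("goal", "requirement", "decision")
g.link("requirement", "use-case", "decision")
g.link("requirement", "component", "engineering")
print(g.trace("goal", "decision"))   # → ['requirement', 'use-case']
```

Typing the edges is the built-in mechanism: a decision trace never drags in engineering or operational dependencies, and vice versa.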
With the benefits of differentiated (decisions, engineering, operations) and layered (enterprise, systems, platforms) traceability, quality management can be organized with regard to purpose and modus operandi. To begin with tests, and taking a leaf from D. Leffingwell:
- Purpose: business-facing tests compare business expectations to product functionalities and quality of service; technology-facing tests check for intrinsic quality and use of resources.
- Modus operandi: procedures and tests that can be carried out during development vs those that must be performed on products.
As suggested by tests run during development, quality management may go beyond products and be applied to development processes.
From Products to Processes QM
Assuming that project organizations are solutions to engineering problems, one should be able to assess their maturity and improve it whenever possible. For that purpose the SEI’s CMMI (Capability Maturity Model Integration) defines five levels:
- Initial: No process. Each project is managed on an ad hoc basis.
- Managed: Processes are specific to projects.
- Defined: Processes are set for the whole organization and shared across projects.
- Measured: Processes are measured and controlled.
- Optimized: Processes are assessed and improved.
Whatever the volume of data and the statistical tools employed, the relevance of process assessment fully depends on (1) objective and unbiased indicators measuring projects’ performances and, (2) clear mapping and traceability between organizational alternatives and processes’ outcomes. That can only be achieved if assessments can be carried out across products, projects, and processes:
- Traceability: from requirements to deliverables (products), work units (projects), and tasks (processes).
- Measurements: functional (products), effort (projects), performance (processes).
- Quality: verification & validation (products), tests plan and risks management (projects), quality assurance (processes).
- Reuse: artifacts and patterns (products), profiles (projects), organization (processes).
- Management: model driven engineering (products), planning and monitoring (projects), maturity assessment (processes).
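The cross-perspective assessment above lends itself to a direct encoding; a sketch (cell labels taken from the list, the structure itself is an assumption):

```python
# The five assessment dimensions, each viewed from the three perspectives.
ASSESSMENT = {
    "traceability": ("deliverables", "work units", "tasks"),
    "measurements": ("functional", "effort", "performance"),
    "quality":      ("verification & validation", "test plans & risk management", "quality assurance"),
    "reuse":        ("artifacts & patterns", "profiles", "organization"),
    "management":   ("model-driven engineering", "planning & monitoring", "maturity assessment"),
}
PERSPECTIVES = ("products", "projects", "processes")

def view(perspective):
    """What each dimension assesses from a given perspective."""
    i = PERSPECTIVES.index(perspective)
    return {dim: cells[i] for dim, cells in ASSESSMENT.items()}

print(view("projects")["measurements"])   # → effort
```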
Yet, although it may provide some useful pointers, the CMMI is not a substitute for project management, and compliance to routines is not a substitute for collaboration.
Projects as Non-Zero Sum Games
Contracts are as much about collaboration and trust as they are about protection. While projects are not necessarily governed by contracts, all entail some compact between business, engineering, and operational entities. From in-house developments with shared ownership on one hand, to fixed-price outsourcing on the other, success will depend on collaboration and some community of purpose. Taking a cue from Game Theory, the primary objective of a project’s undertakings should therefore be to establish a non-zero-sum playground where one’s benefits will not have to come from others’ losses.
For that purpose trustworthy information is a necessary condition. Yet it’s not a sufficient one, as rewards and sanctions should also make a convincing case for cooperation. As illustrated by the experience of ill-famed fixed-price waterfall contracts, undertakings about scope, price, and schedule are at the basis of such cases.
The root of the scope/price/time conundrum is the discrepancy between the reliability of customers’ expectations and providers’ commitments:
- On one side, business stakeholders are able to put a value up-front on the deployment of a set of supporting functionalities at given dates.
- On the other side system engineers cannot price their costs before a better assessment of the scope and complexity of developments.
In face of such a lopsided situation, the objective of project management should be to restore a level playing field with regard to the information available to parties, and to set rules that would open a space for win/win outcomes.
Paths to mutually beneficial agreements could take advantage of differentiated objectives and priorities: while customers focus on their return on investment (ROI), which depends on cost, quality, and timely delivery, providers put costs first and timing second. So stakes are clearly conflicting on costs, but there is room for collaboration on quality and timing, and that will bring benefits to both customers and providers.
Finally, sound rules and level playing fields are not enough, as players may be tempted to defect (aka cheat) if they think they can get away with it. Leaving out ethical considerations and invasive regulations, the best policy should be to make a convincing case for due diligence, as participants should:
- Say what they know, mean what they say, say what they mean.
- Look at outcomes and deliverables, and check their validity.
- Listen to what others have to say, and take their concerns into account.