In a previous article I introduced the design-time configuration item (DCI) and only touched the surface. In this article I would like to go deeper into what a DCI entails and what it could bring.
Within CICD we use many tools, and each tool can hold several pieces of data stored in various forms. Let’s call all these items artifacts for simplicity’s sake. The ultimate purpose of all these artifacts is that they play a role in the creation of a digital product instance, e.g. the production instance of an application: the instance that is offered to the user/customer and brings value to both user and company in one form or another. As concluded in the previous article, the aim is not to create many instances based on the same definition, as is often the case for physical products. The aim is to change the product definition and subsequently change the production instance in a controlled manner, with the purpose of increasing the value of the product, and to do this frequently and quickly in response to market demand.
The essence lies in changing in a controlled manner. The common approach is to create a new version of the product definition, temporarily create a product instance based on that new version, and let loose all kinds of validation and verification processes before changing the production instance. The purpose of this is to find defects in both the product definition and the reproduction/change process, correct these defects, and repeat the process until an acceptable level of confidence is reached regarding the remaining known defects and the unknown ones. When that level of confidence is reached for the new version of the product definition, the new version is applied to the production instance. When we fully automate both the validation/verification process and the production instance change process, we call this CICD. I hear you think: you have told me nothing new.
A different perspective
I think my perspective is slightly different from the common opinion (if that exists), related to several things.
The emphasis of IT change control lies on product instances. For me this assumes the following: either the product definitions are not trusted, or the processes described earlier are not trusted, or both. IMHO, any investment in change control of product instances without investment in change control of product definitions and/or the related processes is not solving the real problem.
The emphasis of CICD is so much on the product-specific software components that it obscures the fact that a digital product often has a complex structure that encompasses many generic components. I think that managing your product design means managing all the elements, the role they play in your product, and the relationships between elements. Furthermore, you have to recognize that the majority (up to 90%) of these elements are defined upstream and that design-time reuse is applied, which should alert you to the influence these upstream elements have on your digital product (e.g. think of lifecycle management that will be forced upon you).
And what I think is most important is to recognize the relationships between the product composition, the artifacts, and CICD. Following is an example scenario.
Example of a single Design-time Configuration Item
Let’s assume team A is, amongst other things, responsible for a generic Java library that is used in many places. This library is an example of a DCI. The outcome of the team’s work is shared via a repository in Artifactory. So you may ask: what is the difference between the artifact in Artifactory and the DCI? The difference is that the DCI spans all involved data. So to complete the tool picture, let’s assume the company uses GitHub Enterprise for SCM, technical documentation and pipelines; SonarQube for code quality; Fortify for SAST; Nexus IQ for open-source dependency scanning; and ServiceNow for IT Service Management. What would the lifecycle of this DCI be?
It starts with the team internally creating this library with generic functions, just for their own use. A neighboring team B learns of its existence and asks team A to share it. From an enterprise perspective, sharing will introduce a relation between the two teams. To formally capture the ownership, the library is registered in ServiceNow as an (internal) digital product with team A as owner. Before team A shares the library, they want to do the proper thing and enhance the documentation with the new users in mind. They do some further quality enhancements and publish the first shared release, v2.0.0, of the earlier defined product in ServiceNow. Since it is Java, it is identified with GAV coordinates: com.thecomp.niftylib:niftylib:v2.0.0. So regarding this single item, the following tools and associated data/artifacts are involved.
- ServiceNow — Registration of both the digital product, the released version, and the relations with the following underlying tools and the artifacts associated with that version.
- GitHub Enterprise, Repos and Actions — Java code, unit tests, documentation, pipelines/workflow definitions, jobs, logs, etc
- SonarQube — Sonar project with reported issues and follow-up.
- Fortify — Fortify application (UI)/project (API), with SAST results and follow-up.
- Nexus IQ — NXIQ application, with reported issues and follow-up.
- Artifactory — Released packages, both team internal and those promoted for sharing.
Those artifacts which are associated with the specific version form a single DCI. A DCI entry in ServiceNow registers that product version and is a cross-reference for both tools and artifacts. This entry can be linked to other items in IT Service Management and creates end-to-end traceability in your value chain. What you probably do not want are links between IT Service Management and all individual artifacts directly.
To summarize; a Design-Time Configuration Item is a coherent collection of information that uniquely identifies a single version of a building block within an enterprise. The information describes the definition of the building block, how to use it in a design, how it is tested, how to create/install/deploy it, and how to operate it (whatever is needed and/or applicable). Information is represented as artifacts in different development/CI/CD tools.
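To make this definition concrete, the DCI from the example scenario can be sketched as a small record that cross-references the artifacts per tool. This is a minimal illustration, not a real ServiceNow schema; the artifact reference strings are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class DCI:
    """One version of one building block, cross-referencing its artifacts."""
    product: str                # digital product as registered in ServiceNow
    version: str                # the released version this DCI describes
    gav: str                    # technology-dependent identifier (Java GAV)
    artifacts: dict = field(default_factory=dict)  # tool -> artifact reference

# The DCI for niftylib v2.0.0 from the example scenario (references illustrative).
niftylib_v2 = DCI(
    product="niftylib",
    version="2.0.0",
    gav="com.thecomp.niftylib:niftylib:v2.0.0",
    artifacts={
        "GitHub Enterprise": "repo thecomp/niftylib, tag v2.0.0",
        "SonarQube": "Sonar project niftylib",
        "Fortify": "Fortify application niftylib",
        "Nexus IQ": "NXIQ application niftylib",
        "Artifactory": "libs-release/com/thecomp/niftylib/niftylib/2.0.0",
    },
)
```

The single `artifacts` mapping is the point: the DCI, not any one tool, is the unit that holds the collection together.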
Relation between DCIs, RCIs, and artifacts
Remember that the purpose of a design-time CI is to specify one or more instantiated run-time CIs. Vice versa, each run-time CI should be derived from a design-time CI. A single artifact is not a CI. A comparison can be made with physical engineering where you have CIs and engineering drawings, where a CI most of the time is represented by multiple drawings.
Types of DCIs
Of course, Java development in a well-organized company is not done in isolation. Java coding is subject to external standards, and the Java Guild in the company augments these external standards to form a comprehensive Java Programming Standard and Guidelines published on the intranet. The CISO section Security Engineering participates in the Java Guild as well to promote good secure coding practices. The development tool department is responsible for all the mentioned tools except ServiceNow, which is managed by the IT Service Management department. Security Engineering governs the rules for Fortify and Nexus IQ. Other parts of the organization may also want to exert influence or require certain visibility: the legal department, enterprise architecture, cost management. In highly regulated industries, even external parties may demand a certain level of auditability and transparency. The above shows the potential complexity of all the parties having an interest in this simple IT building block.
It would be very wearisome for team A if they had to deal with all these stakeholders directly. From the perspective of the stakeholders, it would be cumbersome to have to communicate with all the development teams that create Java components. Configuration management is the means to channel this communication. All the requirements regarding all Java components can be channeled by classifying these components as a type, let’s say a type called ‘JavaDCI’. Ideally, the requirements can be codified and checked automatically in a pipeline. This is of course what tools like SonarQube and Fortify do, limited to certain aspects.
The above pattern can be replicated for all kinds of software components and is not limited to software alone.
By forming a hierarchy of types, both abstract types and concrete types, a complex structure of requirements can be managed. A single base type defines the characteristics that are applicable for all DCIs. Of these the most important are the following.
- enterprise-wide technology-independent identifier; When communicating in an engineering context, it should be clear what exactly we are talking about. In non-IT configuration management, this identifier is called a Part Number. It is highly recommended to have a generic technology-independent identifier. A further recommendation is that this identifier be non-significant, auto-generated, and verifiable. Combined with the version identifier, it should identify an enterprise-wide unique DCI.
- version; For software-intensive products, where new versions are created very frequently, special attention is required for version identification. SEMVER is a well-accepted standard.
- technology dependent identifier; For components of a certain technology external standards may apply that require a specific identifier structure (e.g. Java’s group id and artifact id). This is often a significant identifier, an identifier that expresses meaning. The combination of type, technology-dependent identifier, and version shall be globally unique, assuming the type scopes a DCI to a single technology.
- type identifier; The DCI Type identifier refers to the type that governs the characteristics of a collection of similar DCIs. This will allow a distinction between technologies. A DCI should refer to a concrete type (leaf in type hierarchy tree).
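The identifier scheme above can be sketched in a few lines. The UUID choice and the simplified SEMVER pattern are assumptions for illustration; the only claims taken from the text are that the part number is non-significant, auto-generated, and, combined with the version, unique:

```python
import re
import uuid

# Simplified SEMVER check (major.minor.patch); real SEMVER also allows
# pre-release and build-metadata suffixes.
SEMVER = re.compile(r"^\d+\.\d+\.\d+$")

def new_part_number() -> str:
    """Non-significant, auto-generated identifier; a UUID is one possible choice."""
    return uuid.uuid4().hex

def dci_key(part_number: str, version: str) -> str:
    """Part number plus version identifies a DCI enterprise-wide."""
    if not SEMVER.match(version):
        raise ValueError(f"not a SEMVER version: {version}")
    return f"{part_number}:{version}"
```

A non-significant part number carries no meaning, so it never has to change when a component is renamed, moved, or re-classified; the technology-dependent identifier stays free to follow external conventions.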
Composition and consequences when team B uses the niftylib product of team A
In the example scenario, team B wants to use the niftylib product of team A. Now that team A has formally put v2.0.0 out there, team B can also formally consume it. Team B owns digital product XYZ, and their use of niftylib is captured in source code (the definition of product XYZ). A discovery process scans all code, discovers the above use, and registers it. Niftylib appears in the product’s Bill-Of-Materials (BOM). In case niftylib has one or more dependencies on open source that require co-deployment, these are discovered as well and appear in the BOM. This way all stakeholders get visibility on both the composition of product XYZ and, indirectly, the dependency of team B on other teams/parties. An incident reported on product XYZ that, when analyzed, shows niftylib as the probable cause should be assignable to niftylib (like any other line item in the BOM) and should be automatically assigned to team A.
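The BOM-driven incident routing can be sketched as follows. The BOM contents and owner names are illustrative assumptions based on the example scenario:

```python
# Hypothetical BOM of product XYZ: each line item maps to the owning team/party.
BOM_XYZ = {
    "com.thecomp.niftylib:niftylib:v2.0.0": "team A",          # shared internal library
    "org.apache.commons:commons-lang3:3.12.0": "open source",  # co-deployed dependency
}

def assign_incident(probable_cause: str, bom: dict, product_owner: str) -> str:
    """Route an incident to the owner of the BOM line item identified as cause;
    anything outside the BOM stays with the product owner."""
    return bom.get(probable_cause, product_owner)
```

The lookup is trivial precisely because the discovery process already did the hard work of keeping the BOM accurate.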
For team A to be able to upload Fortify or Nexus IQ scan reports from their local workstations or from the pipelines that build niftylib, the Java component should be known upfront, i.e. be onboarded/initialized, in these tools. Creating a new DCI requires registration in ServiceNow. These actions can all be combined in a single piece of automation, where creating a new DCI does the right things in all related tools. E.g. the onboarding/initialization of com.thecomp.niftylib:niftylib can be done in one go, and naming consistency is enforced across all the tools involved.
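A sketch of that single piece of automation is shown below. The actions are placeholder strings; a real implementation would call each tool's API. The point being illustrated is that one step derives one name and applies it everywhere:

```python
def onboard_dci(group_artifact: str) -> list[str]:
    """Initialize a new DCI in every related tool in one go.

    The tool actions are illustrative placeholders, not real API calls.
    Deriving the name once and reusing it enforces naming consistency.
    """
    name = group_artifact.split(":")[-1]  # e.g. "niftylib"
    return [
        f"ServiceNow: register digital product {name}",
        f"GitHub Enterprise: verify repository {name} exists",
        f"SonarQube: create project {name}",
        f"Fortify: create application {name}",
        f"Nexus IQ: create application {name}",
        f"Artifactory: prepare repository path for {group_artifact}",
    ]
```

A team then runs one onboarding command instead of six tool-specific setup procedures, and no tool can drift to a deviating name.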
A step further is when DCI Types can be used to declare which tools are mandatory, e.g. the type ‘JavaDCI’ specifies that SonarQube, Fortify, and Nexus IQ are mandatory for all Java components implying that a build should result in scan results in all three tools.
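Declaring mandatory tools per type makes the policy checkable. A minimal sketch, assuming a simple in-memory type registry:

```python
# Illustrative type registry: which scan tools are mandatory per DCI type.
MANDATORY_TOOLS = {
    "JavaDCI": {"SonarQube", "Fortify", "Nexus IQ"},
}

def missing_scans(dci_type: str, scans_found: set) -> set:
    """Return the mandatory tools for which a build produced no scan results.
    An empty result means the build satisfies the type's policy."""
    return MANDATORY_TOOLS.get(dci_type, set()) - scans_found
```

A pipeline could fail the build whenever this set is non-empty, turning the type declaration into an enforced gate rather than a written guideline.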
A next step is to include tools that do not require onboarding. If the Java Programming Standard and Guidelines prescribe or recommend certain things, e.g. project structure, choice of certain frameworks, unit test setup, documentation structure, etc., the type ‘JavaDCI’ can be extended with the appropriate data/definitions/boilerplate code to allow automation to use such definitions instead of hardcoding templates.
The initialization of a DCI can be extended to the digital product level. This requires that enough information is supplied upfront. Insight into the existing product portfolio and each product’s composition can lead to design patterns as a means to provide the input for initializing new products.
Automation can be applied not only at the beginning of a product’s life cycle but also at the end. If the product composition is available, decommissioning an application can also be automated, and both the CICD data and the product instances can be treated in a standard manner (e.g. archiving CICD data, dismantling product instances, archiving production data).
Continuous quality & security assessment
Once the products in your digital product portfolio have a BOM, continuous quality & security processes are much easier to implement. Quality and security assessments should not only rely on CICD pipelines being run since policies, standards and guidelines can change, new vulnerabilities will be found, and as a consequence, new issues can emerge just moments after production deployment. Good portfolio management and configuration management allow these new issues to be brought to the attention of their owners and gives management continuous transparency of the quality of the product portfolio.
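What a portfolio-wide BOM enables can be sketched in a few lines: when a new vulnerability is reported against a component, the affected products are found by querying the BOMs, without re-running any pipeline. The portfolio contents below are illustrative assumptions:

```python
# Hypothetical portfolio: product name -> set of BOM line items.
PORTFOLIO_BOMS = {
    "XYZ": {
        "com.thecomp.niftylib:niftylib:v2.0.0",
        "org.apache.commons:commons-lang3:3.12.0",
    },
    "ABC": {"org.apache.commons:commons-lang3:3.12.0"},
}

def affected_products(boms: dict, vulnerable_component: str) -> list[str]:
    """List every product whose BOM contains a newly reported vulnerable
    component, so its owner can be notified immediately."""
    return sorted(p for p, items in boms.items() if vulnerable_component in items)
```

The same query answers the management question ("how exposed is the portfolio?") and the team question ("does this affect me?") from one source of truth.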
DCIs identify and group artifacts into logical design units. The enterprise-wide identification of these units eases communication between the different stakeholders. Patterns of artifacts and quality aspects can be codified into types that allow a structured and declarative manner to assess the quality of the design units. Overarching patterns of units can be recognized and form higher-order constructs. Making these design-time configuration items an integral part of your product portfolio administration gives a very high level of traceability and transparency to all possible stakeholders and allows all kinds of automation otherwise not possible.
This article was previously published on Medium.