Inter-Organizational Workflow
Also known as: PKC DevOps Cycle
Introduction
The purpose of the PKC Workflow, or the PKC DevOps Cycle, is to leverage well-known data processing tools to manage source code and binary files within a unifying abstraction framework, one that treats data as three types of universal abstractions (a small code sketch of this classification follows the list below):
- File from the Past: All data collected in the past can be bundled into Files.
- Page of the Present: All data to be interactively presented to human agents is shown as Pages.
- Service of the Future: All data to be consumed by other data manipulation processes is exposed as Services.
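As a small illustration (the class and asset names below are hypothetical, not part of PKC itself), this classification can be expressed directly in code:

```python
from dataclasses import dataclass
from enum import Enum


class Namespace(Enum):
    """The three universal abstractions, labeled by time progression."""
    FILE = "past"       # data already collected, bundled immutably
    PAGE = "present"    # data presented interactively to human agents
    SERVICE = "future"  # data exposed to other data manipulation processes


@dataclass
class DataAsset:
    """A version-controlled asset tagged with its namespace."""
    name: str
    namespace: Namespace


# Hypothetical assets, one per namespace.
assets = [
    DataAsset("2021-archive.tar", Namespace.FILE),
    DataAsset("Inter-Organizational Workflow", Namespace.PAGE),
    DataAsset("docker.io/pkc/report-api", Namespace.SERVICE),
]
```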
By classifying data assets as files, pages, and services and managing them with version control systems, we can incrementally sift through data in three namespaces that intuitively label the nature of data according to time progression. This ordering reflects the partially ordered nature of the PKC Workflow, which is also a universal property of every DevOps cycle.
Given the above-mentioned assumptions, data assets can be organized according to these three types in the following way (minimal code sketches for each item follow the list):
- Files can be managed using Content-addressing networks; their content cannot be changed once stored in IPFS format.
- Pages are composed of data content, style templates, and UI/UX code. This requires certain test and verification procedures to ensure their healthy operation.
- Services are programs that have well-known names and should be provisioned on computing devices with service quality assessment. They are usually associated with Docker-like container technologies and have published names registered in places such as Docker Hub. As service provisioning technologies mature, tools such as Kubernetes will manage services with real-time service-quality updates, so that the behavior of data services can be tracked and diagnosed with industrial-strength protocols.
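To make the immutability claim in the first item concrete, here is a minimal content-addressing sketch in Python using only the standard library. Hashing content to derive its address is the core idea behind IPFS CIDs; real IPFS additionally uses multihash encoding and chunking, which this sketch omits:

```python
import hashlib


def content_address(data: bytes) -> str:
    """Derive an address from content alone (a simplified CID)."""
    return "sha256-" + hashlib.sha256(data).hexdigest()


original = b"meeting minutes, 9 February 2022"
addr = content_address(original)

# Retrieval is self-verifying: recompute the address and compare.
assert content_address(original) == addr

# Any modification yields a different address, so the content stored
# under a given address can never change.
assert content_address(original + b" (edited)") != addr
```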
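For the second item, a test-and-verification step can be as simple as checking that every declared component of a page is present before it is published. The component names below are assumptions for illustration, not a PKC schema:

```python
def verify_page(page: dict) -> list[str]:
    """Return a list of problems; an empty list means the page is healthy."""
    required = ("content", "template", "ui_code")
    return [f"missing {part}" for part in required if not page.get(part)]


page = {"content": "== Introduction ==", "template": "LogicModel", "ui_code": ""}
print(verify_page(page))  # ['missing ui_code']
```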
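For the third item, service-quality assessment can start with a plain HTTP health probe, the same mechanism Kubernetes liveness probes use. The service name and the /healthz endpoint below are hypothetical conventions, not a PKC or Kubernetes API:

```python
import urllib.error
import urllib.request


def probe(url: str, timeout: float = 2.0) -> bool:
    """Return True if the service answers HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, TimeoutError):
        return False


# A failing probe marks the service unhealthy so an orchestrator such as
# Kubernetes can restart or reschedule it.
if not probe("http://report-api.local/healthz"):
    print("service unhealthy: flag for redeployment")
```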
Notes
- When writing a logic model, one should be aware of the difference between concept and instance.
- A logic model is composed of many submodels. When not intending to specify their abstract parts, one can use the Function Model alone.
- What is the relationship between a model and its submodules, and what are the relationships among all the subfunctions?
- Note: Sometimes the distinction between input and process is ambiguous. For example, the Service namespace is required to achieve the goal; it might be an input or a product created along the way. In general, both the input and the process contain uncertainty and require a decision.
- The parameters of a Logic Model are minimized to its name, which is its most important part. The name should be a summary of its value.
- Note that the name Jenkins refers to the resource itself, whereas Jenkins Implementation On PKC carries more contextual information and is therefore more suitable.
- I was intending to name the PKC Workflow's submodel "PKC Workflow/Jenkins Integration". However, a more proper name might be Task/Jenkins Integration, whose output then feeds into PKC Workflow/Automation. The organization of the PKC Workflow should be the project, and the Workflow should be the desired output of the project. The Task category is for moving toward that state. So a task can be the process of a Project, and the output of the task can serve as the process of the workflow.
- Each goal is associated with a static plan and a dynamic process.
- To specify the input and output of a logic model, we can collect the input/output of every subprocess in the process (by transclusion); see the sketch after these notes.
- I renamed some models:
  - TLA+ Workflow -> System Verification
  - Docker Workflow -> Docker registry
- Question: How should we choose names? Naming is a kind of summarization that loses information.
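As a sketch of the transclusion note above, with hypothetical field names: a logic model's external inputs and overall outputs can be derived by folding over its subprocesses, so each submodel only needs to declare its own I/O:

```python
from dataclasses import dataclass


@dataclass
class Subprocess:
    name: str
    inputs: set
    outputs: set


def derive_io(subprocesses):
    """Aggregate a logic model's I/O from its subprocesses (transclusion).

    External inputs are those consumed but never produced internally;
    outputs are everything the subprocesses produce.
    """
    produced = set().union(*(s.outputs for s in subprocesses))
    consumed = set().union(*(s.inputs for s in subprocesses))
    return consumed - produced, produced


steps = [
    Subprocess("Task/Jenkins Integration", {"source repo"}, {"build pipeline"}),
    Subprocess("PKC Workflow/Automation", {"build pipeline"}, {"deployed service"}),
]
print(derive_io(steps))
# ({'source repo'}, {'build pipeline', 'deployed service'})
```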
Logic Model (PKC Workflow): Template:LogicModel, 9 February 2022