Data Deployment
From PKC
Data Deployment is about orchestrating hardware and software resources to ensure that data services are accessible to their intended users. In the context of [[Data Governance]], '''Data Deployment''' is one of the four modules, the one that focuses on the operational and project-management aspects of data asset management. One way to deploy data in an organized fashion is to use a [[Logic Model]]. An example can be found in the way [[PKC]] is deployed; see [[Task/K8s_Installation]].


=Industrial-Strength Data Deployment Solutions=
Many commercial solutions and free/open-source technologies are already available and can be leveraged to standardize the process of deploying data services. However, due to economic and political concerns, there are many policy- and price-sensitive considerations that must be made explicit. The Data Deployment module should therefore capture these concerns in a resource lookup table or a best-practice repository, so that known solutions can be adopted without re-inventing the wheel.
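The "resource lookup table" idea can be sketched in code. The following is a minimal, hypothetical Python sketch (the catalog entries, concern names, and notes are illustrative examples, not an actual PKC catalog): each deployment concern maps to known solutions plus the policy/price notes that must be made explicit.

```python
# Hypothetical sketch of a resource lookup table / best-practice
# repository for deployment concerns. All entries are illustrative.
SOLUTION_CATALOG = {
    "ci/cd": {
        "solutions": ["Jenkins", "GitLab CI"],
        "notes": "self-hosted; no per-seat licensing",
    },
    "container orchestration": {
        "solutions": ["Kubernetes", "Docker Swarm"],
        "notes": "open source; managed offerings are price-sensitive",
    },
    "dashboarding": {
        "solutions": ["Apache Superset"],
        "notes": "Apache-2.0 licensed; avoids vendor lock-in",
    },
}


def known_solutions(concern: str) -> list:
    """Return already-known solutions for a deployment concern,
    so teams can adopt them instead of re-inventing the wheel."""
    return SOLUTION_CATALOG.get(concern.lower(), {}).get("solutions", [])


print(known_solutions("CI/CD"))  # ['Jenkins', 'GitLab CI']
```

Looking up an unknown concern returns an empty list, which is the signal that a new best-practice entry needs to be researched and recorded.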
=Deploying Services=
To set up a fully functional data pipeline, the following pages will provide operational guidance:
# Testing on [[Vagrant]]
# Installing [[Jenkins]]
# Installing [[Dockers and Kubernetes|Docker and Kubernetes]]
# Launching [[PKC Data Services]]
# Composing a business plan online: [[Business Plan Composition]]
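The steps above culminate in running services on Kubernetes. As a minimal, hypothetical sketch of what such a deployment manifest looks like (the names, image, and replica count are placeholders, not the actual PKC manifests — those are documented on [[Task/K8s_Installation]]):

```yaml
# Hypothetical Kubernetes Deployment for a wiki-style data service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pkc-wiki            # placeholder service name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pkc-wiki
  template:
    metadata:
      labels:
        app: pkc-wiki
    spec:
      containers:
        - name: mediawiki
          image: mediawiki:1.36   # placeholder image tag
          ports:
            - containerPort: 80
```

A manifest like this is applied with `kubectl apply -f deployment.yaml`; Jenkins can run the same command as a pipeline step so that deployments stay repeatable.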
=Visualizing Data=
Tools such as Apache Superset<ref>Tutorial - Creating your first dashboard, https://apache-superset.readthedocs.io/en/0.35.1/tutorial.html</ref> can be very useful for building dashboards and charts on top of deployed data services.
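Superset charts data that lives in a SQL database registered via a SQLAlchemy URI (for SQLite, e.g. `sqlite:////path/to/metrics.db`). As a minimal, hypothetical Python sketch, the table and column names below are invented for illustration; a real dashboard would point Superset at the database file this script writes.

```python
# Hypothetical example: load a small metrics table into SQLite.
# A BI tool such as Apache Superset could register this database
# and chart it; here we use an in-memory database for brevity.
import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path for a real dashboard
conn.execute(
    "CREATE TABLE service_uptime (service TEXT, day TEXT, uptime_pct REAL)"
)
rows = [
    ("pkc-wiki", "2021-07-01", 99.9),
    ("pkc-wiki", "2021-07-02", 98.7),
    ("jenkins", "2021-07-01", 100.0),
]
conn.executemany("INSERT INTO service_uptime VALUES (?, ?, ?)", rows)
conn.commit()

# The kind of aggregate query a dashboard chart would issue:
avg = conn.execute(
    "SELECT service, AVG(uptime_pct) FROM service_uptime GROUP BY service"
).fetchall()
print(avg)
```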
=References=
<references />

Latest revision as of 01:16, 25 July 2021
