Suggestion to Stephen
After much consideration, I decided to make a brief statement summarizing what I feel may be useful to your conference.
While recent events have harmed the financial potential and public image of existing blockchain and distributed database infrastructures, these systems retain considerable operational value that can support the growing range of AI-Generated Content (AIGC) prospects. They are now established platforms for exchanging data and funds between individuals and financial institutions.
The concept is to combine high-end data centers with the geographically distributed nature of blockchain to create a knowledge-asset brokerage focused on creating, filtering, sharing, and selling large language models (LLMs) to businesses and individuals who need locally tailored or customized LLMs. Allowing small firms and individuals to participate in the design and composition of LLMs fosters system variety and a broader user base, which in turn encourages freedom of expression.
Because the LLM is the "engine" behind Generative Pre-trained Transformers (GPTs), which are evolving into a pervasive human-machine interface for a wide range of business and human activities, it is critical that both decentralized and highly aggregated computing resources share a common LLM creation process. Pooling computing resources increases efficiency and broadens the capabilities and performance of trained LLMs, while the decentralized component helps limit the bias of majority judgments. From a social and economic standpoint, an LLM is becoming a new kind of asset that requires regular updating under a rigorous data processing regime, including data entry and automated verification of compliance with data accountability policies. Using locally adapted LLMs will invariably promote local information density, safeguard localized economic and political interests, and, most importantly, protect information and knowledge sovereignty. This is critical for distributed social fairness. (No one should let a centralized authority tell them what and how to think.)
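To make the automated compliance-verification step concrete, here is a minimal Python sketch. Everything in it is an illustrative assumption rather than an existing system: the field names (source_license, consent, region) and the two functions simply model the idea that only records passing a declared accountability policy ever reach the LLM update step.

<syntaxhighlight lang="python">
# Sketch of an automated compliance gate run before each LLM update.
# Field names (source_license, consent, region) are illustrative assumptions.
REQUIRED_FIELDS = ("source_license", "consent", "region")

def complies(record: dict, allowed_regions: set) -> bool:
    """Admit a record only when its provenance fields are present and permitted."""
    if any(name not in record for name in REQUIRED_FIELDS):
        return False
    return record["consent"] is True and record["region"] in allowed_regions

def filter_batch(batch: list, allowed_regions: set) -> list:
    # Only compliant records reach the model-update step.
    return [r for r in batch if complies(r, allowed_regions)]

# Example: one record lacks consent, one is compliant; only one survives.
batch = [
    {"source_license": "CC-BY", "consent": False, "region": "EU", "text": "..."},
    {"source_license": "CC-BY", "consent": True, "region": "EU", "text": "..."},
]
assert len(filter_batch(batch, allowed_regions={"EU"})) == 1
</syntaxhighlight>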
The models that control data collection and filtering should be implemented as "smart contracts" whose high-level source code is open to both human and machine inspection. Large language models and raw data assets will be managed as independent objects that can be published, acquired, or sold to interested parties under "smart-contract-bound" principles. Existing blockchain infrastructures, such as Bitcoin, Ethereum, Solana, and Polygon, can manage payments and rewards for data and knowledge contributions (made by writing code or evaluating smart contracts), as well as payment settlement.
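As a sketch of what a smart-contract-bound asset object might look like, the Python below models the idea in plain, inspectable code. All names (FilterPolicy, DataAsset, contribute) are hypothetical, and a real deployment would express the same rules as on-chain contract code (for example in Solidity) rather than Python; the point is only that the gating policy is readable by both humans and machines.

<syntaxhighlight lang="python">
# Minimal sketch of a "smart-contract-bound" knowledge-asset object.
# All names here are illustrative; a real deployment would express the
# same rules as on-chain contract code.
from dataclasses import dataclass, field

@dataclass
class FilterPolicy:
    """Human- and machine-inspectable rules that gate data contributions."""
    allowed_licenses: set
    max_record_bytes: int

    def accepts(self, record: dict) -> bool:
        # A record is admitted only if its license and size comply.
        return (record.get("license") in self.allowed_licenses
                and len(record.get("payload", b"")) <= self.max_record_bytes)

@dataclass
class DataAsset:
    """An LLM or raw-data object published, acquired, or sold under a policy."""
    asset_id: str
    owner_did: str          # Decentralized Identity (DID) of the current owner
    price_wei: int          # settlement amount on the host chain
    policy: FilterPolicy
    records: list = field(default_factory=list)

    def contribute(self, record: dict, contributor_did: str) -> bool:
        """Accept a contribution only when the bound policy admits it."""
        if self.policy.accepts(record):
            self.records.append({**record, "contributor": contributor_did})
            return True     # the contributor is now eligible for a reward
        return False

# Example: publish an asset and submit one compliant contribution.
asset = DataAsset("llm-local-v1", "did:example:owner", 10**15,
                  FilterPolicy({"CC-BY", "CC0"}, max_record_bytes=1 << 20))
asset.contribute({"license": "CC0", "payload": b"local corpus text"},
                 "did:example:contributor")
</syntaxhighlight>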
A public event in June 2023 could demonstrate to the general public that blockchain mining can now be redirected to continually execute machine-learning computation on material submitted by anyone with a Decentralized Identity (DID) account who is willing to accept the duties stated in the smart contracts. The result would be a workflow and a global marketplace for exchanging material, with computing resources directed toward the productive development of new and evolving knowledge. A large-scale public event would also help teach the audience about a brand-new industry, inviting further examination of the combined opportunities offered by Web3 and AIGC.
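In the same illustrative style, the following sketch models the proposed workflow: a former mining node with a DID account claims a machine-learning task whose duties and reward are stated by a smart contract. The Task and Marketplace names, the claim rule, and the example URI are assumptions for illustration, not an existing protocol or API.

<syntaxhighlight lang="python">
# Illustrative workflow: a DID holder claims a machine-learning task whose
# duties and reward are stated by a smart contract. All names are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    task_id: str
    dataset_uri: str        # material published under a smart contract
    reward_wei: int         # payout on verified completion, settled on-chain
    claimed_by: Optional[str] = None

class Marketplace:
    def __init__(self) -> None:
        self.tasks = {}

    def post(self, task: Task) -> None:
        self.tasks[task.task_id] = task

    def claim(self, task_id: str, worker_did: str) -> Task:
        """A worker with a DID account accepts the contract's stated duties."""
        task = self.tasks[task_id]
        if task.claimed_by is not None:
            raise ValueError("task already claimed")
        task.claimed_by = worker_did
        return task

# Example: a former mining node redirects its cycles to a training task.
market = Marketplace()
market.post(Task("t1", "ipfs://example-cid", reward_wei=10**15))
job = market.claim("t1", worker_did="did:example:worker")
</syntaxhighlight>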
- edited by Grammarly and Quillbot
Also related: [[A Global Network of Personal Knowledge Container]]