Reinforcement Learning from Human Feedback
Reinforcement Learning from Human Feedback, abbreviated RLHF, is a training approach in which human feedback, typically expressed as preference judgments between model outputs, supplies the reward signal for further training. In this way, human annotators make sense of, or give social meaning to, data that a model cannot infer on its own (a minimal sketch of the reward-modelling step appears below).

One way to think of RLHF at scale is to include the entire history of human annotation and data editing, especially when a web-based interface gathers these inputs on mobile, networked devices. If this history is arranged and stored against the block numbers of public blockchains, the record of all RLHF operations can be integrated globally to reflect statistical properties of collective human intent (see the second sketch below). This is where individual and collective accountability can be operationalized.
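At its core, RLHF trains a reward model from pairwise human preferences and then optimizes a policy against that learned reward. The sketch below shows only the reward-modelling step, assuming PyTorch and a Bradley-Terry pairwise loss; the network size, feature dimension, and synthetic preference pairs are illustrative assumptions, not part of this article.

```python
# Minimal sketch of the reward-modelling step in RLHF (assumes PyTorch).
# All dimensions and the synthetic data below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Scores a response representation; higher means more preferred."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

def preference_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry pairwise loss: push the chosen response's score
    # above the rejected response's score.
    return -F.logsigmoid(r_chosen - r_rejected).mean()

model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Synthetic stand-in for human preference pairs (chosen vs. rejected).
chosen = torch.randn(64, 16) + 0.5
rejected = torch.randn(64, 16)

for step in range(100):
    loss = preference_loss(model(chosen), model(rejected))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In a full pipeline, the trained reward model would then score candidate outputs during a policy-optimization step such as PPO.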
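The anchoring idea can be made concrete as a tamper-evident event log. The sketch below is a hypothetical illustration: each annotation event is content-addressed with a hash, chained to the previous event, and tagged with the block number of a public blockchain observed at submission time, so events can later be placed in a global ordering. All names and fields are assumptions for illustration; no real blockchain API is invoked.

```python
# Hedged sketch of anchoring annotation history to public-blockchain block
# numbers. AnnotationEvent and its fields are hypothetical illustrations.
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class AnnotationEvent:
    annotator_id: str   # who provided the feedback
    payload: str        # the annotation or edit itself
    prev_digest: str    # hash of the previous event, forming a chain
    chain: str          # which public blockchain serves as a shared clock
    block_number: int   # block height observed when the event was recorded

    def digest(self) -> str:
        # Content-address the event so any later tampering is detectable.
        data = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(data).hexdigest()

# Example: two events anchored against hypothetical Ethereum block heights.
e1 = AnnotationEvent("alice", "label: positive", "0" * 64, "ethereum", 19_000_001)
e2 = AnnotationEvent("bob", "edit: fixed typo", e1.digest(), "ethereum", 19_000_007)
print(e2.digest())
```

Because each event commits to its predecessor's hash and to an externally observable block number, independent parties can verify both the internal order of the feedback history and roughly when each event occurred.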