Direct and Inverse Distribution of Neural Networks.
| dc.contributor.author | Sultonov Sarvar Mahammadodilovich | |
| dc.date.accessioned | 2025-12-28T13:47:38Z | |
| dc.date.issued | 2023-11-28 | |
| dc.description.abstract | The most common method of training a neural network is to propagate the observation vectors through the network successively and to determine the weight coefficients so that the output values are as close as possible to the required data. This is known as supervised learning (training with a teacher), because for each observation vector the desired result is known, and the network is accordingly required to produce an output close to that desired value. It is possible to construct an algorithm that finds the weight coefficients in the best way (with maximum speed, and with outputs maximally close to the required result). | |
| dc.format | application/pdf | |
| dc.identifier.uri | https://periodica.org/index.php/journal/article/view/671 | |
| dc.identifier.uri | https://asianeducationindex.com/handle/123456789/6915 | |
| dc.language.iso | eng | |
| dc.publisher | Periodica Journal | |
| dc.relation | https://periodica.org/index.php/journal/article/view/671/572 | |
| dc.rights | https://creativecommons.org/licenses/by-nc/4.0 | |
| dc.source | Periodica Journal of Modern Philosophy, Social Sciences and Humanities; Vol. 24 (2023): PERIODICAL; 48-51 | |
| dc.source | 2720-4030 | |
| dc.subject | Recurrent | |
| dc.subject | Neural Networks | |
| dc.subject | hidden layers | |
| dc.title | Direct and Inverse Distribution of Neural Networks. | |
| dc.type | info:eu-repo/semantics/article | |
| dc.type | info:eu-repo/semantics/publishedVersion | |
| dc.type | Peer-reviewed Article |
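As a minimal illustration of the training scheme the abstract describes (not code from the article itself): propagate each observation vector forward, compare the output with the desired value, and adjust the weight coefficients so the outputs move closer to the targets. The toy data, learning rate, and single-layer model below are assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy supervised-learning data: observation vectors X with known
# desired results y (here y = 2*x1 - x2, noise-free).
X = rng.normal(size=(100, 2))
y = 2 * X[:, 0] - X[:, 1]

w = np.zeros(2)   # weight coefficients to be determined
lr = 0.1          # learning rate; a tuning choice, not from the text

for _ in range(200):
    out = X @ w                        # forward ("direct") pass
    grad = X.T @ (out - y) / len(y)    # backward ("inverse") pass: MSE gradient
    w -= lr * grad                     # move outputs closer to desired values

print(np.round(w, 3))
```

Run repeatedly, the weights converge toward the generating coefficients [2, -1]; the same forward/backward loop, applied layer by layer, is the backpropagation procedure the article's title refers to.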
File: mahammadodilovich_2023_direct_and_inverse_distribution_of_neura.pdf (526.77 KB, Adobe Portable Document Format)