Implementation of the Hadoop Framework in a Cloud Environment and Terabyte Sort Experiments (original title: Hadoop Çatısının Bulut Ortamında Gerçeklenmesi ve Terabyte Sort Deneyleri)
The Hadoop framework employs the MapReduce programming paradigm to process big data by distributing it across a cluster and aggregating the results. MapReduce is one of the methods used to process big data hosted on large clusters. In this method, jobs are divided into small pieces and distributed over...
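The abstract only sketches the programming model. As a concrete illustration of how a MapReduce job splits work into a map phase executed on each data split and aggregates the results in a reduce phase, a minimal Hadoop word-count job is sketched below. This is a generic example under the standard Hadoop MapReduce API, not the Terabyte Sort code used in the paper's experiments, which is not reproduced in this record.

```java
// Minimal Hadoop MapReduce word-count job, illustrating the split / distribute /
// aggregate model described in the abstract. Generic example only; not the
// Terabyte Sort benchmark code from the paper.
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: each mapper processes one split of the input and emits (word, 1) pairs.
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce phase: all values for the same word are aggregated into a single count.
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // local pre-aggregation on each node
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

The terabyte sort experiments named in the title are, on Hadoop, typically run with the TeraGen/TeraSort/TeraValidate examples bundled in hadoop-mapreduce-examples rather than with hand-written jobs like the one above.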
| Main Authors: | G. Ozen, R. Sultanov |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Kyrgyz Turkish Manas University, 2015-05-01 |
| Series: | MANAS: Journal of Engineering |
| Online Access: | https://dergipark.org.tr/en/download/article-file/575941 |
Similar Items

- Comparison of Hadoop Mapreduce and Apache Spark in Big Data Processing with Hgrid247-DE
  by: Firmania Dwi Utami, et al.
  Published: (2024-11-01)
- Big Data Analytics for Healthcare Industry: Impact, Applications, and Tools
  by: Sunil Kumar, et al.
  Published: (2019-03-01)
- Replication-Based Query Management for Resource Allocation Using Hadoop and MapReduce over Big Data
  by: Ankit Kumar, et al.
  Published: (2023-12-01)
- Enhancing Medical Big Data Analytics: A Hadoop and FP-Growth Algorithm Approach for Cloud Computing
  by: Rong Hu, et al.
  Published: (2025-01-01)
- A Mini-Review of Machine Learning in Big Data Analytics: Applications, Challenges, and Prospects
  by: Isaac Kofi Nti, et al.
  Published: (2022-06-01)