Project hadoop jobs

    2,000 project hadoop jobs found, prices in EUR

    This is strictly a WFO job. Only local candidates from Chennai, or those ready to relocate to Chennai, should apply. Duration: 6 months plus. Role 1: Big Data, Hadoop, Spark, Airflow, CI/CD, Python (scripting), DevOps; 3-8 years' experience. Role 2: Data Product Manager - Tableau, SQL queries, with managerial skills; 5-8 years' experience. Role 3: BI Engineer - SQL, SQL Server, ETL, Tableau, data modelling, scripting, Agile, Python; 5-8 years' experience. Role 4: Data Engineer - Big Data, Hive, Spark, Python; 3-7 years' experience. Very good communication skills are mandatory. Must be ready to work from our office in Chennai. Timings: 9 hours, IST business hours, Monday - Friday.

    €1247 (Avg Bid)
    2 bids

    ...tasks are 50% ETL, plus 10% ML and 40% DS. Stack: SQL + PL/SQL (Greenplum, Teradata, MSSQL, MySQL, SQLite, …); DWH + ETL work with data warehouses (Hadoop Hive, Impala, Spark, Oozie, …); Python (pandas, numpy, pyspark, …); Machine Learning. What you will do: refactoring machine-learning model prototypes from the Data Science team - adapting the code to the model-delivery pipeline for production use, with model results and evaluations stored in the Greenplum warehouse (MLOps); designing and developing a corporate analytics platform; developing batch and near-real-time analytics processes; developing, supporting, and optimizing ETL on the Greenplum and Hadoop platforms; keeping technical documentation up to date.

    €2192 (Avg Bid)
    6 bids

    WordPress site build + customization - so PHP, Node.js, Java, .NET, Hadoop?

    €1108 (Avg Bid)
    118 bids

    /fs needs reconfiguration for the HDFS layout

    €23 / hr (Avg Bid)
    8 bids

    Includes Java in the coding part; beyond that, we require experience in AWS, Hadoop, and Spark.

    €5 / hr (Avg Bid)
    6 bids

    Hi, I am looking for a data analyst; job timing is US hours. The work is US healthcare claims, providing required support in Excel, SQL, Db2, Hadoop, and Informatica (basics). Daily one or two hours.

    €444 (Avg Bid)
    29 bids

    ...operations of our data scientists. The data engineer will be responsible for employing machine learning techniques to create and sustain structures that allow for the analysis of data, while remaining familiar with dominant programming and deployment strategies in the field. During various aspects of this process, you should collaborate with coworkers to ensure that your approach meets the needs of each project. To ensure success as a data engineer, you should demonstrate flexibility, creativity, and the capacity to receive and utilize constructive criticism. A formidable data engineer will demonstrate insatiable curiosity and outstanding interpersonal skills. Data Engineer Responsibilities: Liaising with coworkers and clients to elucidate the requirements for each task. Concep...

    €40 / hr (Avg Bid)
    18 bids

    Design and creation of an OpenStack infrastructure to implement a Big Data platform based on Hadoop/Spark, as well as its implementation. The project requires three profiles: OpenStack Administrator, OpenStack Engineer, and IT Catalogue Developer. The work will be carried out mostly in Madrid; more details in the attached file.

    €31769 (Avg Bid)
    4 bids

    Need a technical author who has experience in writing on topics like AWS Azure GCP DigitalOcean Heroku Alibaba Linux Unix Windows Server (Active Directory) MySQL PostgreSQL SQL Server Oracle MongoDB Apache Cassandra Couchbase Neo4J DynamoDB Amazon Redshift Azure Synapse Google BigQuery Snowflake SQL Data Modelling ETL tools (Informatica, SSIS, Talend, Azure Data Factory, etc.) Data Pipelines Hadoop framework services (e.g. HDFS, Sqoop, Pig, Hive, Impala, Hbase, Flume, Zookeeper, etc.) Spark (EMR, Databricks etc.) Tableau PowerBI Artificial Intelligence Machine Learning Natural Language Processing Python C++ C# Java Ruby Golang Node.js JavaScript .NET Swift Android Shell scripting Powershell HTML5 AngularJS ReactJS VueJS Django Flask Git CI/CD (Jenkins, Bamboo, TeamCity, Octopus Depl...

    €32 (Avg Bid)
    23 bids

    We are a leading training center, Ni analytics india, looking for an experienced Data Engineer to train our students in online live classes on weekdays/weekends. The ideal candidate should have 4 to 8 years of data engineering work experience with Bigdata Hadoop, Spark, PySpark, Kafka, Azure, etc. We request interested candidates within our budget to respond, as we get regular enquiries from individuals and corporate firms. This is an urgent requirement; kindly respond quickly. Thank you.

    €344 (Avg Bid)
    4 bids

    ...disk volume of a powered-down VM, causing a vdfs missing-file error. Need to figure out how to recover the missing volume, if at all possible. Also, there should be an old backup of the VM if we can't fix it, but we need to try the recovery first. Tasks: 1. Recover the volume on the VM. 2/3. Move VM backups/copies from the 4 existing VMs to the new 4 TB HDD drive (currently unmounted). These 4 VMs host the 4 nodes of a Hadoop CDH cluster environment, so the VMs can have their disk partitions safely expanded. Currently they share HDDs, so they are limited in size. 4. In those existing 4 VMs, maintain the existing partitions and expand storage to utilize the full capacity of one 4 TB drive per VM (4x4 TB HDDs, 1 mounted to each VM). There should currently be 4 partitions per ...

    €78 (Avg Bid)
    9 bids

    ...volume of a powered-down VM; obviously that does not end well. Need to figure out how to recover the missing volume, if at all possible. Also, there should be an old backup of the VM if we can't fix it, but we need to try the recovery first. Tasks: 1. Recover the volume on the VM. 2/3. Move VM backups/copies from the 4 existing VMs to the new 4 TB HDD drive (currently unmounted). These 4 VMs host the 4 nodes of a Hadoop CDH cluster environment, so the VMs can have their disk partitions safely expanded. Currently they share HDDs, so they are limited in size. 4. In those existing 4 VMs, maintain the existing partitions and expand storage to utilize the full capacity of one 4 TB drive per VM (4x4 TB HDDs, 1 mounted to each VM). There should currently be 4 partitions pe...

    €30 / hr (Avg Bid)
    5 bids

    ...volume of a powered-down VM; obviously that does not end well. Need to figure out how to recover the missing volume, if at all possible. Also, there should be an old backup of the VM if we can't fix it, but we need to try the recovery first. Tasks: 1. Recover the volume on the VM. 2/3. Move VM backups/copies from the 4 existing VMs to the new 4 TB HDD drive (currently unmounted). These 4 VMs host the 4 nodes of a Hadoop CDH cluster environment, so the VMs can have their disk partitions safely expanded. Currently they share HDDs, so they are limited in size. 4. In those existing 4 VMs, maintain the existing partitions and expand storage to utilize the full capacity of one 4 TB drive per VM (4x4 TB HDDs, 1 mounted to each VM). There should currently be 4 partitions pe...

    €80 (Avg Bid)
    3 bids

    ...volume of a powered-down VM; obviously that does not end well. Need to figure out how to recover the missing volume, if at all possible. Also, there should be an old backup of the VM if we can't fix it, but we need to try the recovery first. Tasks: 1. Recover the volume on the VM. 2/3. Move VM backups/copies from the 4 existing VMs to the new 4 TB HDD drive (currently unmounted). These 4 VMs host the 4 nodes of a Hadoop CDH cluster environment, so the VMs can have their disk partitions safely expanded. Currently they share HDDs, so they are limited in size. 4. In those existing 4 VMs, maintain the existing partitions and expand storage to utilize the full capacity of one 4 TB drive per VM (4x4 TB HDDs, 1 mounted to each VM). There should currently be 4 partitions pe...

    €21 (Avg Bid)
    2 bids

    Need a Java expert with experience in Distributed Systems for Information Systems Management. It will involve the use of MapReduce and Spark, plus Linux and Unix commands. Part 1: Execute a MapReduce job on the cluster of machines; requires use of Hadoop classes. Part 2: Write a Java program that uses Spark to read The Tempest and perform various calculations. The name of the program is TempestAnalytics.java. I will share full details in chat; make your bids.

    €623 (Avg Bid)
    7 bids

    Need a Java expert with experience in Distributed Systems for Information Systems Management. It will involve the use of MapReduce and Spark, plus Linux and Unix commands. Part 1: Execute a MapReduce job on the cluster of machines; requires use of Hadoop classes. Part 2: Write a Java program that uses Spark to read The Tempest and perform various calculations. The name of the program is TempestAnalytics.java. I will share full details in chat; make your bids.

    €828 (Avg Bid)
    6 bids
    Data Analyst (Closed)

    Digital Analyst: Job Responsibilities: The Analyst will work with lead analysts to deliver analytics by a. Building analytics products to deliver automated, scaled insights in a self-serve manner (on the PBI/Tableau platform) b. Assisting with complex data pulls and data manipulation to develop analytics dashboards or conduc...understanding of digital and data analytics • Excellent written, oral, and communication skills • Strong analytical skills with the ability to collect, organize, analyse, and disseminate significant amounts of information with attention to detail and accuracy • Keen eye for UI on PBI/Tableau – can recommend designs independently • Can handle complicated data transformations on DBs & Big Data (Hadoop) • Familiar...

    €11 (Avg Bid)
    2 bids
    Hive Projects (Closed)

    A mini project, with report and source code, on any topic in Hive and Hadoop.

    €61 (Avg Bid)
    3 bids
    Digital Analyst (Closed)

    Job Responsibilities: The Analyst will work with lead analysts to deliver analytics by a. Building analytics products to deliver automated, scaled insights in a self-serve manner (on the PBI/Tableau platform) b. Assisting with complex data pulls and data manipulation to develop analytics dashboards or conduct analytics deep di...understanding of digital and data analytics • Excellent written, oral, and communication skills • Strong analytical skills with the ability to collect, organize, analyse, and disseminate significant amounts of information with attention to detail and accuracy • Keen eye for UI on PBI/Tableau – can recommend designs independently • Can handle complicated data transformations on DBs & Big Data (Hadoop) • Familiarit...

    €26 (Avg Bid)
    6 bids
    Hadoop EMR setup (Closed)

    Hadoop EMR setup and data migration from Azure to AWS.

    €18 / hr (Avg Bid)
    11 bids
    Hadoop Expert (Closed)

    Looking for a person who can help me install Hadoop.

    €5 / hr (Avg Bid)
    2 bids

    Linux + Hadoop cloud migration: Azure data and on-prem data (Cloudera Hadoop) to AWS Cloudera. Azure, AWS, DevOps; database migration from on-prem to AWS.

    €18 / hr (Avg Bid)
    10 bids

    ※ Please see the attached, and offer your price quote with questions [price and time are negotiable] ※ Will need your help from end of Dec ~ Jan, 2023. 1) Manual: creating a development and installation manual for the overall service-implementation guideline using the HDFS – Impala API. > All details must be provided: command/option/setting file/config, etc. > We will use your manual to create our own HDFS-based solution. > Additional two to four weeks of take-over time [we can ask some questions when the process does not work under the manual process]. 2) Consulting: providing solutions for the heavy-load section (date inter delay) when data is inserted through HDFS. > Data should be processed in 3 minutes, but sometimes it takes more time. > Solutions for how we can remove or de...

    €935 (Avg Bid)
    7 bids

    Hadoop, Linux, Ansible, cloud, and good communication skills required.

    €7 / hr (Avg Bid)
    1 bid

    Hello All, The objective of this subject is to learn how to design a distributed solution to a Big Data problem with the help of MapReduce and Hadoop. In fact, MapReduce is a software framework for spreading a single computing job across multiple computers. It is assumed that these jobs take too long to run on a single computer, so you run them on multiple computers to shorten the time. Please stay away, auto bidders. Thank You

    €94 (Avg Bid)
    4 bids
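The MapReduce model these postings describe can be sketched as a single-process simulation of the map, shuffle, and reduce phases (a toy illustration, not Hadoop itself; all names here are hypothetical):

```python
from collections import defaultdict

def map_phase(document):
    # Map: emit a (word, 1) pair for every word in one input split
    for word in document.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: group all values by key, as Hadoop does between phases
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the emitted counts for each word
    return {word: sum(counts) for word, counts in groups.items()}

splits = ["to be or not to be", "to strive to seek"]
pairs = [pair for split in splits for pair in map_phase(split)]
counts = reduce_phase(shuffle(pairs))
print(counts["to"])  # 4 ("to" appears twice in each split)
```

On a real cluster, each `map_phase` call would run on a different node against its own input split, which is what shortens the wall-clock time for large inputs.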

    Need big data and Hadoop tools, some of them like Spark SQL, Hadoop, Hive, Databricks, and data lakes.

    €28 (Avg Bid)
    6 bids

    Hello All, The objective of this subject is to learn how to design a distributed solution to a Big Data problem with the help of MapReduce and Hadoop. In fact, MapReduce is a software framework for spreading a single computing job across multiple computers. It is assumed that these jobs take too long to run on a single computer, so you run them on multiple computers to shorten the time. Please stay away, auto bidders. Thank You

    €98 (Avg Bid)
    5 bids

    Require a developer with 2 to 3 years of good experience in DevOps support, which includes Hadoop services, Windows, Linux, and Ansible, with a little cloud exposure.

    €7 / hr (Avg Bid)
    7 bids

    Hello All, The objective of this subject is to learn how to design a distributed solution to a Big Data problem with the help of MapReduce and Hadoop. In fact, MapReduce is a software framework for spreading a single computing job across multiple computers. It is assumed that these jobs take too long to run on a single computer, so you run them on multiple computers to shorten the time. Please stay away, auto bidders. Thank You

    €115 (Avg Bid)
    4 bids

    Hello All, The objective of this subject is to learn how to design a distributed solution to a Big Data problem with the help of MapReduce and Hadoop. In fact, MapReduce is a software framework for spreading a single computing job across multiple computers. It is assumed that these jobs take too long to run on a single computer, so you run them on multiple computers to shorten the time. Please stay away, auto bidders. Thank You

    €131 (Avg Bid)
    6 bids

    Hello All, The objective of this subject is to learn how to design a distributed solution to a Big Data problem with the help of MapReduce and Hadoop. In fact, MapReduce is a software framework for spreading a single computing job across multiple computers. It is assumed that these jobs take too long to run on a single computer, so you run them on multiple computers to shorten the time. Please stay away, auto bidders. Thank You

    €91 (Avg Bid)
    3 bids

    The objective of this assignment is to learn how to design a distributed solution to a Big Data problem with the help of MapReduce and Hadoop. In fact, MapReduce is a software framework for spreading a single computing job across multiple computers. It is assumed that these jobs take too long to run on a single computer, so you run them on multiple computers to shorten the time.

    €112 (Avg Bid)
    16 bids

    1. Implement the straggler solution using the approach below a) Develop a method to detect slow tasks (stragglers) in the Hadoop MapReduce framework using Progress Score (PS), Progress Rate (PR) and Remaining Time (RT) metrics b) Develop a method of selecting idle nodes to replicate detected slow tasks using the CPU time and Memory Status (MS) of the idle nodes. c) Develop a method for scheduling the slow tasks to appropriate idle nodes using CPU time and Memory Status of the idle nodes. 2. A good report on the implementation with graphics 3. A recorded execution process Use any certified data to test the efficiency of the methods

    €174 (Avg Bid)
    Urgent
    11 bids
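The detection step in 1a) above can be sketched with LATE-style estimates, assuming PR = PS / elapsed time and RT = (1 - PS) / PR; the `slow_factor` threshold and all names and values here are illustrative, not the poster's specification:

```python
def remaining_time(progress_score, elapsed):
    # PR = PS / elapsed; RT = (1 - PS) / PR  (LATE-style estimate)
    progress_rate = progress_score / elapsed
    return (1.0 - progress_score) / progress_rate

def find_stragglers(tasks, slow_factor=0.5):
    # tasks: {task_id: (progress_score, elapsed_seconds)}
    rates = {tid: ps / t for tid, (ps, t) in tasks.items()}
    mean_rate = sum(rates.values()) / len(rates)
    # Flag a task when its progress rate falls below
    # slow_factor * the mean progress rate of its peers.
    return [tid for tid, r in rates.items() if r < slow_factor * mean_rate]

tasks = {"t1": (0.9, 60), "t2": (0.85, 60), "t3": (0.2, 60)}
print(find_stragglers(tasks))        # ['t3']
print(remaining_time(0.2, 60))       # 240.0 seconds estimated for t3
```

The second part of the posting (selecting idle nodes by CPU time and Memory Status) would then rank candidate nodes and schedule a speculative copy of each flagged task.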
    Stack : DATA ENG (Closed)

    Stack : DATA ENG 1. AWS 2. SPARK / HADOOP 3. PYTHON 4. Terraform

    €12 / hr (Avg Bid)
    3 bids

    I have an input text file and mapper and reducer files which output the total count of each word in the text file. I would like the mapper and reducer to output only the top 20 words (and their counts) with the highest count. The files use … and I want to be able to run them in Hadoop.

    €129 (Avg Bid)
    12 bids
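One common way to get a top-20 list (an assumption about the design, not the poster's existing code) is a post-processing pass over the word-count output, typically run as a second job with a single reducer. The helper below is hypothetical and assumes the usual tab-separated `word<TAB>count` record format:

```python
import heapq
from collections import Counter

def top_n_from_counts(lines, n=20):
    # lines: iterable of "word\tcount" records, as produced by a
    # word-count reducer; returns the n words with the highest counts.
    counts = Counter()
    for line in lines:
        word, count = line.rsplit("\t", 1)
        counts[word] += int(count)
    return heapq.nlargest(n, counts.items(), key=lambda kv: kv[1])

records = ["the\t12", "hadoop\t7", "spark\t9", "a\t3"]
print(top_n_from_counts(records, n=2))  # [('the', 12), ('spark', 9)]
```

Funneling the second pass through a single reducer works because the first job has already shrunk the data to one record per distinct word.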

    I need help from a freelancer with strong knowledge of StreamSets Data Collector and/or Flink. Needed: a freelancer with experience in Flink, Hadoop, and StreamSets Data Collector for about 10 hours of consultation. 1. I want to extract data from a DB and generate aggregation files every 15 minutes, ensuring there is no missing data between intervals while the query is running, using StreamSets. 2. Besides that, looking for Flink options to extract data from Kafka using tumbling aggregation intervals.

    €10 / hr (Avg Bid)
    2 bids
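The 15-minute tumbling aggregation in point 2 can be sketched in plain Python; this is a model of what a tumbling window computes (each event falls into exactly one fixed-width, non-overlapping window), not Flink's API, and the names are hypothetical:

```python
from collections import defaultdict

def tumbling_aggregate(events, window_seconds=900):
    # events: (epoch_seconds, value) pairs; 900 s = 15-minute windows.
    # Each event belongs to exactly one window: the one starting at
    # floor(timestamp / width) * width.
    windows = defaultdict(float)
    for ts, value in events:
        window_start = (ts // window_seconds) * window_seconds
        windows[window_start] += value
    return dict(windows)

events = [(0, 1.0), (100, 2.0), (900, 5.0), (1700, 1.0)]
print(tumbling_aggregate(events))  # {0: 3.0, 900: 6.0}
```

In a real pipeline the "no missing data between intervals" requirement hinges on event time plus watermarking, so that late records are assigned to the window of their timestamp rather than the window in which they arrive.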

    I need help from a freelancer with strong knowledge of StreamSets Data Collector and/or Flink. Needed: a freelancer with experience in Flink, Hadoop, and StreamSets Data Collector for about 10 hours of consultation. 1. I want to extract data from a DB and generate aggregation files every 15 minutes, ensuring there is no missing data between intervals while the query is running, using StreamSets. 2. Besides that, looking for Flink options to extract data from Kafka using tumbling aggregation intervals.

    €16 / hr (Avg Bid)
    2 bids

    I need help from a freelancer with strong knowledge of StreamSets Data Collector and/or Flink. Needed: a freelancer with experience in Flink, Hadoop, and StreamSets Data Collector for about 10 hours of consultation. 1. I want to extract data from a DB and generate aggregation files every 15 minutes, ensuring there is no missing data between intervals while the query is running, using StreamSets. 2. Besides that, looking for Flink options to extract data from Kafka using tumbling aggregation intervals. Please contact me asap. Thanks, David

    €17 / hr (Avg Bid)
    21 bids

    I have some problems to be solved using Hadoop.

    €11 (Avg Bid)
    1 bid

    Hi, we are looking for a person experienced in Hadoop. Need to give job support by connecting remotely and taking mouse control, for an Indian professional living in the US. USD 300/month, 2 hrs/day, 5 days/week. Timings: anytime after 7 PM IST will work, or any 2 hours before 10 AM IST.

    €234 (Avg Bid)
    1 bid

    Someone who has experience with Spark, Hadoop, Hive, and Kafka processing with Azure.

    €383 (Avg Bid)
    15 bids

    Someone who has experience with Spark, Hadoop, Hive, and Kafka processing with Azure.

    €121 (Avg Bid)
    8 bids

    ...ORDER BY AVG(d_year) Consider a Hadoop job that processes an input data file of size equal to 179 disk blocks (179 different blocks, not considering HDFS replication factor). The mapper in this job requires 1 minute to read and fully process a single block of data. Reducer requires 1 second (not minute) to produce an answer for one key worth of values and there are a total of 3000 distinct keys (mappers generate a lot more key-value pairs, but keys only occur in the 1-3000 range for a total of 3000 unique entries). Assume that each node has a reducer and that the keys are distributed evenly. The total cost will consist of time to perform the Map phase plus the cost to perform the Reduce phase. How long will it take to complete the job if you only had one Hadoop worker n...

    €187 (Avg Bid)
    1 bid
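For the single-worker case the posting asks about, the arithmetic works out as follows, assuming the map and reduce phases run sequentially on that one node:

```python
# Back-of-envelope cost model from the posting, one Hadoop worker:
# map cost = blocks * minutes per block; reduce cost = keys * seconds per key.
blocks = 179
map_minutes_per_block = 1
keys = 3000
reduce_seconds_per_key = 1

map_minutes = blocks * map_minutes_per_block          # 179 minutes
reduce_minutes = keys * reduce_seconds_per_key / 60   # 50 minutes
print(map_minutes + reduce_minutes)  # 229.0 minutes total
```

With more workers, the map phase scales as ceil(179 / workers) minutes, and the reduce phase divides the 3000 keys across the nodes' reducers since the keys are stated to be evenly distributed.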

    I need someone to solve the attached questions. They're about MapReduce, Hadoop, and Pig, and require Python skills as well. I attached an example of some expected solutions.

    €19 (Avg Bid)
    10 bids

    I can successfully run the MapReduce job on the server. But when I submit this job as a YARN remote client with Java (via the YARN REST API), I get the following error. I want to submit this job successfully via the remote client (YARN REST API).

    €11 (Avg Bid)
    3 bids

    Looking for a Python and Scala expert. The candidate should have knowledge of Big Data domains such as Hadoop, Spark, Hive, etc. Knowledge of Azure Cloud is a plus. Share your CV.

    €666 (Avg Bid)
    8 bids
    hadoop project -- 2 (Closed)

    Block matrix addition should be done using MapReduce.

    €56 (Avg Bid)
    2 bids
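Block matrix addition maps naturally onto MapReduce: the map phase emits each block keyed by its (row, col) position, and the reduce phase sums the blocks that share a key element-wise. Below is a single-process sketch of that scheme, not a Hadoop job; all names are hypothetical:

```python
from collections import defaultdict

def map_blocks(blocks):
    # Map: emit ((row, col), block) for every block of one input matrix
    for (i, j), block in blocks.items():
        yield ((i, j), block)

def reduce_blocks(pairs):
    # Reduce: element-wise sum of the blocks sharing a (row, col) key
    grouped = defaultdict(list)
    for key, block in pairs:
        grouped[key].append(block)
    return {
        key: [[sum(vals) for vals in zip(*rows)] for rows in zip(*blks)]
        for key, blks in grouped.items()
    }

A = {(0, 0): [[1, 2], [3, 4]]}
B = {(0, 0): [[5, 6], [7, 8]]}
pairs = list(map_blocks(A)) + list(map_blocks(B))
print(reduce_blocks(pairs)[(0, 0)])  # [[6, 8], [10, 12]]
```

Because addition only ever combines blocks at identical positions, each reducer's work is independent, which is what makes the job embarrassingly parallel.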
    hadoop project (Closed)

    Block matrix addition should be done using MapReduce.

    €178 (Avg Bid)
    11 bids
    Hadoop Development (Closed)

    Current technology stack – Apache Hadoop (version 3.1) cluster in production. Urgent deployment of a resource who has experience in Azure Data Lake migration, Hadoop, Kafka, and NiFi. Set up the minimum required services, set up the data lake on Azure, then migrate sample data that the customer will provide. We're looking for a Hadoop Developer for a three-month contract role. It's purely work from home with flexible timings. Please get back to us if you're interested. The job description is given below. 1. Current technology stack – Apache Hadoop (version 3.1) cluster in production. 2. Urgent deployment of a resource who has experience in Azure Data Lake migration, Hadoop, Kafka, and NiFi. 3. Set up the minimum r...

    €12 / hr (Avg Bid)
    4 bids