Requirement ID: 115812
Job title: Specialist
Job location: Columbus, OH
Skills required: Big Data, Hadoop, Design and Development, Data Warehousing, PySpark, Cloudera Management
Open date: 07-Apr-2021
Close date:
Job type: Contract
Duration: 2 months
Compensation: DOE
Requirement status: ---
Job interview type: ---
Email recruiter: coolsoft
Job description: Specialist - Big Data, Hadoop, Design and Development, Data Warehousing, PySpark, Cloudera Management

Start date: 5/3/2021

End date: 6/30/2021

Submission deadline: 4/12/2021

Client info: MCD


Note:

* Interviews: TEAMS

* The selected candidate does not have to be local to Columbus, OH, and can work remotely

* REPOST OF 82361 AND 83069 - DO NOT RESUBMIT CANDIDATES


Description:

The Technical Specialist will be responsible for Medicaid Enterprise data warehouse design, development,
implementation, migration, maintenance, and operation activities. The candidate will work closely with the
Data Governance and Analytics team and will be one of the key technical resources for various Enterprise
data warehouse projects: building critical data marts and ingesting data into the Big Data platform for
analytics and for exchange with State and Medicaid partners. This position is a member of Medicaid ITS and
works closely with the Business Intelligence & Data Analytics team.

Responsibilities:

Participate in team activities, design discussions, stand-up meetings, and planning reviews with the team.

Perform data analysis, data profiling, data quality checks, and data ingestion in the various layers using
Hadoop/Hive/Impala queries, PySpark programs, and UNIX shell scripts.
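
As a rough illustration of this kind of profiling work, here is a minimal PySpark sketch; the table name stage.claims and its columns are hypothetical, not from the posting:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (SparkSession.builder
             .appName("profiling_example")
             .enableHiveSupport()
             .getOrCreate())

    # Hypothetical staging table; substitute the real schema/table.
    df = spark.table("stage.claims")

    # Basic profile: row count, plus per-column null and distinct counts.
    print("rows:", df.count())
    df.select([F.sum(F.col(c).isNull().cast("int")).alias(c + "_nulls")
               for c in df.columns]).show()
    df.select([F.countDistinct(c).alias(c + "_distinct")
               for c in df.columns]).show()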

Follow the organization's coding standards document; create mappings, sessions, and workflows per the
mapping specification document.

Perform gap and impact analysis of ETL and IOP jobs for new requirements and enhancements.

Create jobs in Hadoop using Sqoop, PySpark, and StreamSets to meet business user needs.

Create mock-up data, perform unit testing, and capture the result sets for jobs developed in the lower environment.
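
As an illustration of the mock-up-and-verify step, a minimal PySpark sketch; the data and the one-line filter standing in for the job's logic are hypothetical:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("unit_test_example").getOrCreate()

    # Hypothetical mock-up input for a job under test.
    mock = spark.createDataFrame(
        [(1, "A", "2021-04-01"), (2, None, "2021-04-01")],
        ["member_id", "plan_code", "load_dt"],
    )

    # Stand-in for the job's transformation logic.
    result = mock.filter("plan_code IS NOT NULL")

    # Capture the result set and check it against the expected count.
    assert result.count() == 1
    result.show()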

Update the production support run book and Control-M schedule document for each production release.

Create and update design documents, providing a detailed description of workflows after every production release.

Continuously monitor production data loads, fix issues, update the tracker document with the issues, and identify performance problems.

Tune long-running ETL/ELT jobs by creating partitions, enabling full load, and applying other standard approaches.
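
One standard version of the partitioning approach mentioned here, sketched in PySpark with hypothetical table names:

    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("partition_tuning_example")
             .enableHiveSupport()
             .getOrCreate())

    df = spark.table("stage.claims")  # hypothetical source of a long-running load

    # Landing the data partitioned by load date lets downstream jobs prune
    # partitions (and lets incremental runs overwrite a single partition)
    # instead of scanning or rewriting the whole table.
    (df.repartition("load_dt")
       .write
       .mode("overwrite")
       .partitionBy("load_dt")
       .saveAsTable("ods.claims_by_load_dt"))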

Perform quality assurance checks and reconciliation after data loads, and communicate with the vendor to receive corrected data.

Participate in ETL/ELT code reviews and design reusable frameworks.

Create Remedy/ServiceNow tickets to fix production issues, and create support requests to deploy Database, Hadoop, Hive, Impala, UNIX, ETL/ELT, and SAS code to the UAT environment.

Create Remedy/ServiceNow tickets and/or incidents to trigger Control-M jobs for FTP and ETL/ELT jobs on an ad hoc, daily, weekly, monthly, or quarterly basis as needed.


Model and create STAGE / ODS / data warehouse Hive and Impala tables as needed.
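
For illustration, a hypothetical ODS-layer table created through spark.sql; the same pattern applies to the STAGE and warehouse layers, and Impala can query the table after its metadata is refreshed:

    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("ddl_example")
             .enableHiveSupport()
             .getOrCreate())

    # Hypothetical dimension table; all names are placeholders.
    spark.sql("""
        CREATE TABLE IF NOT EXISTS ods.member_dim (
            member_id   BIGINT,
            member_name STRING,
            eff_date    DATE
        )
        PARTITIONED BY (load_dt STRING)
        STORED AS PARQUET
    """)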

Create change requests, work plans, test results, and BCAB checklist documents for code deployments to the production environment, and perform code validation post-deployment.


Work with the Hadoop admin, ETL, and SAS admin teams on code deployments and health checks.


Create reusable UNIX shell scripts for file archival, file validation, and Hadoop workflow looping.

Create a reusable Audit Balance Control framework to capture reconciliation results, mapping parameters, and variables, serving as a single point of reference for workflows.

Create PySpark programs to ingest historical and incremental data.
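
A minimal sketch of the historical-versus-incremental pattern, assuming a load_dt column and hypothetical landing and ODS tables:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (SparkSession.builder
             .appName("ingest_example")
             .enableHiveSupport()
             .getOrCreate())

    src = spark.table("landing.claims")  # hypothetical landing-zone table

    # Incremental load: keep only rows newer than the target's high-water
    # mark; if the target is empty, fall back to a full historical load.
    max_dt = spark.table("ods.claims").agg(F.max("load_dt")).first()[0]
    delta = src.filter(F.col("load_dt") > max_dt) if max_dt is not None else src

    delta.write.mode("append").insertInto("ods.claims")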

Create Sqoop scripts to ingest historical data from the EDW Oracle database into Hadoop IOP, and create HIVE table and Impala view creation scripts for dimension tables.
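
Sqoop itself is driven from the command line; purely as a Python-side analogue of the same Oracle-to-Hadoop extract, Spark's generic JDBC reader can pull the table. All connection details below are placeholders, and the Oracle JDBC driver must be on the Spark classpath:

    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("edw_extract_example")
             .enableHiveSupport()
             .getOrCreate())

    # Placeholder connection details -- roughly what Sqoop's --connect,
    # --table and --num-mappers arguments would carry.
    df = (spark.read.format("jdbc")
          .option("url", "jdbc:oracle:thin:@//edw-host:1521/EDW")
          .option("dbtable", "EDW.CLAIMS_HIST")
          .option("user", "etl_user")
          .option("password", "***")
          .option("numPartitions", "8")
          .option("partitionColumn", "CLAIM_ID")
          .option("lowerBound", "1")
          .option("upperBound", "100000000")
          .load())

    df.write.mode("overwrite").saveAsTable("stage.claims_hist")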
 
Call 502-379-4456 Ext. 100 for more details. Please provide requirement ID 115812 when calling.
 
Other jobs in OH: Cincinnati (1), Cleveland (1), Columbus (233), Dublin (8), Solon (2)
Big Data job openings in Columbus, OH
Jobs List

Technical Specialist 3/TS3 - 83069
Create date: 10-Mar-2021
Start date: 3/15/2021

End date: 6/30/2021

Submission deadline: Monday 3/15 @ 10 am EST

Client info: MCD

Note:

* Interviews: TEAMS

Description:

The Technical Specialist will be responsible for Medicaid Enterprise data warehou....

Sr. Data Warehouse Designer - 57812
Create date: 23-Feb-2021
Job industry:
Public Sector and Government

On-site/Remote:
On-site

Description:

Responsibilities

Our client in Columbus, Ohio, is seeking a Sr. Data Warehouse Designer/Developer.

CANDIDATES MUST BE LOCAL TO CENTRAL OHIO FOR CONSIDERATION

The Sr Developer/Designer w....

Technical Specialist 3/TS3 - 82361 - SP
Create date: 22-Feb-2021
Start date: 3/15/2021

End date: 6/30/2021

Submission deadline: Wednesday 2/24 @ 10 am EST

Client info: MCD


Note:

* Interviews: TEAMS


Description:


• Participate in Team activities, Design discussions, Stand ....

Data Warehouse Designer / Developer - 57591
Create date: 22-Dec-2020
On-site/Remote:
On-site

Job industry:
Public Sector and Government

Description:

Responsibilities

Our client in downtown Columbus, Ohio, is seeking an experienced Data Warehouse Designer/Developer (Technical Specialist).

CANDIDATES MUST BE LOCAL TO CENTRAL OHIO FOR CONSIDERATI....

Data Architect - 52448
Create date: 10-Sep-2019
Description:

Applicants must have 10+ years of architecture and data experience in an enterprise-level environment.



Company & Group Details of the Data Architect:



The Data Architect will report to the Manager of Data Architecture and work closely with the Chief Data and Analytics Offic....
 
 Big Data job openings in other states
Jobs List

Data/Kafka Engineer - 21-00594
Create date: 26-Mar-2021
On-site/Remote:
On-site

Job industry:
Automotive and Transportation

Description:

Responsibilities

Job Title: Senior Data/Kafka Engineer

Location: Fremont, CA

Duration: 6+ months

Ursus is looking for a strong Data Engineer to support our client's Digital Ex....

Big Data Developer - 73185
Create date: 22-Mar-2021
Job Description:

This project is focused on a data factory: it pulls in data from other agencies, insurance, providers, etc. The team cleans up the data and prepares it for reporting, then loads it into the back end of other applications/reporting tools.

Strong Java application developer with big data.

Big Data Engineer
Create date: 22-Mar-2021
Job Description

Big Data Engineer (2 openings)

Remote

5 Months Contract

Client:

The client was built to help professionals achieve more in their careers, and every day millions of people use its products to make connections, discover opportunities, and gain insights. Its global reach means
....

Sr Hadoop Developer - 72590
Create date: 03-Mar-2021
Client info: Visa - TX

Job Description

Sr Hadoop Developer - Austin, TX

Key Responsibilities

* Minimum 5 years of experience in design and development of large-scale information products and services with the following technologies: Big Data Hadoop platform technologies, Java, Scala, RDBMS

....

Big Data Engineer - 72438
Create date: 01-Mar-2021
Note: We filled one opening last week; another is open now!

Candidate must be our W2 Employee.

Job Description:

Client: Experian

Position: Big Data Engineer

Location: Costa Mesa, CA or San Jose, CA or Allen, TX (remote to start)

Length: 12 Month +

Must have experie....
 
 Big Data job openings in OH
Jobs List

Sr SSIS Developer W/Hadoop And/Or PDW - J-11-328-373
Create date: 11-Jun-2018
Description:
Experis is seeking a Sr. SSIS Developer with Hadoop and/or PDW (Parallel Data Warehouse) experience for a 6+ month contract position with a leading client in Mayfield, OH.

**Approved 3rd Party - Subcontractors Welcomed**

TEAM: Quoting Data Services/Data Warehouse Team

ROLE: As Sr. SSIS Develop....

Developer III, Data Engineer - 33106
Create date: 27-Mar-2018
Client info: 84.51

Candidate must be our W2 Employee

Job Description :

Apex Systems, the nation's 2nd-largest IT staffing organization, has an immediate opportunity for a Developer/Data Engineer with an organization located in Cincinnati, OH. This is a long-term contract role.


Position Descriptio....
     