Hadoop Data Engineer (Local to Chicago)

Only candidates local to Chicago need apply.

Top three skills requested: proficiency with HDFS/Hive/Impala, network/security, development experience (Java), and BI experience.

For large enterprise datasets, the data engineer is responsible for curating content to support key business initiatives, working primarily with data scientists and data analysts across functional disciplines.
Participates in the acquisition, cataloging, and harmonization of information aligned with the needs of business stakeholders.
Supports data consumers in understanding information context, generating fit-for-purpose datasets, and effectively utilizing advanced analytics tools.
Key Responsibilities Include:
  • Planning, building, and running enterprise-class information management solutions across a variety of technologies (e.g., big data, master data, data profiling, batch processing, and data indexing technologies)
  • Establishing advanced search solutions that include synonym, inference, and faceted searching
  • Ensuring appropriate security and compliance policies are followed for information access and dissemination
  • Defining and applying information quality and consistency business rules throughout the data processing lifecycle
  • Collaborating with information providers to ensure quality data updates are processed in a timely fashion
  • Enforcing and expanding use of the AbbVie Common Data Model and industry-standard information descriptions (ontologies, taxonomies, vocabularies, lexicons, dictionaries, thesauri, glossaries, etc.)
  • Managing the information portal and its customer-facing resources (data catalog, data portal, etc.)

Basic Qualifications:
Bachelor's degree, related work experience, and a strong understanding of the specified functional area.
Degree in Computer Science or related discipline preferred.
Advanced degree preferred.
At least 10 years' experience in several data processing roles, such as database developer/administrator, ETL developer, data analyst, BI analytics developer, and/or solution developer of contextual search applications. Experience with Informatica tools (PowerCenter, Big Data Management, Master Data Management), Cloudera CDH and ecosystem tools (SOLR, Spark, Impala, Hive, Hue, etc.), MarkLogic, SAS Analytics, Python, R, and Amazon Web Services preferred.
Experience with Linux and/or network administration is a plus.
Estimated Salary: $20 to $28 per hour based on qualifications.
