Big Data Infrastructure Architect Engineering - San Antonio, TX at Geebo

Big Data Infrastructure Architect

Company Name:
Valleysoft Inc
Contribute to the design and development of the physical data model in cooperation with the Data Analysts
Work with the DAs, BAs and developers to understand access paths, volumes and retention requirements
Design indexing strategies, performance-driven optimizations, and denormalizations
Design and create views, triggers, stored procedures, etc.
Apply physical naming standards to all database objects
Familiarity with Erwin and experience maintaining models in a shared environment
Generate DDL from Erwin and manage versions of models and DDL
Manage release packages, including both database/schema changes and data changes
Develop repeatable release management and version control processes
Be the primary interface between the EDS team and the BXP Oracle DBAs
Apply and manage Cassandra/NoSQL data models.
Administer the DataStax Cassandra cluster: perform maintenance operations, monitor database performance, take periodic backups, and rebalance the cluster
Work with cloud operations team, developers and support staff.
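The Cassandra administration duties above (maintenance, monitoring, backups, rebalancing) are typically driven through Cassandra's `nodetool` utility. A minimal sketch of a routine maintenance pass might look like the following; the keyspace name and snapshot tag are assumptions for illustration, not from the posting:

```shell
#!/bin/sh
# Hypothetical routine maintenance pass for one Cassandra node.
# Assumes nodetool is on PATH and JMX is reachable on the default port.

# Check ring health: every node should report UN (Up/Normal).
nodetool status

# Run an anti-entropy repair on the application keyspace
# ("app_ks" is a placeholder keyspace name).
nodetool repair app_ks

# Take a named snapshot as a point-in-time backup.
nodetool snapshot -t nightly_backup app_ks

# After topology changes, reclaim data that no longer belongs
# to this node (part of rebalancing the cluster).
nodetool cleanup
```

In practice these steps would be scheduled (e.g. via cron) and wrapped with monitoring checks rather than run ad hoc.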
Qualifications
Minimum 2 years of experience with scripting languages such as shell and Python
In-depth understanding of RDBMS concepts and Oracle products including Oracle 11g.
Hands-on experience in PL/SQL programming and SQL Queries.
Experience and knowledge in Logical and physical database design.
Strong experience in indexing strategies and procedures.
Experience handling large volumes of data and with performance-tuning procedures, including SQL tuning, indexing, partitioning, and DB advisors
Experience in administering database security.
Experience installing and maintaining Oracle Real Application Clusters (RAC)
Good knowledge of and experience with migration/code-promotion procedures
Excellent technical and customer-facing skills
Experience with backup and recovery procedures such as RMAN, cloning, Data Guard, and Export/Import
High proficiency with Linux/Unix and open source tools/platforms
Knowledge with OS and Infrastructure (Cloud and On-Premise) in an operational environment
Ability to balance and tune cluster nodes for performance
Experience with network monitoring tools such as HP OpenView
Experience with various virtualization technologies, preferably VMware Hyperic
Hands-on experience with multi-terabyte Hadoop/MapReduce Infrastructure is a plus
Experience with large scale environments build and support including design, capacity planning, cluster set up, performance tuning and monitoring
Understanding of Cassandra and the Hadoop ecosystem (HDFS, MapReduce, Pig, Hadoop streaming, and Hive) is a plus
Understanding of JVM administration and crash trace analysis
Experience with applications that ingest and transform large amounts (TB) of data
Experience with NoSQL databases is a plus
Experience with production/disaster-recovery datacenter setup
Experience with Maven and Java is a plus
If you are interested, please apply directly here:
http://jobsbridge.com/JobSearch/View.aspx?JobId=30624
Estimated Salary: $20 to $28 per hour based on qualifications.
