
Senior Technical Solution Engineer, Apache Spark™ – São Paulo

Full-time
São Paulo · 18/03

Databricks Inc.

Databricks Inc. has a job opening for Senior Technical Solution Engineer, Apache Spark™ – São Paulo in São Paulo.

Position:

Senior Technical Solution Engineer, Apache Spark™ – São Paulo


Requirements:

As an Apache Spark Technical Solutions Engineer, you will provide technical and consulting solutions for challenging Apache Spark/ML/AI/Delta/Streaming/Lakehouse issues reported by our customers, and resolve challenges involving the Databricks unified analytics platform with your comprehensive technical and client-facing skills. You will assist our customers in their Databricks journey and provide the guidance and expertise they need to realize value and achieve their strategic goals using our products.

The impact you will have:

  • Perform initial-level analysis and troubleshooting of Apache Spark issues, using Spark UI metrics, DAGs, and event logs to investigate customer-reported job slowness.
  • Troubleshoot, resolve, and perform deep code-level analysis of Apache Spark to address customer issues related to Spark core internals, Spark SQL, Structured Streaming, Delta, Lakehouse, and other Databricks Runtime features.
  • Assist customers in building reproducible Apache Spark test cases and provide solutions in the areas of Spark SQL, Delta, memory management, performance tuning, streaming, data science, and data integration.
  • Participate in the Designated Solutions Engineer program and guide one or two strategic customers through their daily Apache Spark and cloud issues.
  • Coordinate with Account Executives, Customer Success Engineers, and Resident Solution Architects on customer issues and best-practice guidelines.
  • Participate in screen-sharing meetings and Slack channel conversations with team members and customers, helping drive major Apache Spark issues as an individual contributor.
  • Build an internal wiki and knowledge base with technical documentation and manuals for the support team and for customers, and help create company documentation and knowledge-base articles.
  • Coordinate with Engineering and Backline Support teams to report product defects.
  • Participate in the weekday and weekend on-call rotation, run escalations during Databricks Runtime outages and incident situations, plan day-to-day activities, and provide an escalated level of support for important customer operational issues.

What we look for:

  • 3 years of hands-on experience developing production-scale industry use cases in two or more of the following areas: Big Data, Hadoop, Apache Spark, Machine Learning, Artificial Intelligence, Streaming, Kafka, Data Science, or ElasticSearch. Apache Spark experience is mandatory.
  • Experience performance-tuning and troubleshooting Hive- and Apache Spark-based applications at production scale.
  • Hands-on experience with JVM and memory-management techniques such as garbage collection and heap/thread dump analysis.
  • Experience with SQL-based databases and data warehousing/ETL technologies such as Informatica, DataStage, Oracle, Teradata, SQL Server, and MySQL, including SCD-type use cases.
  • Experience with AWS, Azure, or GCP.



Salary:

Negotiable


Benefits:

Not specified


Some tips:

Always bring an up-to-date resume to the job interview!

Never pay any fee or buy courses or services that promise participation in a selection process or a job offer.

Do not share banking or personal details by e-mail or through websites you do not know.



APPLY
