Data Infrastructure, aka "Big Fast Data," manages data about our customers' customers: ingestion, validation, storage, and search. We are a back-end-focused team, dealing with the nuts and bolts of data storage, streaming, and search. We own the ingestion pipeline, which receives data through API endpoints or file uploads, cleanses it, and stores it in Elasticsearch. We also move a lot of streaming data between systems, using a mixture of RabbitMQ, Kafka, and Pulsar.
You can read about our ingestion architecture and get a feel for the work we do in the blog post Scaling Data Ingestion with Akka Streams and Kafka.
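To give a flavour of what that looks like in code, here is a minimal Akka Streams sketch of the consume-cleanse-store flow. It is an illustration rather than our actual pipeline: the broker address, topic name, and cleanse helper are hypothetical placeholders, and the final sink just prints where production code would index the document into Elasticsearch.

```scala
import akka.actor.ActorSystem
import akka.kafka.scaladsl.Consumer
import akka.kafka.{ConsumerSettings, Subscriptions}
import akka.stream.scaladsl.Sink
import org.apache.kafka.common.serialization.StringDeserializer

object IngestionSketch extends App {
  implicit val system: ActorSystem = ActorSystem("ingestion-sketch")

  // Hypothetical cleanse step: trim the payload and drop empties.
  // Real validation would check schema, required fields, and so on.
  def cleanse(raw: String): Option[String] = {
    val trimmed = raw.trim
    if (trimmed.nonEmpty) Some(trimmed) else None
  }

  val consumerSettings =
    ConsumerSettings(system, new StringDeserializer, new StringDeserializer)
      .withBootstrapServers("localhost:9092") // assumed broker address
      .withGroupId("ingestion-sketch")

  // Consume raw records from Kafka, cleanse them, and hand valid
  // documents to a sink. In production the sink would index into
  // Elasticsearch; here it simply prints each document.
  Consumer
    .plainSource(consumerSettings, Subscriptions.topics("raw-events")) // assumed topic
    .map(record => cleanse(record.value))
    .collect { case Some(doc) => doc }
    .runWith(Sink.foreach(doc => println(s"indexing document: $doc")))
}
```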