Role: Senior Backend Engineer / Architect with more than 6 years of experience.
Architect and backend engineer of 10+ successful, large-scale projects. Built distributed systems for data crawling, pipelining, aggregation, deduplication, etc. Author and maintainer of an open-source library that integrates Django with Google Cloud Tasks for push-type queues.
In-depth knowledge of Django and its ecosystem: DRF, auth, query/ORM optimization, ES integration, GeoDjango, multi-DB setups. Advanced knowledge of Postgres, including clean Django integration of the Postgres features Django doesn't support out of the box (PostGIS, materialized views, partial indexes, CTEs, PL/pgSQL, table inheritance, JSONB, etc.).
Built and integrated ES indices with automatic Django <-> ES synchronization over distributed queues, handling large volumes of data.
Advanced knowledge of Scrapy. Built dozens of distributed crawlers and downstream data pipelines, across a wide range of target crawlability and data cleanliness.
10+ years of experience with *nix systems.
Stack: Python, Django, DRF, Elasticsearch (ELK stack in general - Kibana, APM, Logstash), Scrapy, Postgres, PostGIS, Celery, spaCy, Google Cloud Platform, Docker, Kubernetes, AWS.
Bonus trait: +30 agility points from using a tiling window manager.
Resume: https://goo.gl/9Gx7BP
Email / Phone: In the resume.