
Uses

A curated list of tools, technologies, and hardware I use for AI engineering, distributed systems development, and high-performance backend applications.

AI & Data Engineering

  • PyTorch with Lightning for model development, from quick prototyping to production-ready AI systems.
  • Amazon Redshift and Snowflake for data warehousing and large-scale analytics, with Apache Spark for distributed data processing.
  • Subscription systems built with Stripe for billing, Apache Kafka for event streaming, and custom Go microservices for high-performance processing; a minimal producer sketch follows this list.
  • Databricks for unified analytics, dbt for data transformations, and dbt Semantic Layer for metrics.
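
To give a flavor of those Go event producers, here's a minimal sketch using the segmentio/kafka-go client. The broker address, topic name, and payload shape are placeholders, not the real pipeline:

```go
package main

import (
	"context"
	"encoding/json"
	"log"
	"time"

	"github.com/segmentio/kafka-go"
)

// SubscriptionEvent is an illustrative payload; the field names are hypothetical.
type SubscriptionEvent struct {
	CustomerID string    `json:"customer_id"`
	Plan       string    `json:"plan"`
	OccurredAt time.Time `json:"occurred_at"`
}

func main() {
	// Writer publishes to a single topic; the broker address is assumed.
	w := &kafka.Writer{
		Addr:     kafka.TCP("localhost:9092"),
		Topic:    "subscription-events",
		Balancer: &kafka.LeastBytes{},
	}
	defer w.Close()

	evt := SubscriptionEvent{CustomerID: "cus_123", Plan: "pro", OccurredAt: time.Now().UTC()}
	payload, err := json.Marshal(evt)
	if err != nil {
		log.Fatal(err)
	}

	// Key by customer ID so one customer's events stay ordered within a partition.
	if err := w.WriteMessages(context.Background(),
		kafka.Message{Key: []byte(evt.CustomerID), Value: payload},
	); err != nil {
		log.Fatal(err)
	}
}
```

Keying by customer ID keeps each customer's events ordered within a partition, which is generally what billing consumers want.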

Development

  • VS Code with Windsurf for AI development. Essential extensions: GitHub Copilot, Python, Go, Docker, and Thunder Client for API testing.
  • GoLand for complex Go projects, especially when working with large-scale microservices and performance-critical code.
  • High-performance web services in Go using Gin and Echo. Custom middleware for rate limiting, caching, and observability; see the middleware sketch after this list.
  • Database optimization with PostgreSQL (partitioning, indexing strategies), MongoDB for document storage, and Redis for caching and real-time features.
  • Cloud-native development: Kubernetes for orchestration, Docker for containerization, and Terraform for infrastructure management.
  • Observability stack: OpenTelemetry for instrumentation, Grafana for visualization, and Prometheus for metrics collection.
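
As a rough illustration of the rate-limiting middleware, here's a trimmed-down per-IP token-bucket version for Gin built on golang.org/x/time/rate. The limits are illustrative, and a real deployment would also evict idle limiters:

```go
package main

import (
	"log"
	"net/http"
	"sync"

	"github.com/gin-gonic/gin"
	"golang.org/x/time/rate"
)

// RateLimit returns Gin middleware enforcing a per-client-IP token bucket.
// Note: entries are never evicted in this sketch; production code needs cleanup of idle IPs.
func RateLimit(rps rate.Limit, burst int) gin.HandlerFunc {
	var (
		mu       sync.Mutex
		limiters = make(map[string]*rate.Limiter)
	)
	return func(c *gin.Context) {
		ip := c.ClientIP()

		mu.Lock()
		lim, ok := limiters[ip]
		if !ok {
			lim = rate.NewLimiter(rps, burst)
			limiters[ip] = lim
		}
		mu.Unlock()

		if !lim.Allow() {
			c.AbortWithStatusJSON(http.StatusTooManyRequests, gin.H{"error": "rate limit exceeded"})
			return
		}
		c.Next()
	}
}

func main() {
	r := gin.Default()
	r.Use(RateLimit(10, 20)) // ~10 requests/second with bursts of 20 per client IP
	r.GET("/healthz", func(c *gin.Context) { c.Status(http.StatusOK) })
	log.Fatal(r.Run(":8080"))
}
```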

LLM & RAG Development

  • LangChain for RAG pipelines, with custom Go services for high-performance retrieval and the Cohere Go SDK for embedding generation.
  • Vector stores: Pinecone for managed deployments, Qdrant for self-hosted with high QPS requirements, and ChromaDB for rapid prototyping.
  • OpenAI and Anthropic SDKs for LLM integration. Custom caching and rate limiting middleware in Go.
  • Document processing with Unstructured for parsing, custom Go services for high-throughput preprocessing, and Redis for caching embeddings (a caching sketch appears after this list).
  • Monitoring with LangSmith for tracing, custom Prometheus metrics for performance (see the metrics sketch below), and RAGAS for RAG evaluation.
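
The embedding cache is conceptually simple. A stripped-down sketch using go-redis looks something like this, where the key scheme and TTL are assumptions rather than the exact production values:

```go
package embedcache

import (
	"context"
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"time"

	"github.com/redis/go-redis/v9"
)

// Cache stores embedding vectors keyed by a hash of the source text,
// so identical chunks are never re-embedded.
type Cache struct {
	rdb *redis.Client
	ttl time.Duration
}

func New(addr string, ttl time.Duration) *Cache {
	return &Cache{rdb: redis.NewClient(&redis.Options{Addr: addr}), ttl: ttl}
}

// key hashes the chunk text; the "emb:" prefix is an illustrative namespace.
func key(text string) string {
	sum := sha256.Sum256([]byte(text))
	return "emb:" + hex.EncodeToString(sum[:])
}

// Get returns the cached vector and true on a hit, or nil and false on a miss.
func (c *Cache) Get(ctx context.Context, text string) ([]float32, bool, error) {
	raw, err := c.rdb.Get(ctx, key(text)).Bytes()
	if err == redis.Nil {
		return nil, false, nil
	}
	if err != nil {
		return nil, false, err
	}
	var vec []float32
	if err := json.Unmarshal(raw, &vec); err != nil {
		return nil, false, err
	}
	return vec, true, nil
}

// Put stores an embedding with the configured TTL.
func (c *Cache) Put(ctx context.Context, text string, vec []float32) error {
	raw, err := json.Marshal(vec)
	if err != nil {
		return err
	}
	return c.rdb.Set(ctx, key(text), raw, c.ttl).Err()
}
```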
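And for the custom Prometheus metrics, a minimal per-stage latency histogram might look like the following; the metric name, labels, and buckets are illustrative, and the /metrics endpoint is exposed with promhttp.Handler() as usual:

```go
package ragmetrics

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// stageLatency tracks how long each RAG pipeline stage takes.
var stageLatency = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Name:    "rag_stage_duration_seconds",
		Help:    "Latency of each RAG pipeline stage.",
		Buckets: prometheus.DefBuckets,
	},
	[]string{"stage"}, // e.g. "retrieval", "rerank", "generation"
)

func init() {
	prometheus.MustRegister(stageLatency)
}

// ObserveStage times fn and records its duration under the given stage label.
func ObserveStage(stage string, fn func() error) error {
	start := time.Now()
	err := fn()
	stageLatency.WithLabelValues(stage).Observe(time.Since(start).Seconds())
	return err
}
```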

Workspace

  • Workstation: Custom-built AMD Threadripper, 128GB RAM, RTX 4090 - Perfect for training and fine-tuning models
  • Operating System: Linux (Pop!_OS) - Perfect balance of stability and customization
  • Terminal: Alacritty with tmux and Oh My Zsh
  • Monitors: Dual 4K 32" Dell U3219Q + LG 34" Ultrawide
  • Keyboard: Keychron K8 Pro (red switches)
  • Mouse: Logitech MX Master 3
  • Laptop: MacBook Pro M4 14"
  • Cloud Platforms: AWS (primary), GCP for AI/ML workloads
  • Headphones: Audio-Technica ATH-M50x / Apple AirPods
  • Microphone: Blue Yeti