PostgreSQL-native semantic search engine with multi-modal capabilities. Add AI-powered search to your existing database without separate vector databases, vendor fees, or complex setup. Features text + image search using CLIP embeddings, native SQL joins, and 10-minute Docker deployment.
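A minimal sketch of the pattern this repo describes, assuming a pgvector-enabled PostgreSQL instance and the open-source `clip-ViT-B-32` checkpoint from sentence-transformers; the `items`/`item_embeddings` schema and connection string are illustrative, not the project's actual layout:

```python
# Hedged sketch: CLIP text embedding queried against pgvector, joined
# with ordinary relational columns in a single SQL statement.
import psycopg2
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("clip-ViT-B-32")  # joint text/image embedding space

def search(query_text: str, top_k: int = 5):
    vec = model.encode(query_text).tolist()
    conn = psycopg2.connect("dbname=appdb user=app")  # illustrative DSN
    with conn, conn.cursor() as cur:
        # <=> is pgvector's cosine-distance operator; the array parameter
        # is cast to the vector type server-side.
        cur.execute(
            """
            SELECT i.id, i.title, e.embedding <=> %s::vector AS distance
            FROM items i
            JOIN item_embeddings e ON e.item_id = i.id
            ORDER BY distance
            LIMIT %s
            """,
            (vec, top_k),
        )
        return cur.fetchall()
```

Keeping embeddings in a sibling table rather than a separate vector store is what makes the "native SQL joins" claim possible: filters, joins, and ranking all happen in one query plan.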
This project is a real-time multimodal recommendation system built on top of Reddit data. It processes image-caption pairs using CLIP to create joint embeddings, stores them in Qdrant, and supports semantic retrieval based on text or image input.
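A minimal sketch of the ingest-and-query loop such a system implies, using `qdrant-client` and the same sentence-transformers CLIP checkpoint; the `reddit_posts` collection name and payload fields are illustrative assumptions:

```python
# Hedged sketch: image-caption pairs embedded with CLIP, stored in Qdrant,
# then retrieved cross-modally with a text query.
from PIL import Image
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("clip-ViT-B-32")  # shared text/image space
client = QdrantClient(url="http://localhost:6333")

# Drops and re-creates the collection; fine for a demo, not for production.
client.recreate_collection(
    collection_name="reddit_posts",
    vectors_config=VectorParams(size=512, distance=Distance.COSINE),
)

def index_pair(point_id: int, image_path: str, caption: str):
    # Embed the image; the caption rides along as payload for display/filtering.
    vec = model.encode(Image.open(image_path))
    client.upsert(
        collection_name="reddit_posts",
        points=[PointStruct(id=point_id, vector=vec.tolist(),
                            payload={"caption": caption, "path": image_path})],
    )

def query(text: str, top_k: int = 5):
    # Text queries land in the same embedding space, so they match images.
    vec = model.encode(text)
    return client.search(collection_name="reddit_posts",
                         query_vector=vec.tolist(), limit=top_k)
```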
This repo contains an Integrated Framework for Cross-Border Digital Marketing Automation: an AI-driven system that integrates multimodal content generation, dynamic cross-platform budget allocation, and ROI prediction to address inefficiencies in multilingual creative production and delayed strategy adaptation.
A distributed system for large-scale image data processing with CLIP embeddings and FAISS indexing. Built on a five-node AlmaLinux cluster with SLURM, Ansible, and NFS. Supports modular embedding, FAISS shard merging, and capacity benchmarking with Prometheus.
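A minimal sketch of the shard-merge step named above, assuming each cluster node writes a flat inner-product FAISS index of L2-normalized CLIP vectors to shared NFS storage; the paths and 512-d dimension (CLIP ViT-B/32) are illustrative assumptions:

```python
# Hedged sketch: fold per-node flat FAISS shards into one searchable index.
import glob

import faiss

DIM = 512  # CLIP ViT-B/32 embedding size

def merge_shards(shard_glob: str, out_path: str) -> faiss.Index:
    merged = faiss.IndexFlatIP(DIM)  # inner product == cosine on unit vectors
    for path in sorted(glob.glob(shard_glob)):
        shard = faiss.read_index(path)
        # Pull the raw vectors back out of the flat shard and append them.
        vecs = shard.reconstruct_n(0, shard.ntotal)
        merged.add(vecs)
    faiss.write_index(merged, out_path)
    return merged

index = merge_shards("/nfs/shards/*.faiss", "/nfs/merged.faiss")
```

Reconstruct-and-add works for flat shards; IVF-style indexes would instead use FAISS's dedicated merge utilities, so treat this as one possible realization of the merging step, not the repo's exact mechanism.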