Fluxer is an independent, open-source instant messaging and VoIP platform designed as a community-first alternative to Discord. Built with privacy as a core principle, Fluxer shows no ads, never sells user data, and is funded entirely by its community rather than by investors. The codebase is released under the AGPLv3 license, so users can self-host their own instances while connecting with the official Fluxer clients. Currently in public beta, Fluxer has been featured on Product Hunt and combines familiar chat functionality with transparency and user ownership.
Fluxer makes it easy to start communicating with friends, groups, and communities through an intuitive interface similar to Discord.
Getting Started:
- Visit fluxer.app and click "Download" or "Open in Browser" to use the web app
- Create an account, or try the platform without providing an email at fluxer.gg/fluxer-hq
- Set up your profile with a custom avatar and display name
- Create or join communities (called "servers") using invite links
- Start messaging in text channels or join voice channels for real-time communication
- Customize your experience with themes, compact mode, and notification settings
Instant Messaging
Send private messages, chat in groups, or build communities with organized text channels supporting full Markdown formatting.
Voice & Video Calling
One-click voice and video calls with built-in screen sharing, noise suppression, echo cancellation, and multi-device support.
Open Source (AGPLv3)
Fully transparent codebase available on GitHub, allowing community contributions, audits, and self-hosted deployments.
Self-Hosting Support
Run your own Fluxer server on your hardware while using official desktop and mobile clients to connect.
Community Management Tools
Granular roles and permissions, moderation tools, transparent audit logs, and webhook/bot integration for running communities.
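Fluxer's webhook schema isn't specified here, so the snippet below is a minimal sketch assuming a Discord-style JSON payload; the field names (`content`, `username`) and the posting flow are illustrative assumptions, not a documented Fluxer API.

```python
import json

def build_webhook_payload(content, username=None):
    """Serialize a chat message for a hypothetical Discord-style webhook.

    Field names here are assumptions for illustration; consult your
    Fluxer instance's documentation for the actual schema.
    """
    payload = {"content": content}
    if username is not None:
        payload["username"] = username  # optional display-name override
    return json.dumps(payload)

# The resulting JSON string would be POSTed to a channel's webhook URL
# with a Content-Type of application/json.
print(build_webhook_payload("Build finished", username="CI Bot"))
```

A bot integration would typically wrap this in an HTTP client and send the payload whenever an external event (a CI run, an RSS update) fires.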
Customization
Upload custom emoji and stickers, create CSS themes, save media for later, and personalize display options.
Cross-Platform
Available on macOS, Windows, Linux, iOS, Android, and web browsers with seamless sync across devices.
#1 ML Engineers & Researchers
Ideal for machine learning practitioners who need maximum inference speed from HuggingFace models without manual kernel optimization.
#2 AI Infrastructure Teams
Perfect for teams deploying large-scale AI models who need to reduce GPU costs through optimized kernel performance.
#3 PyTorch Model Developers
Great for developers seeking to accelerate custom PyTorch models beyond what torch.compile can achieve.
#4 Production ML Systems
Essential for companies running inference at scale who need every millisecond of latency reduction.
#5 GPU Performance Optimization
Valuable for anyone working with NVIDIA hardware who wants to maximize tensor core utilization and memory efficiency.
What models does Forge CLI support?
Forge CLI works with any HuggingFace model ID or PyTorch model, automatically generating optimized CUDA/Triton kernels for each layer.
How much faster is Forge compared to torch.compile?
Forge achieves up to 5x faster inference than torch.compile(mode='max-autotune') while maintaining 97.6% correctness.
Does Forge work with AMD GPUs?
Currently, Forge CLI focuses on NVIDIA hardware with CUDA/Triton optimization. AMD ROCm support may be added in future updates.
What is the performance guarantee?
RightNow AI offers a full refund if Forge-optimized kernels don't outperform torch.compile benchmarks on your models.
How does the swarm optimization work?
Forge runs 32 parallel Coder+Judge agent pairs that compete to find the fastest kernel implementation, exploring strategies like tensor core utilization and kernel fusion.
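Forge's actual agent implementation is not shown here, so the following is a minimal Python sketch of the general pattern described above: N parallel proposer/scorer ("Coder"/"Judge") pairs competing, with the lowest-latency candidate winning. The strategy names, the mock scoring function, and all helper names are illustrative assumptions, not Forge internals.

```python
import random
from concurrent.futures import ThreadPoolExecutor

# Example optimization strategies a Coder agent might explore.
STRATEGIES = ["tensor_core_mma", "kernel_fusion",
              "shared_mem_tiling", "vectorized_loads"]

def coder(seed):
    """Stand-in 'Coder' agent: proposes a candidate kernel configuration."""
    rng = random.Random(seed)
    return {"strategy": rng.choice(STRATEGIES),
            "tile": rng.choice([32, 64, 128])}

def judge(candidate):
    """Stand-in 'Judge' agent: scores a candidate with a mock latency.

    A real judge would compile and benchmark the kernel; here the score
    is a deterministic pseudo-random number (lower = faster).
    """
    rng = random.Random(str(sorted(candidate.items())))
    return rng.uniform(0.5, 2.0)

def swarm_optimize(n_pairs=32):
    """Run n Coder+Judge pairs in parallel; keep the fastest candidate."""
    with ThreadPoolExecutor(max_workers=n_pairs) as pool:
        candidates = list(pool.map(coder, range(n_pairs)))
        latencies = list(pool.map(judge, candidates))
    best_latency, best_idx = min(zip(latencies, range(n_pairs)))
    return candidates[best_idx], best_latency

best_candidate, best_latency = swarm_optimize()
print(best_candidate, best_latency)
```

The parallel fan-out plus single `min` reduction is the essence of the approach: independent searches explore different strategies, and only the benchmarked winner survives.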