Roboflow’s cover photo
Roboflow

Roboflow

Software Development

Used by over 1 million engineers to deploy computer vision applications.

About us

Roboflow creates software-as-a-service products to make building with computer vision easy. Over 1,000,000 developers use Roboflow to manage image data, annotate and label datasets, apply preprocessing and augmentations, convert annotation file formats, train a computer vision model in one click, and deploy models via API or to the edge. https://roboflow.com

Website
https://roboflow.com
Industry
Software Development
Company size
51-200 employees
Headquarters
Remote
Type
Privately Held

Locations

Employees at Roboflow

Updates

  • AI is moving so fast. The CIO of a 1,000+ person company is using Lovable and Roboflow to automate 600 hours per month of work with vision AI in an industrial laboratory setting. Rodrigo Silva created a step-by-step guide for you to see how he did it: https://lnkd.in/eBepFMc7

    Roboflow is a computer vision platform that lets you organize image datasets, annotate objects, train models, and deploy inference via API or at the edge. I want to share our article published on the Roboflow blog, which presents a real case study of automating conidia counting with computer vision. In the article, we show how a manual, repetitive task prone to variation was transformed into a digital, standardized, and far more scalable process, using Roboflow and Lovable to streamline the laboratory routine and make decision-making more agile. The results go beyond operational efficiency: the case study highlights gains in standardization, traceability, reduced subjectivity, and freeing teams for higher-value work. For professionals working in quality, laboratories, industry, innovation, data, or artificial intelligence applied to business, it's a highly recommended read. https://lnkd.in/dEwYBZFk
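The core of an automated counting workflow like this reduces to filtering a model's detections by class and confidence. A minimal sketch, assuming hypothetical `(label, confidence)` detection tuples standing in for the output of a trained Roboflow object detection model:

```python
# Minimal sketch: counting conidia from model detections.
# The detection tuples below are hypothetical stand-ins for the
# (class label, confidence score) output of a trained detection model.
CONFIDENCE_THRESHOLD = 0.5

def count_conidia(detections, threshold=CONFIDENCE_THRESHOLD):
    """Count detections labeled 'conidium' above a confidence threshold."""
    return sum(
        1 for label, conf in detections
        if label == "conidium" and conf >= threshold
    )

detections = [
    ("conidium", 0.92),
    ("conidium", 0.41),   # below threshold, ignored
    ("debris", 0.88),     # wrong class, ignored
    ("conidium", 0.77),
]
print(count_conidia(detections))  # 2
```

In a real deployment the threshold would be tuned against a held-out labeled set, since it directly trades missed conidia against false counts.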

  • Roboflow reposted this

    View profile for Sylvie Goldner

    3K followers

    Physical AI improves quality control. We've all seen the news stories... Pfizer recalled 1M packs of birth control pills due to incorrect packaging. Friendly's shipped Cookies & Cream in Vanilla Bean packaging. Coca-Cola recalled 13,000+ cases of "Zero Sugar" lemonade that was actually full sugar. Same root cause every time: wrong label, wrong package. Computer vision catches label mismatches before a single unit leaves the line.
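At its simplest, the label-mismatch check the post describes is a gate comparing the label a vision model reads on each unit against the SKU being packed. A minimal sketch, assuming hypothetical `(expected_sku, detected_label)` pairs standing in for classifier or OCR output:

```python
# Minimal sketch of a label-mismatch gate on a packing line.
# Each unit yields a hypothetical (sku_on_order, label_seen) pair.
def check_unit(expected_sku: str, detected_label: str) -> bool:
    """Return True if the printed label matches the SKU being packed."""
    return expected_sku.strip().lower() == detected_label.strip().lower()

line = [
    ("zero-sugar-lemonade", "zero-sugar-lemonade"),
    ("zero-sugar-lemonade", "full-sugar-lemonade"),  # mismatch: reject
]
rejects = [i for i, (sku, label) in enumerate(line)
           if not check_unit(sku, label)]
print(rejects)  # [1]
```

The value of running this with vision is that the `label_seen` side comes from every unit on the line, not from a sampled audit after pallets are already sealed.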

  • Roboflow reposted this

    View profile for Joseph Nelson

    7K followers

    In AI, we all get by with a little help from our friend, Jensen Huang 🤝 At NVIDIA GTC this year, we'll be telling the inside story of how to work with NVIDIA to win markets like vision and physical AI. That includes how we've attracted over half the Fortune 100 in critical industries like manufacturing, supported millions of developers, and published SOTA model architectures like RF-DETR. We can't do this ourselves. Our friends at NVIDIA have been critical to the compute, inference optimization, and distribution to enable everyone to benefit from visual AI. If you miss the session, catch us at booth 1637 in the main exhibit hall. We've got live demos, consultation with engineers, and swag (+treats) till supplies run out. Alyss Noland #VisionAI #PhysicalAI #NVIDIAGTC

  • Physical and embodied AI are the next big wave. This webinar covers how robotics teams are using vision to get an edge over traditional LIDAR-centric navigation. LIDAR has been the gold standard for years. It’s reliable, it’s functional, and it gets a robot from Point A to Point B. But as global supply chains become more complex and our facilities become more crowded, visual understanding is required to create truly autonomous systems. Vision AI unlocks autonomy because when a robot can see, it understands dynamic environments. This shift is powering the next generation of factory automation. Moving robots out of structured environments and onto the floor alongside human workers. Join the live session to learn more about vision enabled robots.

    Next week, join me and Vishrut Kaushik of Peer Robotics for a live conversation about integrating computer vision with industrial robots. We'll check out their automated movement systems, how visual intelligence unlocks additional capabilities for them, and lessons learned by Vishrut's team along the way. Register here: https://luma.com/kj3h0mwv

  • RF-DETR is unlocking human motion systems thanks to its Apache 2.0 license and improved performance over YOLO models. This post from Saif K. includes an incredible open repo for you to get started with and build on top of. Open source for the win.

    View profile for Saif K.

    Human motion tracking systems typically follow a top-down pipeline (detect → crop → estimate). In practice, especially in lab setups or tools like #freemocap and #pose2sim, this means a person detector runs independently, and the pose model operates on fixed-resolution crops. There is a licensing problem here. Since YOLOv5, Ultralytics models use AGPL-3.0. If you build proprietary commercial software (e.g., motion tracking SaaS), you must either open-source your system or purchase an enterprise license. For that reason, many open-source pipelines (#rtmlib, #sports2d, #pose2sim) still rely on YOLOX (2021), the last Apache-2.0 YOLO variant.

    After ~1.5 years of using YOLOX in motion tracking setups, I’ve found it poorly suited for high-quality lab tracking. It is highly sensitive to object orientation and produces significant frame-to-frame box jitter, even for nearly static subjects. That instability propagates to pose outputs. You can smooth it offline or add causal filters online, but then you introduce lag. For real-time use cases (e.g., VR animation), that trade-off is undesirable.

    A better alternative is now available: RF-DETR (ICLR 2026, Apache-2.0) by Roboflow. In my side-by-side comparisons against YOLOX, it is noticeably more stable in low-motion scenes, with far less bounding box wobble. It also avoids NMS, eliminating manual tuning and associated false positives. While YOLOX can be faster, detector stability often matters more for downstream pose quality than raw FPS.

    To make adoption easier, I built #OpenDetect: a minimal wrapper around RF-DETR (and YOLOX) using ONNX Runtime, with CUDA, TensorRT, and Apple acceleration supported out of the box. While I focus on the person class for pose estimation, detection for all COCO classes is supported. Apache-2.0 license, free for commercial use. No obligation to make your code public. 🚀 GitHub: https://lnkd.in/dFwwg4e5 🤓 Docs: https://lnkd.in/d67qCQKj If your subject is mostly static and your keypoints still jitter, improve the detector first. A stable detector can clean up your entire pose pipeline without modifying the pose model. #computervision #poseestimation #opensource
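The frame-to-frame box jitter the post describes can be quantified directly: track the box center over consecutive frames and measure the mean displacement, which should be near zero for a static subject. A minimal sketch with hypothetical `(x1, y1, x2, y2)` boxes, not tied to any particular detector's output format:

```python
import math

def box_center(box):
    """Center point of an (x1, y1, x2, y2) bounding box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def jitter(boxes):
    """Mean frame-to-frame center displacement, in pixels.
    For a static subject, a stable detector keeps this near zero."""
    centers = [box_center(b) for b in boxes]
    steps = [math.dist(a, b) for a, b in zip(centers, centers[1:])]
    return sum(steps) / len(steps)

# Hypothetical boxes for a nearly static subject, from a stable
# detector vs. a wobbly one.
stable = [(100, 50, 200, 250), (101, 50, 201, 250), (100, 51, 200, 251)]
wobbly = [(100, 50, 200, 250), (108, 44, 210, 258), (95, 55, 193, 247)]
print(jitter(stable) < jitter(wobbly))  # True
```

Running this metric on a clip of a static subject is a quick way to compare detectors before touching the pose model, which is exactly the ordering the post recommends.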

  • RF-DETR + SAHI + ByteTrack: a great combo for tracking small objects.

    Traffic flow analytics from 20-pixel objects. In this clip the vehicles are roughly 10–20 pixels wide in a 1080p frame. At that scale, small misses in detection quickly turn into broken tracks and noisy analytics. I ran RF-DETR for detection, added SAHI tiling to recover small objects, then used ByteTrack for ID consistency. The heatmap is built from aggregated trajectories over time. The interesting part was how image tiling affected downstream tracking stability and heatmap accuracy: fewer missed detections -> fewer broken tracks -> a noticeably cleaner traffic flow heatmap. Built inside Roboflow's Workflows. The compute overhead is the main downside. Anyone testing higher-res models for real-time detections? #ComputerVision #ObjectDetection #MachineLearning #VideoAnalytics #DeepLearning
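The SAHI idea in the pipeline above is simple: instead of feeding the full 1080p frame to the detector once, slice it into overlapping tiles so that a 20-pixel vehicle occupies a much larger fraction of each model input. A minimal sketch of the slicing geometry, with illustrative tile size and overlap (not the values used in the post):

```python
# Minimal sketch of SAHI-style slicing: cover the frame with overlapping
# tiles so small objects are larger relative to each model input.
# Tile size and overlap here are illustrative defaults.
def tile_offsets(width, height, tile=640, overlap=0.2):
    """Return (x, y) top-left offsets of tiles covering the frame."""
    step = int(tile * (1 - overlap))
    xs = list(range(0, max(width - tile, 0) + 1, step)) or [0]
    ys = list(range(0, max(height - tile, 0) + 1, step)) or [0]
    # Make sure the right and bottom edges are fully covered.
    if xs[-1] + tile < width:
        xs.append(width - tile)
    if ys[-1] + tile < height:
        ys.append(height - tile)
    return [(x, y) for y in ys for x in xs]

offsets = tile_offsets(1920, 1080)
print(len(offsets))  # 8 tiles for a 1080p frame at these settings
```

Each tile is run through the detector separately and the boxes are shifted back by their tile offset and merged, which is where the compute overhead mentioned in the post comes from: one frame becomes many model inputs.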

  • Most "AI Trends" are just people guessing. We decided to look at the data instead. We analyzed 200,000 real-world vision AI projects to see what’s actually happening on the ground. This is the first large-scale analysis of its kind, offering a look at the reality of production-level vision AI that you won't find anywhere else. If you want to know what the top vision AI teams are doing, read this: https://lnkd.in/eGcaEd6P The biggest mistake in AI right now is following the hype instead of the ROI. You'll see how teams are moving beyond pilot projects to put AI into production across 10 global industries: healthcare & medicine, industrial manufacturing, agriculture, transportation, warehousing & logistics, energy & utilities, retail, consumer goods, media & entertainment, and automotive.

  • If you’ve ever struggled with keeping track of objects when they cross paths, this is for you. Trackers 2.1.0 is out and we’ve officially added ByteTrack support. Why ByteTrack? Most trackers are great when objects are moving solo, but in the real world things overlap. This is usually where tracking algorithms break and identities get swapped. ByteTrack is specifically designed to handle that occlusion, making it arguably the best choice for tracking-by-detection right now. On top of that, we paired it with RF-DETR 1.4.0, which now includes segmentation support that is currently SOTA. All entirely open source and Apache 2.0.

    RF-DETR Segmentation + ByteTrack 🔥 🔥 🔥 Last week we released RF-DETR 1.4.0 with segmentation support. New state-of-the-art pre-trained checkpoints: N, S, M, L, XL, and 2XL. RF-DETR Segmentation beat the recently released YOLO26. Today we released Trackers 2.1.0 with ByteTrack support: a fast tracking-by-detection algorithm focused on stable identities under occlusion. All of this is fully open source, under the Apache 2.0 license. Check out this demo combining both models. And leave a GitHub star under both projects! 🙏🏻 - RF-DETR: https://lnkd.in/dVQRpvWU - Trackers: https://lnkd.in/dy6tSiS8 Big thanks to the Roboflow models and open-source team, in particular Peter Robicheaux, Isaac Robinson, and Matvei Popov, who gave us RF-DETR Segmentation; Tomasz Stańczyk and Alexander Bodner, who gave us ByteTrack; and Jiri Borovec, who together with me pushes everything forward! Thanks also to the remaining contributors Kai Christensen, Anuj Khandelwal, Bruno Cardoso and James G, who delivered other features included in these releases. #opensource #programming #objectdetection #instancesegmentation #tracking
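The reason ByteTrack holds identities through occlusion is its two-stage association: tracks are first matched to high-confidence detections by IoU, and any still-unmatched tracks get a second chance against the low-confidence detections that other trackers discard (occluded objects often score low but are still there). A toy sketch of that core idea, illustrative only; use the `trackers` package for real work:

```python
# Toy sketch of ByteTrack's two-stage association. Illustrative only.
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if inter else 0.0

def associate(tracks, detections, conf_split=0.5, iou_thresh=0.3):
    """Greedy two-stage matching: high-confidence detections first,
    then low-confidence ones against still-unmatched tracks."""
    high = [d for d in detections if d["conf"] >= conf_split]
    low = [d for d in detections if d["conf"] < conf_split]
    matches, unmatched = {}, dict(tracks)
    for pool in (high, low):  # stage 1: high conf, stage 2: low conf
        for det in pool:
            if not unmatched:
                break
            best = max(unmatched, key=lambda t: iou(unmatched[t], det["box"]))
            if iou(unmatched[best], det["box"]) >= iou_thresh:
                matches[best] = det["box"]
                del unmatched[best]
    return matches

tracks = {1: (0, 0, 10, 10), 2: (50, 50, 60, 60)}
detections = [
    {"box": (1, 1, 11, 11), "conf": 0.9},    # clear view: high confidence
    {"box": (51, 49, 61, 59), "conf": 0.3},  # occluded: low conf, still kept
]
print(sorted(associate(tracks, detections)))  # [1, 2]
```

A single-stage tracker with the same 0.5 threshold would drop the second detection entirely and track 2 would go stale; keeping the low-confidence pool for a second pass is what prevents the identity swap when the object reappears.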

Similar pages

Browse jobs

Funding

Roboflow 6 total rounds

Last Round

Series B

US$ 40.0M

See more info on Crunchbase