I think a graph is the wrong abstraction for building AI agents. Just look at how incredibly hard it is to implement routing in LangGraph - conditional edges are a mess.

I built Laminar Flow to solve a common frustration with traditional workflow engines: the rigid need to predefine all node connections. Instead of static DAGs, Flow uses a dynamic task queue that lets workflows evolve at runtime.

Flow is built on 3 core principles:

* Concurrent Execution - tasks run in parallel automatically

* Dynamic Scheduling - tasks can schedule new tasks at runtime

* Smart Dependencies - tasks can await the results of previous operations

All tasks share a thread-safe context for state management. This architecture makes it surprisingly simple to implement complex patterns like map-reduce, streaming results, cycles, and self-modifying workflows - perfect for AI agents that need to make runtime decisions about their next actions.

Flow is lightweight, cleanly written, and has zero dependencies in the engine. Behind the scenes it's a ThreadPoolExecutor, which is more than enough to handle concurrent execution, considering the majority of AI workflows are IO-bound. To make it possible to wait for the completion of previous tasks, I added a semaphore for each state value: once the state is set, one permit is released on its semaphore.

The project also comes with built-in OpenTelemetry instrumentation for debugging and state reconstruction.

Give it a try here -> https://github.com/lmnr-ai/flow. Just do pip install lmnr-flow (or uv add lmnr-flow). More examples are in the readme.

Looking forward to feedback from the HN community! Especially interested in hearing about your use cases for dynamic workflows.

A couple of things are on the roadmap, so contributions are welcome:

* Async function support

* TS port

* Some consensus on how to handle task IDs when the same task is spawned multiple times
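To make the mechanism concrete, here is a minimal sketch of the idea described above - a ThreadPoolExecutor running tasks that return the names of the tasks to schedule next, plus a shared context where a per-key semaphore lets readers block until a value is set. This is an illustration of the technique, not the actual lmnr-flow API; the `Flow`, `Context`, and task names below are hypothetical.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class Context:
    """Thread-safe key/value store; get() blocks until the key has been set."""
    def __init__(self):
        self._values = {}
        self._lock = threading.Lock()
        self._sems = {}

    def _sem(self, key):
        with self._lock:
            return self._sems.setdefault(key, threading.Semaphore(0))

    def set(self, key, value):
        with self._lock:
            self._values[key] = value
        self._sem(key).release()          # release one permit: the state is ready

    def get(self, key):
        sem = self._sem(key)
        sem.acquire()                     # block until set() releases a permit
        sem.release()                     # put the permit back for other waiters
        with self._lock:
            return self._values[key]

class Flow:
    """Tasks return the names of tasks to run next; each wave runs in parallel."""
    def __init__(self, max_workers=4):
        self._tasks = {}
        self._executor = ThreadPoolExecutor(max_workers=max_workers)
        self.context = Context()

    def task(self, name):
        def register(fn):
            self._tasks[name] = fn
            return fn
        return register

    def run(self, start):
        wave = [start]
        while wave:
            futures = [self._executor.submit(self._tasks[n], self.context)
                       for n in wave]
            # the next wave is whatever the finished tasks scheduled at runtime
            wave = [nxt for f in futures for nxt in (f.result() or [])]

flow = Flow()

@flow.task("fetch")
def fetch(ctx):
    ctx.set("data", [1, 2, 3])
    return ["double", "total"]            # dynamically spawn two new tasks

@flow.task("double")
def double(ctx):
    ctx.set("doubled", [x * 2 for x in ctx.get("data")])

@flow.task("total")
def total(ctx):
    ctx.set("total", sum(ctx.get("data")))

flow.run("fetch")
print(flow.context.get("total"))          # 6
```

Note how `fetch` decides at runtime which tasks come next - no edges are declared up front - and how `double` and `total` run concurrently, each blocking on `ctx.get("data")` only until the value exists.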
Users express a mix of skepticism and interest in the Show HN product, with concerns about the practicality of LLM-based agents and a desire for real-world examples. There's a call for improved developer experience and conditional flows. The rebranding of MemGPT to Letta is noted, and there are comparisons to existing tools like PrefectHQ and Temporal. Some find the tool intriguing and are eager to try it, while others question its focus and the relevance of certain features. There's also feedback on the need for concrete examples in documentation and a preference for discussion over advertising on the forum.
Users find LangGraph setups complex and tedious, and criticize the lack of innovation in AI tooling. The graph approach is seen as incorrect, and several commenters say they need to see running code to evaluate the project. The product lacks a working proof of concept, examples for inference or function calls, and persistent storage. It's also accused of riding AI hype and of making unclear references. Concerns about thread safety, deadlocks, and non-determinism are raised, along with complaints that the documentation is too abstract and needs more real-world examples.