nemo1618 a day ago

I see two downsides. Looking at this snippet:

    my_function (): Unit can AllErrors =
      x = LibraryA.foo ()
      y = LibraryB.bar ()
The first thing to note is that there is no indication that foo or bar can fail. You have to look up their type signatures (or at least hover over them in your IDE) to discover that these calls might invoke an error handler.

The second thing to note is that, once you ascertain that foo and bar can fail, how do you find the code that will run when they do fail? You would have to traverse the callstack upwards until you find a 'with' expression, then descend into the handler. And this cannot be done statically (i.e. your IDE can't jump to the definition), because my_function might be called from any number of places, each with a different handler.

I do think this is a really neat concept, but I have major reservations about the readability/debuggability of the resulting code.

  • cryptonector a day ago

    > how do you find the code that will run when they do fail?

    That's part of the point: it's dynamic code injection. You can use shallow- or deep-binding strategies for implementing this, just as with any dynamic feature. Dynamic means exactly that: bindings are introduced by the call frames of callers, or callers of callers, etc., so yes, notionally you have to traverse the stack.

    > And this cannot be done statically (i.e. your IDE can't jump to the definition),

    Correct, because this is a _dynamic_ feature.

    However, you are expected not to care. Why? Because you're writing code that is pure except for the effects it invokes, and those effects could be pure or impure depending on context. Thus your code can be used in prod and hooked up to a mock for testing, where the mock simply interposes effects other than real IO effects.

    It's just dependency injection.

    You can do this with plain old monads too you know, and that's a much more static feature, but you still need to look way up the call stack to find where _the_ monad you're using might actually be instantiated.

    In other words, you get some benefits from these techniques, but you also pay a price. And the price and the benefit are two sides of the same coin: you get to do code injection that lets you do testing and sandboxing, but it becomes less obvious what might be going on.

    • Voultapher 5 hours ago

      > However, you are expected not to care. Why? Because you're writing pure code but for the effects it invokes, but those effects could be pure or impure depending on context.

      Does that work out in practice? Genuinely curious if anyone has experience with such systems at scale, or in legacy applications where the original authors left long ago. I'm skeptical, because in my experience not everything is or can be designed perfectly pure, without abstraction leakage. At some point you need to understand all the behavior of a certain sub-system, and the less clever the design, the easier that becomes.

  • zvrba a day ago

    > [...] how do you find the code that will run when they do fail? You would have to traverse [...]

    I work in the .NET world, where many developers have the bad habit of "interface everything", even if it has just one concrete implementation; some even do it for DTOs. "Go to implementation" of a method, and you end up at the interface's declaration, so you have to jump through additional hoops to get to the actual code. And you're out of luck when the implementation is in another assembly. The IDE _could_ decompile it if it were a direct reference, but it can't find it for you. When you're out of luck, you have to debug and step into it.

    But this brings me to dependency injection containers. More powerful ones (e.g., Autofac) can establish hierarchical scopes, where new scopes can (re)define registrations; similar to LISP's dynamically scoped variables. What a service resolves to at run-time depends on the current DI scope hierarchy.

    Which brings me to the point: I've realized that effects can be simulated to some degree by injecting an instance of `ISomeEffectHandler` into a class/method and invoking methods on it to cause the effect. How the effect is handled is determined by the current DI registration of `ISomeEffectHandler`, which can be varied dynamically throughout the program.

    So instead of writing

        void DoSomething(...) {
            throw new SomeException(...);
        }
    
    you establish an error protocol through interface `IErrorConditions` and write

        void DoSomething(IErrorConditions ec, ...) {
            ec.Report(...);
        }
    
    (Alternatively, inject it as a class member.) Now, the currently installed implementation of `IErrorConditions` can throw, log, or whatever. I haven't fully pursued this line of thought with stuff like `yield`.

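    For illustration, a rough Python analogue of this error-protocol idea (the names are hypothetical): the function reports conditions through the injected handler, and the caller's choice of handler decides whether a report throws, logs, or accumulates.

```python
class RaisingErrors:
    """Handler that turns reports into exceptions."""
    def report(self, msg):
        raise ValueError(msg)

class CollectingErrors:
    """Handler that just records reports, e.g. for validation or tests."""
    def __init__(self):
        self.messages = []
    def report(self, msg):
        self.messages.append(msg)

def do_something(ec, value):
    # The function states *that* an error condition occurred; the
    # installed handler decides *what happens* as a result.
    if value < 0:
        ec.report(f"negative value: {value}")
        return 0
    return value * 2

errs = CollectingErrors()
print(do_something(errs, -3), errs.messages)  # -> 0 ['negative value: -3']
```

    Swapping `CollectingErrors` for `RaisingErrors` changes the failure behavior without touching `do_something`, which is the point of varying the DI registration.
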
    • SkiFire13 a day ago

      > I work in a .NET world and there many developers have this bad habit of "interface everything", even if it has just 1 concrete implementation

      I work on a Java backend that is similar to what you're describing, but Intellij IDEA is smart enough to notice there is exactly one non-test implementation and bring me to its source code.

      • vrighter a day ago

        not that familiar with java, but in .net when you do this, it is very common for the implementation to be in a separate assembly, part of a different project

        • cweld510 a day ago

          Doesn’t that imply an interface is necessary though, so you can compile (and potentially release) the components separately? I don’t use .net but this sounds quite similar to pulling things into separate crates in Rust or different compilation units in C, which is frequently good practice.

          • mystifyingpoi 19 hours ago

            That could definitely imply the necessity of an interface, but often it's simply done because everyone working in a project blindly follows an already established poor convention.

            • vrighter 7 hours ago

              by "it's common in the .Net world" I mean that it seems to be an antipattern that's blindly followed. If there's only ever one implementation, it is the interface, imo

    • deergomoo 16 hours ago

      > you end up in the interface's declaration so you have to jump through additional hoops to get to it

      Bit of a tangent, but this really annoys me when I work on TypeScript (which isn't all that often, so maybe there's some trick I'm missing)—clicking through to check out the definition of a library function very often just takes me to a .d.ts file full of type definitions, even if the library is written in TypeScript to begin with.

      In an ideal world I probably shouldn't really need to care how a library function is implemented, but the world is far from ideal.

    • jiggawatts a day ago

      The annoyance is that the .NET standard library already does this precise thing, but haphazardly and in far fewer places than ideal.

      ILogger and IProgress<T> come to mind immediately, but IMemoryCache too if you squint at it. It literally just "sets" and "gets" a dictionary of values, which makes it a "state" effect. TimeProvider might be considered an algebraic effect also.

  • abathologist a day ago

    > The first thing to note is that there is no indication that foo or bar can fail

    I think this is a part of the point: we are able to simply write direct style, and not worry at all about the effectual context.

    > how do you find the code that will run when they do fail

    AFAIU, this is also the point: you are able to abstract away from any particular implementation of how the effects are handled. The code that will run when they fail is determined later, whenever you decide how you want to run it. Just as, in `f : g:(A -> B) -> t(A) -> B`, there is no way to find "the" code that will run when `g` is executed, because we are abstracting over any particular implementation of `g`.

    • skybrian a day ago

      In languages like JavaScript, function calls that can throw are completely indistinguishable from ones that can't. In Go, calling a function that can fail is explicit and takes three lines of boilerplate, if you just want to propagate the error. That seems like too much. Rust has the ‘?’ operator, which is one character of boilerplate.

      Though it does add noise, one character of boilerplate to indicate a function call that uses effects seems like the right amount? Functions that use lots of effects will likely have this character on every function call, but that seems like a good indicator that it’s tricky code.

      • empath75 21 hours ago

        Rust and Go errors are not really effectful in the "side effect" sense, they're just an ordinary return value of a function. There's really no difference between returning a result and any other enum. Otoh, panics are effectful, and that's the kind of thing you'd want to capture in an effect system.

        • skybrian 18 hours ago

          Sure, but in a language with an effect system, it seems like effects would be used for errors, so it seems worth comparing error-handling techniques.

          Go uses a single base type (interface) for “expected” errors and panics for errors that aren’t normally caught. I suppose those would be two different effects? For the “expected” errors, some kind of annotation on the function call seems useful.

          • aatd86 17 hours ago

            The way I see it, effects would be implemented/assigned to the function where the error gets logged for instance. But as long as the error value remains local, a function can still be pure all else being equal.

            This is not the case with exception-looking code (aka panics), since it escapes normal control flow and, I guess, makes an error "punch" through stacks as they are unwound.

            A bit like being thrown toward global state. So if we consider panics as side effectful operations, we would have to change go to explicitly declare panics as side effects with a known function signature.

            I guess one would want the list of side effects to be part of a function signature for full visibility. I wonder how that would influence backward compatibility.

    • nine_k a day ago

      It looks like exceptions (write the happy path in direct style, etc), but with exceptions, there is a `catch`. You can look for it and see the alternate path.

      What might be a good way to find / navigate to the effectual context quickly? Should we just expect an IDE / LSP color it differently, or something?

      • MrJohz a day ago

        There's a `catch` with effects as well, though, the effect handler. And it works very similarly to `catch` in that it's not local to the function, but happens somewhere in the calling code. So if you're looking at a function and you want to know how that function's exceptions get handled, you need to look at the calling code.

        • naasking 21 hours ago

          This is the case anytime errors are propagated to callers instead of handled locally, which is probably most cases.

          • MrJohz 7 hours ago

            Just to clarify because the grammar is a bit ambiguous in your comment: which case do you see as the common case? I suspect in most cases, people don't handle errors where they are thrown, but rather a couple of layers up from that point.

            • naasking 3 hours ago

              > I suspect in most cases, people don't handle errors where they are thrown, but rather a couple of layers up from that

              Agreed, that's the point I was trying to make.

  • MrJohz a day ago

    > And this cannot be done statically (i.e. your IDE can't jump to the definition), because my_function might be called from any number of places, each with a different handler.

    I believe this can be done statically (that's one of the key points of algebraic effects). It would work essentially the same as "jump to caller", where your IDE would give you a selection of options, and you can find which caller/handler is the one you're interested in.

    • HelloNurse 19 hours ago

      This suggests a plausible novel IDE feature: list the callers of a function, but only those with a handler for a certain effect (maybe when the context-sensitive command is invoked on the name of one of the effects rather than on the name of the function).

      • codethief 11 hours ago

        What do you mean by "only those"? Each caller of an effectful function will have to handle the effect somehow, possibly by propagating it further up the chain. Is that what you meant, an IDE feature to jump to the ((grand-)grand-)parent callers defining the effect handler?

  • wavemode a day ago

    > there is no indication that foo or bar can fail

    Sounds like you're just criticizing try-catch style error handling, rather than criticizing algebraic effects specifically.

    Which, I mean, it's perfectly fair not to like this sort of error handling (the lack of call-site indication that an exception can be raised). But it's not really a step backward from the vast majority of programming languages. And there are some definite upsides to it as well.

    • skybrian a day ago

      Since effects are powerful enough to implement generators and cooperative multitasking, it seems like it’s more than just exceptions? Calling some functions could task-switch and do arbitrary computation before returning at some arbitrary time later. It might be nice to know which function calls could do that.

      I’m not a fan of how ‘await’ works in JavaScript because accidentally leaving it out causes subtle bugs. But the basic idea that some function calls are simple and return immediately and others are not makes sense.
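      The task-switching concern can be made concrete with a toy cooperative scheduler in Python, where a generator's `yield` stands in for a suspending effect (illustrative names only): an innocent-looking step can suspend, let other tasks run arbitrary code, and resume later at the same point.

```python
from collections import deque

def run(tasks):
    """Round-robin driver: each task runs until its next suspension."""
    queue, log = deque(tasks), []
    while queue:
        task = queue.popleft()
        try:
            log.append(next(task))  # resume the task until it yields again
            queue.append(task)      # schedule it to continue later
        except StopIteration:
            pass                    # task finished; drop it
    return log

def worker(name, n):
    for i in range(n):
        # Looks like a plain statement, but suspends this task and lets
        # every other task run before control returns here.
        yield f"{name}:{i}"

print(run([worker("a", 2), worker("b", 2)]))  # -> ['a:0', 'b:0', 'a:1', 'b:1']
```

      Nothing at a call site of `worker` marks the suspension points, which is exactly the "which calls can task-switch?" question above.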

      • brokencode 20 hours ago

        Even regular sync functions in JavaScript can do lots of computations that take a long time. And others perform effects like starting a timer or manipulating the DOM. Should these be indicated with keywords too?

        I agree that await is a nice hint in the code that something more substantial is happening, but ultimately it’s limited and quite opaque.

        Great IDE support for algebraic effects would probably include inline hints to show effects.

        • skybrian 18 hours ago

          In a language with an effect system, presumably modifying the DOM would be an effect.

          Maybe some effects should have call sites annotated, like ‘async’, and others should not, like DOM manipulation? It seems like an API design issue. What do you want to call out for extra attention?

  • YmiYugy a day ago

    I think the readability problem can be solved by having your LSP tell your editor to display some virtual text, indicating that the foo and bar calls might error.

    I have to admit I don't understand the second point. If you could statically determine from the definition of foo and bar what code handles their errors, then there would be no reason for foo or bar to error; they could just call the error handling code. If foo and bar return Result sum types and my_function just passes those errors up, it would be no different: you don't know what callers of my_function would do with those errors.

  • tome a day ago

    > no indication that foo or bar can fail ... how do you find the code that will run when they do fail

    If that's what you're looking for you might want to try my Haskell effects library Bluefin (it's not an "algebraic" effects library, though). The equivalent code would be

        myFunction :: e :> es => Exception String e -> Eff es r
        myFunction ex = do
          x <- LibraryA.foo ex
          y <- LibraryB.foo ex
          z <- LibraryC.foo
          ...
    
    This answers the first part of your question: the presence of the `ex` argument (an `Exception String` handle) shows that a String-valued exception can be thrown wherever it is used. For example, we know that `LibraryC.foo` does not throw that exception.

    It also answers the second part of your question: the code that runs on failure is exactly the code that created the `Exception String` handle. Any exception arising from that handle is always caught where the handle was created, and nowhere else. For example, it could be here:

        try $ \ex -> do
            v <- myFunction ex
            ...
    
    `try` catches the exception and turns it into the `Left` branch of a Haskell `Either` type. Or it could be here:

        myFunction :: e :> es => Exception String e -> Eff es r
        myFunction ex = do
          catch
            (\ex2 -> do
              x <- LibraryA.foo ex
              y <- LibraryB.foo ex2
              z <- LibraryC.foo
              ...)
            (\errMsg -> logErr errMsg)
    
    So the exception thrown by `LibraryB.foo` is always handled with the `logErr` (and nowhere else), and the exception thrown by `LibraryA.foo` is always handled by the exception handler higher up which created `ex` (and nowhere else).

    Let me know what you think!

    • itishappy 21 hours ago

      Looks slick!

      I'm a bit confused at the distinction of "algebraic" vs "non-algebraic" effects. Think you can give a brief example of what you mean?

      Also, I know you've taken a slightly different direction with Bluefin than other Haskell effects libraries (effects as values vs types). Is this related to the distinction above?

      • tome 16 hours ago

        > I'm a bit confused at the distinction of "algebraic" vs "non-algebraic" effects. Think you can give a brief example of what you mean?

        I don't fully understand what exactly "algebraic" effects are, but I think they're something like the entirety of effects that can safely be implemented with delimited continuations. Since Bluefin doesn't use delimited continuations (just the traditional standard GHC RTS effects: exceptions, state, IO, threads) it's not "algebraic".

        > Also, I know you've taken a slightly different direction with Bluefin than other Haskell effects libraries (effects as values vs types). Is this related to the distinction above?

        No, it's an orthogonal axis. Here's an experimental Bluefin-style API over delimited continuations (i.e. supports all algebraic effects): https://hackage.haskell.org/package/bluefin-algae

  • suspended_state a day ago

    > The first thing to note is that there is no indication that foo or bar can fail.

    I don't see how this is different from traditional programming.

    > I do think this is a really neat concept, but I have major reservations about the readability/debuggability of the resulting code.

    I don't think that readability will be harmed, but I can understand your concerns about debugging. I feel that cluttering the code with consistency/error checks everywhere actually harms readability much more.

  • throw10920 9 hours ago

    > The first thing to note is that there is no indication that foo or bar can fail. You have to lookup their type signature (or at least hover over them in your IDE) to discover that these calls might invoke an error handler.

    This is not a meaningful drawback. It's 2025. IDEs have existed for almost half of a century. Crippling programming languages because you want to shove everything into the source code (in a text-based format no less) is insane. Let's actually try to move programming into the 21st century, please.

    > And this cannot be done statically (i.e. your IDE can't jump to the definition), because my_function might be called from any number of places, each with a different handler.

    This is wrong. Your IDE can find call sites, build a call graph, and show you the places (parents in the call graph) where an error might be handled for a particular function.

AdieuToLogic a day ago

> You can think of algebraic effects essentially as exceptions that you can resume.

How is this substantively different than using an ApplicativeError or MonadError[0] type class?

> You can “throw” an effect by calling the function, and the function you’re in must declare it can use that effect similar to checked exceptions ...

This would be the declared error type in one of the above type classes along with its `raiseError` method.

> And you can “catch” effects with a handle expression (think of these as try/catch expressions)

That is literally what these type classes provide, with a "handle expression" using `handleError` or `handleErrorWith` (depending on need).

> Algebraic effects1 (a.k.a. effect handlers) are a very useful up-and-coming feature that I personally think will see a huge surge in popularity in the programming languages of tomorrow.

Not only will "algebraic effects" have popularity "in the programming languages of tomorrow", they actually enjoy popularity in programming languages today.

https://typelevel.org/cats/typeclasses/applicativemonaderror...

  • SkiFire13 a day ago

    > How is this substantively different than using an ApplicativeError or MonadError[0] type class?

    If you're limiting yourself to just a single effect there's probably not much difference. However, once you have multiple effects at the same time, explicit support for them starts to become nicer than nesting monads (which requires picking an order, and sometimes reordering them because the output of some functions doesn't match the exact set or order of monads used by the calling function).

    • tome a day ago

      > nesting monads (which requires picking an order and sometimes reorder them due to the output of some functions not matching the exact set or order of monads used by the calling function).

      The point of mtl-style (which is where `MonadError` comes from in Haskell) is exactly to defer picking an order, and indeed a handler, until handling time. (I gather the GP was talking about something in Scala, but I guess it's the same.)

      • gylterud 14 hours ago

        In Haskell, I see mtl and algebraic effects (say freer-simple) as giving you the same kind of expressiveness. The difference to me is that for mtl you need to figure out and abstract a new type class for every kind of effect and then write n^2 instances. While the freer monad construction needs only a single data type (often a GADT) and some glue function (calling send on constructors of said data type), and you are off to the races.

        The algebraic reason for this is that effects are combined with sum, which is commutative up to isomorphism. While transformers are not naturally commutative, so mtl must write all the commuters as instances.

        This, along with the reinterpret functions, means that you can quickly spin up custom effects for your program which do exactly what you need to express your program logic. Then all the glue code to make your program interact with the real world becomes a series of handlers, usually refining in several steps until you reach IO.

        When I have used mtl, I end up only using the standard monad classes, and then I have to remember the semantics of each one in my domain.

  • davery22 a day ago

    Algebraic effects are in delimited continuation territory, operating on the program stack. No amount of monad shenanigans is going to allow you to immediately jump to an effect handler 5 levels up the call stack, update some local variables in that stack frame, and then jump back to execution at the same point 5 levels down.

    • tome a day ago

      Quite the opposite, that's exactly what continuation monads do, for example `ContT`, and more structured versions such as `freer`. Those essentially simulate a stack rather than using the actual RTS stack. For the latter there are `eff` and `bluefin-algae` (the latter very much work in progress). So yes, in Haskell at least, monads are the right API for delimited continuations.

      https://www.stackage.org/haddock/lts-23.15/transformers-0.6....

      https://hackage.haskell.org/package/freer-0.2.4.1

      https://github.com/lexi-lambda/eff

      https://hackage.haskell.org/package/bluefin-algae

    • grg0 a day ago

      That sounds like a fucking nightmare to debug. Like goto, but you don't even need to name a label.

      • cryptonector a day ago

        > Like goto, but you don't even need to name a label.

        That's what exceptions are.

        But effects don't leave you with huge stack traces in errors, because the whole point is that you provide the expected effect and values and the code goes on running.

      • agumonkey a day ago

        The CLOS condition system is said to be just that; people seem to like it.

        Also, this kind of non-local stack/tree rebinding is one way to implement Prolog, I believe.

      • vkazanov a day ago

        Well, you test the fact that the handler receives the right kind of data, and then how it processes it.

        And it is useful to be able to provide these handlers in tests.

        Effects are AMAZING

  • cryptonector a day ago

    > How is this substantively different than using an ApplicativeError or MonadError[0] type class?

    I think it's about static vs. dynamic behavior.

    In monadic programming you have to implement all the relevant methods in your monad, but with effects you can dynamically install effects handlers wherever you need to override whatever the currently in-effect handler would be.

    I could see the combination of the two systems being useful. For example, you could use a bespoke IO-compatible monad for testing and sandboxing, and still have effect handlers below it which can only invoke your IO-like monad.

    • tome a day ago

      > with effects you can dynamically install effects handlers wherever you need to override whatever the currently in-effect handler would be.

      You can do that with mtl-style too. It's just more clumsy.

  • tel 20 hours ago

    They're pretty similar, but with different ergonomics. Algebraic effects are similar to some kind of "free" monad technique, but built in. For being built in they have nicer syntax and better composability, often. You can achieve the same in a language suitably dedicated to monadic approaches (Haskell being the poster child here) but it helps to have type class inference (giving you mtl-like composability) and built-in bind syntax a la Haskell's `do` or Scala's `for`.

    • HelloNurse 19 hours ago

      What is "mtl"?

      • tel 15 hours ago

        Sorry, Haskell’s “monad transformer library”. One of the earliest approaches to composability of multiple monadic effects. It’s pretty similar to an algebraic effect system allowing you to write effectual computations with types like `(Error m, Nondet m, WithState App m) => m ()` to indicate a computation that returns nothing but must be executed with access to error handling, nondeterminism, and access to the App type as state.

        There are a few drawbacks to it, but it is a pretty simple way to get 80% of the ergonomics of algebraic effects (in Haskell).

  • threeseed a day ago

    > they actually enjoy popularity in programming languages today

    They have enjoyed popularity amongst the Scala FP minority.

    They are not broadly popular as they come with an unacceptable number of downsides: increased complexity, difficult debugging, code that is harder to instantly reason about, far higher resource usage, etc. I have built many applications using them and the ROI simply isn't there.

    It's why Odersky, for example, didn't just bundle it into the compiler and instead looked at how to achieve the same outcomes in a simpler and more direct way, e.g. Gears and Capabilities.

  • tome a day ago

    For MonadError in Haskell at least, it's quite similar. However, mtl-style has a number of issues that effect systems don't, well explained by the author of effectful under "What about mtl?" at https://hackage.haskell.org/package/effectful.

  • anon-3988 a day ago

    I don't really get it, but is this related to delimited continuation as well?

    • tempodox a day ago
      • cryptonector a day ago

        That's just an implementation detail. I don't think there's anything about effects that _requires_ delimited continuations to implement them.

        • tome a day ago

          If you want multishot continuations then I don't really know of any way other than delimited continuations (other than undelimited continuations, or simulating delimited continuations on the heap).

          • cryptonector 21 hours ago

            Why can't the handler be invoked as if it was called from the effect invocation site, then return?

            But apart from that and to answer your question there is an alternative to delimited continuations, and that's undelimited continuations (which essentially requires allocating call frames on the heap).

            • Rusky 20 hours ago

              The handler doesn't have to follow the pattern of "do its work, resume the computation, go away."

              It can instead do things like "do some work, resume the computation, do some more work."

              Or even more invasively, "stash the computation somewhere, return from the handler site, let the rest of the program run for a while, then resume the computation."

            • tome 16 hours ago

              > Why can't the handler be invoked as if it was called from the effect invocation site, then return?

              Then it would just be (equivalent to) a function call.

    • iamwil 14 hours ago

      Yes. Multishot resumption for algebraic effects is implemented with delimited continuations.

huqedato a day ago

This Ante "pseudocode" is wonderful! It's like Haskell with Elixir's expressiveness, flavor and practicality. A Haskell for developers. Waiting for the compiler to mature. I would love to develop apps in Ante.

michalsustr a day ago

AE (algebraic effect) are very interesting! Great article, thank you.

Reading through, I have some concerns about usability in larger projects, mainly because of "jumping around".

> Algebraic effects can also make designing cleaner APIs easier.

This is debatable. It adds a layer of indirection (which I concede is present in many real non-AE codebases).

My main concern is: When I put a breakpoint in code, how do I figure out where the object I work with was created? With explicit passing, I can go up and down the stack trace, and can find it. But with AE composition, it can be hard to find the instantiation source -- you have to jump around, leading to yo-yo problem [1].

I don't have personal experience with AE, but I do with Python generators, which the article says are equivalent (resp. that AE can be used to implement generators). Working through large, complex generator expressions was very tedious and error-prone in my experience.
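For what it's worth, the correspondence with generators can be made concrete in Python: an effectful computation can be written as a generator that yields effect requests, with a handler driving it via send() and resuming it where it left off (all names here are illustrative):

```python
class Ask:
    """An effect request: 'please supply a value for this prompt'."""
    def __init__(self, prompt):
        self.prompt = prompt

def program():
    # Direct-style code with two suspension points; it never says
    # where its answers come from.
    a = yield Ask("first")
    b = yield Ask("second")
    return a + b

def handle(gen, answers):
    """The handler: interpret each request, then resume the computation."""
    try:
        req = next(gen)  # run to the first effect request
        while True:
            req = gen.send(answers[req.prompt])  # resume with a result
    except StopIteration as stop:
        return stop.value  # the computation's final value

print(handle(program(), {"first": 1, "second": 2}))  # -> 3
```

The yo-yo concern shows up here too: stepping through this in a debugger bounces between the driver loop and the generator body, much like the experience with large generator expressions.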

> And we can use this to help clean up code that uses one or more context objects.

The functions involved still need to write `can Use Strings` in their signature. From a practical point of view, I fail to see the difference between explicitly passing strings and adding the `can Use Strings` signature: when you want to add extra context to existing functions, you still need to go to all of them and add the appropriate plumbing.

---

As I understand it, AE at a low level is implemented as a longjmp instruction with register handling (so you can resume). Given this, it is likely inevitable that in a code base with lots of AE, composing in various ways, you can get a severe yo-yo problem and get really lost in what the code is doing. This is probably not so severe on a single-person project, but in larger teams, where you don't have the codebase in your head, this can be a huge efficiency problem.

Btw. if someone understands how AE deal with memory allocations for resuming, I'd be very interested in a good link for reading, thank you!

[1]: https://en.wikipedia.org/wiki/Yo-yo_problem

  • jfecher 9 hours ago

    > As I understand it, AE on low level is implemented as a longjmp instruction with register handling (so you can resume).

    Not quite. setjmp/longjmp as they exist in C at least can jump up the call stack but not back down. I mention this at the end of the article, but each language implements algebraic effects differently, and efficiency has improved in recent years. Languages can also optimize the effect differently based on how the handler is defined:

    - Handlers which are tail-resumptive can implement the effect as a normal closure call.

    - Handlers which don't call resume can be implemented as an exception or just return an error value at every step until the function exits the handler.

    - Handlers which perform work after resume is called (e.g. `| my_effect x -> foo (); resume (); bar ()`) can be implemented with e.g. segmented call stacks.

    - Handlers where resume is called multiple times need an equivalent of a delimited continuation.

    Another way to implement these generally is to transform the effects into monads. For any set of effects you can translate it into a monad transformer where each effect is its own monad, or the free monad can be used as well. The cost in this approach is often from boxing closures passed to the bind function.

    Koka has its own approach where it translates effects to capability passing then bubbles them up to the handler (returns an error value until it gets to the handler).

    With just a few restrictions you can even specialize effects & handlers out of the program completely. This is what Effekt does.

    There really are a lot of options here. I have some links at the end of the article in the foot notes on papers for Koka and Effekt that implement the approaches above if you're interested.
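To make the first two specializations concrete, here is a hedged Python sketch -- illustrative only, not how Ante or Koka actually compile effects, and all names (`with_reader`, `with_abort`, `Abort`) are invented for the example. A tail-resumptive handler degenerates into an ordinary closure call, and a handler that never resumes behaves like an exception:

```python
# Illustrative sketch only; real compilers specialize these cases natively.

# Case 1: a tail-resumptive handler (resume is the last thing it does)
# can be compiled to a plain closure call -- no stack machinery needed.
def with_reader(handler, body):
    # "performing" the Ask effect is just calling the handler directly
    return body(lambda: handler())

# Case 2: a handler that never calls resume behaves like an exception:
# unwind to the handler and never come back.
class Abort(Exception):
    def __init__(self, value):
        self.value = value

def with_abort(body):
    def abort(value):
        raise Abort(value)  # non-resuming "effect": plain stack unwinding
    try:
        return body(abort)
    except Abort as e:
        return e.value

print(with_reader(lambda: 42, lambda ask: ask() + 1))  # 43
print(with_abort(lambda abort: abort("early")))        # early
```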

  • codethief 10 hours ago

    > From a practical point of view, I fail to see the difference between explicitly passing strings and adding the `can Use Strings` signature -- when you want to pass extra context to existing functions, you still need to go to all of them and add the appropriate plumbing.

    Couldn't the plumbing be reduced significantly through type inference? I.e. the function that invokes an effect will need the signature, the callers higher up the chain won't.

    Also, even if you had to adapt all the signatures, at least you wouldn't have to adjust the call sites themselves to pass in the context every single time.

    • jfecher 9 hours ago

      Right, compared to explicitly passing the parameter, with effects:

      - You wouldn't have to edit the body of the function to thread through the parameter. - The `can Use Strings` part can be inferred (and in Ante's case, it is my goal to have the compiler write in inferred types for you so that top-level definitions can be inferred if desired but still annotated when committed for code review). - Most notably, the `can Use Strings` can be included in a type alias. You could have an alias `MyEffects = can Use Strings, Throw FooError`, etc for the effects commonly used in your program. If your state type is used pervasively throughout, this could be a good option. When you have such an alias it also means you'd just be editing the alias rather than every function individually.

      Generally though, while I think the passing around of state through effects can be useful, it isn't the most convincing use of effects. I mention it more as "here's another benefit they can have" rather than "here's this amazing reason you should definitely use them".

duped 18 hours ago

I often see the claim that AE generalizes control flow, so you can (for example) implement coroutines. But the most obvious way I would implement AE in a language runtime is with coroutines, where effects are just syntactic sugar around yield/resume.

What am I missing?

  • herrington_d 17 hours ago

    One thing AE provides that coroutines don't is type safety. More concretely, AE can specify lexically in code what a function can and cannot do. A generator/coroutine cannot.

    For example, a function annotated with, say, `query_db(): User can Database` means the function can call the database, and the caller must provide a `Database` handler in order to call `query_db`.

    Constraining what code can and cannot do is pretty popular in other programming fields, most notably NextJS: a server component CANNOT use client features and a client component CANNOT access the server db.

    • duped 17 hours ago

      Coroutines don't take away type safety any more than function calls do.

      But this gets back to what I was saying about generalization - the way I would implement what you're talking about is with coroutines and dynamic scoping. I'm still missing how AE is more general and not something you implement on top of other building blocks.

      • LegionMammal978 15 hours ago

        I think the idea is that you can use it like async/await, except that a function must statically declare which interfaces it is allowed to await on, and the implementations are passed in through an implicit context. I'd be a bit worried that using it widely for capabilities, etc., would just multiply the number of function colors.

        • codethief 10 hours ago

          > would just multiply the number of function colors.

          Would there really be colors?

          I mean sure, the caller of an effectful function will either have to handle the effect or become effectful itself, so in this sense effectfulness is infectious.

          However, while a function might use the `await` effect, when calling the function you could also just define the effect handler so as to block, instead of deferring the task and jumping back to the event loop. In other words, wouldn't this solve the issue of colors? One would simply define all possibly blocking functions as await-effectful. Whether or not they actually await / run asynchronously would be up to the caller.
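The point above can be sketched in Python (a hedged illustration with invented names -- `Await`, `fetch_name`, `run_blocking` -- using a generator to stand in for an effectful function): the same "await-effectful" code runs unchanged under a caller-chosen handler.

```python
# Hypothetical sketch: a single-shot "await" effect encoded with a
# Python generator, interpreted by whatever handler the caller picks.

class Await:
    def __init__(self, thunk):
        self.thunk = thunk  # the possibly-blocking operation

def fetch_name():
    # An "await-effectful" function: it performs Await but does not decide
    # whether the operation blocks or is deferred to an event loop.
    data = yield Await(lambda: {"name": "Alice"})
    return data["name"]

def run_blocking(effectful):
    # Handler choice: just run each awaited thunk synchronously.
    gen = effectful()
    try:
        effect = next(gen)
        while True:
            effect = gen.send(effect.thunk())
    except StopIteration as stop:
        return stop.value

print(run_blocking(fetch_name))  # Alice
```

An asynchronous handler could instead schedule `effect.thunk` on an event loop and resume the generator later; the effectful function itself is unchanged, which is the "no colors" point.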

  • FjordWarden 18 hours ago

    Yes, this is what Effect-TS is doing for JavaScript, minus the syntactic sugar, but I don't know if this is a good idea in the end. It reminds me of the Spring framework: DI is also a form of AE, but it spreads like cancer through the code. The other day I was watching this talk [1] from Effect Days on how to use effects on the frontend, and the entire talk was a dude showing boilerplate that did nothing. I think that AE is a beguiling idea -- it lets you express imperative code in a functional language -- but in a language like JS the added work of wrapping everything in a function feels like a step down from writing the simplest JS you can imagine. Just as a counterexample, there is also Motion Canvas [2], which also uses coroutines to express scenarios for 2D graphics, and this feels like it is making something hard easier.

    [1] https://www.youtube.com/watch?v=G_jp87gxILE [2] https://motioncanvas.io/

    • duped 16 hours ago

      I'm not sure I follow, JS doesn't have coroutines (generator functions can be used kind of like coroutines, but for example you can't resume them with an argument).

      • FjordWarden 16 hours ago

        Yes it does:

          function* greeter() {
            const name = yield "What's your name?"
            yield `Hello, ${name}!`
          }
          const gen = greeter()
          
          console.log(gen.next().value)
          console.log(gen.next("Alice").value)
  • LegionMammal978 15 hours ago

    Other people in this thread have claimed that AE handlers can resume the code multiple times, as in call/cc, as opposed to resuming a coroutine, which can only be done once for each time it yields.

    Personally, I don't see a whole lot of value in that, with how unpredictable execution could get. I'd rather write a function that explicitly returns another function to be called multiple times. (Or some equivalent, e.g., a function that returns an iterator.)
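The multi-shot case can be sketched in Python by writing the computation in continuation-passing style (generators won't do, since they are single-shot). This is a hedged illustration -- `choose` and `pairs` are invented names -- of the classic nondeterminism effect, whose handler resumes the continuation once per option:

```python
# A handler that resumes the continuation k multiple times: once per option.
def choose(options, k):
    results = []
    for option in options:
        results.extend(k(option))  # "resume" with each choice in turn
    return results

def pairs():
    # Direct-style pseudocode: x <- choose [1,2]; y <- choose [10,20]; x + y
    return choose([1, 2], lambda x:
           choose([10, 20], lambda y:
           [x + y]))

print(pairs())  # [11, 21, 12, 22]
```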

wild_egg a day ago

> You can think of algebraic effects essentially as exceptions that you can resume.

So conditions in Common Lisp? I do love the endless cycle of renaming old ideas

  • valcron1000 a day ago

    No, algebraic effects are a generalization that supports more cases than Lisp's condition system, since continuations are multi-shot. The closest thing is `call/cc` from Scheme.

    Sometimes drawing these parallels hurts more than not having them in the first place.

    • wild_egg a day ago

      Ah multi-shot does make a big difference, thanks for clarifying!

  • riffraff a day ago

    Also literal "resumable exceptions" in Smalltalk.

  • Rusky 20 hours ago

    What a thought-terminating way to approach an idea. Effects are not simply renamed conditions, and we have a whole article here describing them in more detail than that one sentence, so you can see some of the differences for yourself.

  • ww520 a day ago

    Also dependency injection.

cdaringe a day ago

I did protohackers in ocaml 5 alpha a couple of years ago with effects. It was fun, but the toolchain was a lil clunky back then. This looks and feels very similar. Looking forward to seeing it progressing.

  • abathologist a day ago

    Effects in OCaml 5.3 are quite a bit cleaner than they were a few years back (tho still not typed).

practal a day ago

Algebraic effects seem very interesting. I have heard about this idea before, but assumed that it somehow belonged to the territory of static type systems. I am not a fan of static type systems, so I didn't look further into the idea.

But I found these two articles [1] about an earlier dynamic version of Eff (the new version is statically typed), which explains the idea nicely without introducing types or categories (well, they use "free algebra" and "unique homomorphism", just think "terms" and "evaluation" instead). I find it particularly intriguing that what Andrej Bauer describes there as "parameterised operation with generalised arity", I would just call an abstraction of shape [0, 1] (see [2]). So this might be helpful for using concepts from algebraic effects to turn abstraction algebra into a programming language.

[1] https://math.andrej.com/2010/09/27/programming-with-effects-...

[2] http://abstractionlogic.com

  • nicoty a day ago

    What's wrong with static type systems?

    • practal a day ago

      I've summarized my opinion on this here: https://doi.org/10.5281/zenodo.15118670

      In normal programming languages, I see static type systems as a necessary evil: TypeScript is better than JavaScript, as long as you don't confuse types with terms...

      But in a logic, types are superfluous: You already have a notion of truth, and types just overcomplicate things. That doesn't mean that you cannot have mathematical objects x and A → B, such that x ∈ A → B, of course. Here you can indeed use terms instead of types.

      So in a logic, I think types represent a form of premature optimisation of the language that invariants are expressed in.

      • chongli a day ago

        Your invocation of Strong AI (in the linked paper) seems like a restatement of the Sufficiently Smart Compiler [1] fallacy that’s been around forever in programming language debates. It’s hypothetical, not practical, so it doesn’t represent a solution to anything. Do you have any evidence to suggest that Strong AI is imminent?

        [1] https://wiki.c2.com/?SufficientlySmartCompiler

        • practal a day ago

          I routinely describe code that I want in natural language, and it generates correct TypeScript code for me automatically. When it gets something wrong, I see that it is because of missing information, not because it is not smart enough. If I needed any more evidence for Strong AI since AlphaGo, that would be it.

          I wouldn't call it vibe coding; I just have much more time to focus on the spec than on the code. I'd call that a sufficiently smart compiler.

      • frumplestlatz a day ago

        This is a very reductive definition of types, if not a facile category error entirely (see: curry-howard), and what you call "premature optimization" -- if we're discussing type systems -- is really "a best effort at formalizations within which we can do useful work".

        AL doesn’t make types obsolete -- it just relocates the same constraints into another formalism. You still have types, you’re just not calling them that.

        • practal a day ago

          I think I have a reference to Curry in my summary link. Anyways, curry-howard is a nice correspondence, about as important to AL as the correspondence between the groups (ℝ, 0, +) and (ℝ \ 0, 1, *); by which I mean, not at all. But type people like bringing it up even when it is not relevant at all.

          No, sorry, I really don't have types. Maybe trying to reduce all approaches to logic to curry-howard is the very reductive view here.

          • frumplestlatz a day ago

            If your system encodes invariants, constrains terms, and supports abstraction via formal rules, then you’re doing the work of types whether you like the name or not.

            Dismissing Curry–Howard without addressing its foundational and extricable relevance to computation and logic isn’t a rebuttal.

            • practal a day ago

              Saying "Curry-Howard, Curry-Howard, Curry-Howard" isn't an argument, either.

              I am not saying that types cannot do this work. I am saying that to do this work you don't need types, and AL is the proof for that. Well, first-order logic is already the proof for that, but it doesn't have general binders.

              Now, you are saying, whenever this work is done, it is Curry-Howard, but that is just plain wrong. Curry-Howard has a specific meaning, maybe read up on it.

              • frumplestlatz a day ago

                Curry–Howard applies when there’s a computational interpretation of proofs — like in AL, which encodes computation and abstraction in a logic.

                You don’t get to do type-like work, then deny the structural analogy just because you renamed the machinery. It’s a type system built while insisting type systems are obsolete.

                • practal a day ago

                  You seem to know AL very well, I didn't even know that there is a computational interpretation of AL proofs! Can you tell me what it is?

      • _flux a day ago

        Personally, I would enjoy it if TLA+ had types, though, and TLA+ belongs to the land of logic, right? I do not know how it differs from the abstraction logic referred to in your writing and your other whitepapers.

        What is commonly used is a TypeOK predicate that verifies that your variables have the expected type. This is fine, except your intermediate values can still end up with mis-intended values, so you won't spot the mistake until you evaluate the TypeOK predicate, and not at all if the checker doesn't visit the right corners of the state space. At least TypeOK can be much more expressive than any type system.

        There is a new project in the same domain called Quint, it has types.

        • practal a day ago

          Practically, in abstraction logic (AL) I would solve that (AL is not a practical thing yet, unlike TLA+, I need libraries and tools for it) by having an error value ⊥, and making sure that abstractions return ⊥ whenever ⊥ is an argument of the abstraction, or when the return value is otherwise not well-defined [1]. For example,

              div 7 0 = ⊥, 
          
          and

              plus 3 ⊥ = ⊥, 
          
          so

              plus 3 (div 7 0) = ⊥.
          
          
          In principle, that could be done in TLA+ as well, I would guess. So you would have to prove a predicate Defined, where Defined x just means x ≠ ⊥.

          [1] Actually, it is probably rather the other way around: Make sure that if your expression e is well-defined under certain preconditions, that you can then prove e ≠ ⊥ under the same preconditions.
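The convention above is easy to prototype dynamically; here is a hedged Python sketch with a sentinel standing in for ⊥ (`BOTTOM` and `defined` are names invented for the example):

```python
# Sketch of the strict bottom-propagation convention described above.
BOTTOM = object()  # stands in for the error value ⊥

def div(x, y):
    if x is BOTTOM or y is BOTTOM or y == 0:
        return BOTTOM  # div 7 0 = ⊥
    return x / y

def plus(x, y):
    if x is BOTTOM or y is BOTTOM:
        return BOTTOM  # plus 3 ⊥ = ⊥
    return x + y

def defined(x):
    return x is not BOTTOM  # Defined x just means x ≠ ⊥

print(defined(plus(3, div(7, 0))))  # False: plus 3 (div 7 0) = ⊥
print(plus(3, div(8, 2)))           # 7.0
```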

          • thechao a day ago

            This looks like you've defined the abstract lattice extension as a proper superset of the concrete semantics. That sort of analysis is entirely subsumed by Cousot & Cousot's work, right? Abstract Interpretation doesn't require static types; in fact, the imposition of a two-level interpretation is secondary. The constraints on the pre-level are so that the behavior of the program can be checked "about as quickly as parsing", while also giving strong correctness guarantees under the systems being checked.

            Moving the whole thing to dynamic behavior doesn't tell us anything new, does it? Lisps have been tagged & checked for decades?

            • practal a day ago

              I have not defined any "abstract lattice extension" explicitly; which is nice, why would I need to know about lattices for something as simple as this? It is just a convention I suggest to get a useful Defined predicate, actually. Nobody can stop you from defining mul 0 ⊥ = 0, for example, and that might make sense sometimes.

              I would suggest that abstraction logic compares to type theory as Lisp compares to Standard ML; but that is just an analogy and not precise.

              • thechao 19 hours ago

                My problem with your interaction on this forum, so far, is that it kind of ignores the practical (artisan) craft of programming. The use of static type systems by working engineers isn't math; it's more akin to "fast, formal checking". You can assert that you don't like types, or they're useless all day long; but, every working engineer knows you're wrong: static type systems have solved a real problem. Your arguments need to be rooted in the basic experience of a working engineer. Single-level programming has its place, but runtime checking is too error prone in reality to build large software. If you think this isn't true, then you need to show up to the table with large scale untyped software. For instance: the FORTH community has real advantages they've demonstrated. Where's your OS? Your database? Your browser? If you have real advantages then you'd have these things in hand, already.

                Otherwise, you're just another academic blowhard using a thin veneer of formalism to justify your own motivated reasoning.

                • practal 19 hours ago

                  Strong opinion! Also, it seems you didn't read my interaction so far carefully enough. You also didn't read the summary I linked to, otherwise you would know that I fully approve the current use of types in programming languages, as they provide lightweight formal methods (another name for what you call "fast, formal checking"). But I think we will be able to do much better, we will have fast, formal checking without static types. Not today, not tomorrow, but certainly not more than 10 years from here.

                  • thechao 16 hours ago

                    I'm reading through your paper, and I really like your ideas. I just think you're approaching the engineers wrong.

        • deredede a day ago

          > At least TypeOK can be much more expressive than any type system.

          Can you clarify what you mean by that? Dependent types or more practically refinement types (à la F*) can embed arbitrary predicates.

          • _flux 3 hours ago

            Right, I was referring to type systems in relatively popular real-world programming languages of today, i.e. Haskell, OCaml, Rust, Haskell :).

      • exceptione a day ago

        > But in a logic,

        I am not sure if I misunderstand you. Types are for domain, i.e. real-world, semantics: they help disambiguate human language, and they make explicit the context that humans just assume when they talk about their domain.

        Logic is abstract. If you implied people should be able to express a type system in their host language, that would be interesting. I can see something like Prolog as type annotations, embedded in any programming language, it would give tons of flexibility, but then you shift quite some burden onto the programmer.

        Has this idea been tried?

        • practal a day ago

          Types for real-world semantics are fine, they are pretty much like predicates if you understand them like that.

          The idea to use predicates instead of types has been tried many times; the main problem (I think) is that you still need a nice way of binding variables, and types seem the only way to do so, so you will introduce types anyway, and what is the point then? The nice thing about AL is that you can have a general variable binding mechanism without having to introduce types.

          • frumplestlatz a day ago

            AL as described sounds like it reinvents parts of the meta-theoretic infrastructure of Isabelle/HOL, but repackaged as a clean break from type theory instead of what it seems to be — a notational reshuffling of well-trod ideas in type theory.

            What am I missing?

            • practal a day ago

              Given that I am an Isabelle user and/or developer since about 1996, similarities with Isabelle are certainly not accidental. I think Isabelle got it basically right: its only problem (in my opinion) is that it is based on intuitionistic type theory as a metalogic and not abstraction logic (nevertheless, most type theorists pretty much ignored Isabelle!). Abstraction logic has a simple semantics; ITT does not. My bet is that this conceptual simplicity is relevant in practice. We will see if that is actually the case or not. I've written a few words about that in the abstraction logic book on page 118, also available in the sample.

              > — a notational reshuffling of well-trod ideas in type theory

              Always fun to encounter disparaging comments (I see that you deleted the other one in the other thread), but I wrote this answer more for the other readers than for you.

      • Timwi 10 hours ago

        An HTML version that I can just read on a phone would be nice.

      • agumonkey a day ago

        I'm not a logician but do you mean that predicates and their algebra are a more granular and universal way to describe what a type is.. basically that names are a problem ?

        • practal a day ago

          Yes and no. Yes, predicates are more flexible, because they can range over the entire mathematical universe, as they do for example in (one-sorted) first-order logic. No, names are not a problem, predicates can have names, too.

          • agumonkey a day ago

            so if names are not an issue, the problem with the usual static type systems is that they lack a way to manipulate / recombine user defined types to avoid expressive dead ends ?

            • practal 21 hours ago

              What is the type of x / y, for x : ℝ and y : ℝ?

              • magicalhippo 11 hours ago

                What is the type of x and y? If it's ℝ, why wouldn't x / y be of type *ℝ (hyperreals)?

                I'm just your average programmer, not into abstract logic stuff, but got curious.

                • hansvm an hour ago

                  The hyperreals don't fix the x/0 or 0/0 problems. Infinitesimals have a well-defined division and take the place of a lot of uses of 0 in the reals, but the hyperreals still have a 0, and the same problem posed in their comment exists when you consider division on the hyperreals as well.

                  I'm also curious what they intended, but there aren't many options:

                  1. The question is ill-posed. The input types are too broad.

                  2. You must extend ℝ with at least one additional point representing the result. Every choice is bad for a number of reasons (e.g., you no longer have multiplicative inverses and might not even have well-defined addition or subtraction on the resulting set). Some are easier to work with than others. A dedicated `undefined` value is usually easy enough to work with, though sometimes a single "infinity" isn't terrible (if you consider negative infinities it gets more terrible).

                  3. You arbitrarily choose some real-valued result. Basically every theorem or application considering division now doesn't have to special-case zero (because x/0 is defined) but still has to special-case zero (because every choice is wrong for most use cases), leading to no real improvements.

              • bmacho 5 hours ago

                I would say ℝ u {undefined}

                • agumonkey 4 hours ago

                  undefined because x / 0 ? ℝ includes infinity

                  • hansvm an hour ago

                    It's undefined because there isn't an obvious choice:

                    - At x:=0, every real is a reasonable limit for some path through the euclidean plane to (0,0). Ignore the 0/0 problem, then x/0 still has either positive or negative infinity as reasonable choices. Suppose you choose a different extension like only having a single point at infinity; then you give up other properties like being able to add and subtract all members of the set you're considering.

                    - However you define x/0, it still doesn't work as a multiplicative inverse for arbitrary x.

                    A good question to ask is why you want to compute x/0 in the first place. It is, e.g., sometimes true that doing so allows you to compute results arithmetically and ignore intermediate infinities. Doing so is usually dangerous though, since your intuition no longer lines up with the actual properties of the system, and most techniques you might want to apply are no longer valid. Certain definitions are more amenable to being situationally useful (like the one-point compactification of the reals), but without a goal in mind the definition you choose is unlikely to be any better than either treating division as a non-total function (not actually defined on all of ℝ), or, equivalently, considering its output to be ℝ extended with {undefined}.

                    Not that it directly applies to your question, ℝ typically does not include infinity. ℝ⁺ is one symbol sometimes used for representing the reals with two infinities, though notation varies both for that and the one-point compactification.

              • agumonkey 18 hours ago

                I'm tempted to say ℝ but it seems too obvious

      • cardanome a day ago

        Is there any programming language based on abstraction logic?

        This is all a bit too abstract for me right now but seems interesting.

        • practal a day ago

          There is nothing practically usable right now. I hope there will be before the end of the year. Algebraic effects seem an interesting feature to include from the start, they seem conceptually very close to abstraction algebra.

    • hinoki a day ago

      Forget it Jake, it’s land of Lisp.

nevertoolate a day ago

When I see a new (for me) idea coming from (presumably) category theory I wonder if it really will land in any mainstream language. In my experience having cohesion on the philosophical level of the language is the reason why it is nice to work with it in a team of programmers who are adept in both programming and in the business context. A set of programming patterns to solve a problem usually can be replaced with a possibly disjunct set of patterns where both solutions have all the same ilities in the code and solve the business problem.

My question is - can a mainstream language adopt the algebraic effects (handlers?) without creating deep confusion or a new language should be built from the ground up building on top of these abstractions in some form.

  • ww520 a day ago

    > can a mainstream language adopt the algebraic effects (handlers?) without creating deep confusion or a new language should be built from the ground up building on top of these abstractions in some form.

    Algebraic Effect is a variant/enhancement of dependency injection formalized into a language. Dependency injection has seen massive usage in the wild for a long time with just library implementations.

    • threeseed a day ago

      > Algebraic Effect is a variant/enhancement of dependency injection

      Every library so far that has implemented effects (e.g. Cats Effect, ZIO, Effect) has done so to make concurrency easier and safer.

      Not for dependency injection.

  • iamwil 14 hours ago

    A weak form of algebraic effects is already very common: React hooks.

    React hooks are different from full-blown algebraic effects in a couple ways:

    - The handlers for the hooks are already implemented for you, and you can't swap them out. For example, the implementation of useState is fixed, and you can't swap it out for a different implementation.

    - They don't have multishot resumption. When a hook is raised, the handler can only handle it and resume it once. With full-blown algebraic effects, the handler can resume the same raised effect multiple times.

    - Algebraic effects usually come bundled together. Those effects have specific compositional rules with each other. That's how they're algebraic.

  • nwienert a day ago

    React hooks are them, basically. Not at the language level, but widely adopted and understood.

z5h 20 hours ago

After spending a lot of time in Prolog, I want a nice way to implement and compose nondeterministic functions and also have compile-time type checking. I'm eyeing all of these languages as a result. I'll watch Ante as well. (Don't forget developer tools like an LSP, tree-sitter or other editor plugins.)

  • jfecher 17 hours ago

    Author of Ante here - it actually already has an (extremely basic) LSP. Tooling is more or less required for new languages out of the gate these days, and I'm eyeing the debugging experience too, to see if I can get replayability by default, at least in debug mode.

aatd86 a day ago

I might be a bit dense but I didn't quite get it and the examples didn't help me. For instance, the first example SayMessage. Is it supposed to be an effect? Why? From the function signature it could well be a noop and we wouldn't know the difference. Or is it arbitrarily decided? Is this all about notations for side-effectful operations?

  • scalaisneat a day ago

    because it is declared as an effect - and implements a handler.

    Think of it more like an interface. It turns out that many common patterns - async, IO, yielding - can all be expressed with a handler, and the effect can be represented in the signature.

    This allows the caller to choose, at runtime, which handler an effect runs with - other commenters pointed out it's very similar to dependency injection.

    • aatd86 18 hours ago

      Ok so we are basically defining a named function signature that represents a side effect shape.

      Then the implementation of these side effects is done elsewhere and provided as a capability of a function, which is now deemed impure by the compiler.

      You're right, it looks a bit like an interface.

  • anon-3988 a day ago

    If I have to guess, it's something like: this function will call fopen, fwrite, etc. somewhere downstream.

    Of course, I am hoping to be corrected on this.

    • lblume 18 hours ago

      It might, if the effect is handled using these functions. You could also handle the effect using another handler, doing something entirely different. It is up to the caller how to handle effects, or propagate them up.
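That caller-side choice can be sketched in Python with a generator standing in for the effectful function (names like `save_report` and `run` are invented for the example; real AE languages track this in the type system):

```python
# The effectful function only *describes* the Write effect it performs;
# the caller decides what the effect means by supplying a handler.
def save_report(text):
    yield ("write", "report.txt", text)  # perform a Write effect
    return "saved"

def run(effectful, handler, *args):
    # Drive the effectful computation, routing each effect to the handler.
    gen = effectful(*args)
    try:
        effect = next(gen)
        while True:
            effect = gen.send(handler(effect))
    except StopIteration as stop:
        return stop.value

# A test handler: record effects instead of touching the filesystem.
log = []
def mock_handler(effect):
    log.append(effect)

print(run(save_report, mock_handler, "hello"))  # saved
print(log)  # [('write', 'report.txt', 'hello')]
```

Swapping `mock_handler` for one that actually opens and writes the file requires no change to `save_report` -- which is the dependency-injection flavor other commenters mention.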

nikita2206 a day ago

Have you thought of using generators as the closest example to compare effects to? I think they are much closer to effects than exceptions are. Great explainer anyway; it was the first time I had read about this idea and it was immediately obvious.

  • iamwil 14 hours ago

    Full blown algebraic effects are multi-shot, meaning you can resume from the same raised effect multiple times. Generators can only resume from a point in the execution a single time.

    But yes, you can implement single-shot effects with generators.

  • yen223 a day ago

    Exceptions are usually used because the syntax for "performing" exceptions (throw) vs handling exceptions (try-catch) is familiar to most programmers, and is basically the same as the syntax for "performing" an effect vs handling the effect, except that the latter also includes resuming a function.

    It would be cool to see how generators will be implemented with algebraic effects.

rixed a day ago

Maybe I'm too archaic, but I do not share the author's hope that algebraic effects will ever become prevalent. They can certainly be useful now and then, but the similarity to dynamic scoping brings back too many painful memories.

  • threeseed a day ago

    I wouldn't worry. Even the simplest and most friendly effects library, https://effect.website, shows clearly why they will never be a mainstream concept.

    The value proposition is only there when you have more elaborate concurrency needs. But that is a tiny fraction of the applications most people are writing today.

    • codethief 9 hours ago

      > The value proposition is only there when you have more elaborate concurrency needs

      I see the value every day when I look at a shitty piece of JavaScript or Python code that secretly modifies some global state behind the scenes.

      Or when I want to do dependency injection without resorting to frameworks doing some magic. Yes, I could drill my dependencies through the entire call stack, and often do because it's nicely explicit. But stuff like `logger` really doesn't belong in the function signature if it can be avoided.
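      In Python specifically, `contextvars` gives a taste of this: a dynamically scoped value that a caller installs for the extent of a call, much like an effect handler further up the stack, without threading a `logger` parameter through every signature. A sketch (the function names here are invented):

```python
import contextvars

# Dynamically scoped "handler": defaults to print, can be overridden per call.
logger_var = contextvars.ContextVar("logger", default=print)

def deep_business_logic(x):
    # No logger parameter: uses whatever "handler" the caller installed.
    logger_var.get()(f"processing {x}")
    return x * 2

def run_with_logger(logger, fn, *args):
    # Install a logger for the dynamic extent of the call, then restore.
    token = logger_var.set(logger)
    try:
        return fn(*args)
    finally:
        logger_var.reset(token)

captured = []
result = run_with_logger(captured.append, deep_business_logic, 21)
# result == 42; captured == ["processing 21"]
```

      Unlike real algebraic effects, nothing in `deep_business_logic`'s signature tells you it logs - which is exactly the trade-off being debated here.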

    • marcosdumay 17 hours ago

      Yeah, if you impose that effects must only be used to enforce code correctness, then only code that is hard to write correctly will benefit.

    • lblume 18 hours ago

      > the simplest and most friendly effects library

      Highly unlikely. JavaScript simply isn't a language built with this kind of evaluation model in mind, so an external library introducing completely orthogonal concepts surely should not be modeled as "most friendly". Don't get me wrong — Effect can be great! But the library also deliberately does not market itself as simple, rather as modular and multi-faceted, a "toolbox" from which developers may (should?) choose some tools and omit others.

      A language that supports effects first-hand, like the proposed "Ante", would provide a much more expressive, possibly simpler, and definitely friendlier approach than TS Effect ever could.

carterschonwald a day ago

I kinda want algebraic effects but where you can locally declare that certain effects are implicit/invisible in the types.

evelant a day ago

This is neat, but you don’t need a new language to leverage these concepts. https://effect.website/ The effect library brings all of this goodness to typescript (and then some) and is robust and production ready. I hate writing typescript without it these days.

  • iamwil 14 hours ago

    It's mistaken that you don't need a new language to implement full-blown algebraic effects.

    Full-blown algebraic effects are multi-shot, meaning that you can "resume the thrown exception" multiple times. The only way to do that is through delimited continuations, which aren't available as a feature in most languages and can't be emulated by other language features - unless you reimplement the call stack in userland.

    In addition, the handler can decide not to resume a raised effect at all, at which point it can exit. When it does, it's as if all the computation done since the handler wrapped the computation never happened. It's a way of doing backtracking. This is also possible thanks to delimited continuations.

    Both are things that you can't do in the effects library.

  • chamomeal a day ago

    I’ve heard that using Effect in TS is almost like using another language. Like it’s “all or nothing”: either your whole program is using Effect or it’s not. But obviously you can still use all of TypeScript’s features and ecosystem.

    Do you also feel like Effect is “all or nothing”? Or could you enclose certain parts of a program in Effect?

    • evelant 21 hours ago

      You can definitely only write parts of your app in effect. That’s what I do and it works great. Just be mindful about where you want that boundary between effect and non effect parts to be and it’s easy. The effect parts end up so much more maintainable and robust.

yyyk a day ago

The state effects example seems unlike the others - the other examples avoid syntax via indentation, omit mention of polymorphic effects, and use minimal syntax for functions - but for state effects you need to repeat "can Use Strings" on each function? Presumably one would want to group those under a type Strings or can Use Strings, at which point you have a namespace of sorts...

xixixao a day ago

It feels powerful. I think the effects in return types could be inferred.

But I share the concerns of others about the downsides of dependency injection. And this is DI on steroids.

For testing, I much prefer to “override” (mock) the single concrete implementation in the test environment, rather than to lose the static caller -> callee relationship in non-test code.

riyazahuja a day ago

What’s the advantage here of using effects over monads? It seems to me that all the proposed benefits of effects are reproducible/reproduced already by monads. Is it simply to get stateful actions while still being pure in a dynamic type system rather than static?

  • iamwil 14 hours ago

    My understanding is that monads can only be composed in specific orders or not at all without monad transformers.

    The "algebraic" part of algebraic effects is that the effects also come with implicit compositional rules that say how you can put them together. You can design a family effects with rules like commutativity and associativity, so that you can be sure of how to compose them.

  • hocuspocus 19 hours ago

    Better ergonomics, direct style syntax, no need for monad transformers. There might be second order advantages like small performance gains compared to monadic effects.

ollysb a day ago

As I understand it, this was the inspiration for React's hooks model. The compiler won't give you the same assurances, but in practice hooks do at least allow you to inject effects into components.

  • YuukiRey a day ago

    I don’t see the similarity. Since hooks aren’t actually passed to, or injected into components, there’s no way to evaluate the same hooks in different ways.

    I can’t have a hook that talks to a real API in one environment but to a fake one in another. I’d have to use Jest style mocking, which is more like monkey patching.

    From the point of view of a React end user, there’s also no list of effects that I can access. I can’t see which effects or hooks a component carries around, which ones weren’t yet evaluated, and so on.

    • ollysb a day ago

      You're right, it's the use of the Context that allows for the injection of the effects. It's also all handled at runtime which does unfortunately mean that the contexts supplying the effects can't be required at compile time.

  • ww520 a day ago

    It's different. Hooks in React are basically callbacks triggered by changes to their state dependencies. It's more similar to a signaling system.

knuckleheads a day ago

First time in a long while where I’ve read the intro to a piece about new programming languages and not recognized any of the examples given at all even vaguely. How times change!

ChuckMcM 13 hours ago

As a coding abstraction I really like this (not sure I'm completely understanding it, but it sounds handy). I wonder if that's because I spent a couple of years doing kernel programming at Sun? It's nice to be able to write sleep(foo) and know that when your code starts running again, it's because foo woke you up. That saves a ton of time wiring up control flow and trying to cover all the edge cases. Caveat the memory locality question, it would be fun to initialize all your functions waiting to catch a unit to work on and then write your algorithm explicitly in unit mutations.

charcircuit a day ago

This doesn't give a focused explanation of why. I don't see how dependency injection is a benefit when languages without algebraic effects also have dependency injection. It doesn't explain whether this dependency injection is faster to execute or compile, or what.

  • cryptonector a day ago

    It's the same as with monads:

    1) Testing. Write pure code with "effects" but, while in production the effects are real interactions with the real world, in testing they are mocked. This allows you to write pure code that does I/O, as opposed to writing pure code that doesn't do I/O and needs a procedural shell around it that does do the I/O -- you get to write tests for more of your code this way.

    2) Sandboxing. Like in (1), but where your mock isn't a mock but a firewall that limits what the code can do.

    (2) is a highly-desirable use-case. Think of it as a mitigation for supply-chain vulnerabilities. Think of log4j.

    Both of these are doable with monads as it is. Effects can be more ergonomic. But they're also more dynamic, which complicates the implementations. Dynamic features are always more costly than static features.
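    Point (1) can be sketched in the generator encoding of effects: the same pure workflow is run against a "production" handler that does real I/O and a test handler that mocks it. The `ReadFile` effect and both handlers are invented for this illustration.

```python
class ReadFile:
    """Effect request: the *intent* to read a file, not the read itself."""
    def __init__(self, path):
        self.path = path

def word_count(path):
    # Pure workflow: performs an effect, leaves the I/O to the handler.
    text = yield ReadFile(path)
    return len(text.split())

def run(gen, handler):
    # Drive the workflow, interpreting each effect with the handler.
    try:
        req = next(gen)
        while True:
            req = gen.send(handler(req))
    except StopIteration as stop:
        return stop.value

def real_handler(req):
    # Production: actually touch the filesystem.
    with open(req.path) as f:
        return f.read()

def mock_handler(req):
    # Test: no filesystem access at all, just canned data.
    return "three little words"

assert run(word_count("notes.txt"), mock_handler) == 3
```

    Swapping `mock_handler` for `real_handler` changes nothing in `word_count` - which is the dependency-injection reading of effects.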

    • charcircuit a day ago

      Again you are listing things that are possible but not explaining why it's better to do it via algebraic effects as opposed to the alternatives.

      For example if you were in a meeting with Oracle to try and convince them to invest 100 million dollars for adding algebraic effects to Java and its ecosystem how would you convince them it would be providing enough value to developers to justify it over some other improvement they may want to do.

      For example, "Writing mocks for tests using algebraic effects is better than using jmock because ..."

      • threeseed a day ago

        Take the following requirement:

        "A user has made an API call. I want you to in parallel race two concurrent tasks: check if the data is in (1) cache and (2) database. Whichever returns fastest return to the user. Otherwise kill the other task mid-flight and make sure the connection resources for both are cleaned up".

        This is trivial with an effect systems like ZIO and it will work flawlessly. That's the benefit of effect systems. Use cases like this are made easy.

        But now with JVM Virtual Threads there are frameworks like Ox: https://github.com/softwaremill/ox which allow you to achieve the same thing without effects. And how many times do you really need that sort of capability ?
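        For comparison, the quoted requirement is also expressible in plain asyncio, without an effect system: race the two lookups, return the winner, cancel the loser, and rely on each task's `finally` for cleanup. The fetch functions and delays below are invented for this illustration.

```python
import asyncio

async def from_cache():
    try:
        await asyncio.sleep(0.01)   # pretend cache lookup (faster)
        return "cached-value"
    finally:
        pass  # release the cache connection here

async def from_db():
    try:
        await asyncio.sleep(0.05)   # pretend database query (slower)
        return "db-value"
    finally:
        pass  # release the DB connection here (runs even when cancelled)

async def fetch():
    tasks = [asyncio.create_task(from_cache()), asyncio.create_task(from_db())]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()               # kill the loser mid-flight
    await asyncio.gather(*pending, return_exceptions=True)  # await cleanup
    return done.pop().result()

print(asyncio.run(fetch()))  # cached-value
```

        The effect-system argument is that ZIO-style code gets this resource safety and interruption by construction, whereas here correctness depends on remembering the `cancel`/`finally` choreography by hand.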

      • cryptonector a day ago

        The only reason I can think of (but I'm not the right person to ask) is ergonomics: in many cases it might be easier to push an effect handler than to build a whole monad or whatever. Elsewhere in this thread there's talk of effects solving the color problem.

  • yen223 a day ago

    The way dependency injection is implemented in mainstream languages usually involves using metaprogramming to work around the language, not with the language. It's not uncommon to get errors in dependency-injected code that would be impossible to get with normal code.

    It's interesting to see how things can work if the language itself was designed to support dependency injection from the get-go. Algebraic effects is one of the ways to achieve that.

    • vlovich123 a day ago

      Don’t algebraic effects offer a compelling answer to the color problem and all sorts of related similar things?

      • threeseed a day ago

        But they also introduce their own color-like problems.

        For example with Scala we have ZIO which is an effect system where you wrap all your code in their type e.g. getName(): ZIO[String]. And it doesn't matter if getName returns immediately or in the future which is nice.

        But then the problem is that you can't use normal operators, e.g. for/while/if-else; you need to use their versions, e.g. ZIO.if / ZIO.repeat.

        So you don't have the colour problem because everything is their colour.

        • OtomotO a day ago

          But that's only a problem if it's a library and not done on the language level?!

          • threeseed a day ago

            But in the research languages listed they still are colouring function types.

            So it doesn't seem to matter whether it's a library or in the language.

            Either everything is an effect. Or you have to deal with two worlds of code: effects and non-effects.

            • sullyj3 13 hours ago

              Functions can be polymorphic in their effectfulness, so the coloring problem isn't. Functions only become incompatible where you've made them incompatible on purpose - the whole point of annotating functions' effectfulness is to statically know you're not accidentally invoking particular effects where you promised you wouldn't.

            • jfecher 10 hours ago

              Developer of Ante here. Functions in languages with effect systems are usually effect polymorphic. You can see the example in the article of a polymorphic map function which accepts functions performing any effect(s) including no effect(s). For this reason effect systems are one of the solutions to the "what color is your function" problem.
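              Effect polymorphism can be mimicked in the generator encoding too: a map that forwards whatever effects `f` performs (via `yield from`), so it works for pure and effectful functions alike. The `Log` effect and function names are invented for this sketch; real effect systems check this polymorphism statically, which Python of course does not.

```python
class Log:
    """Effect request: emit a log message."""
    def __init__(self, msg):
        self.msg = msg

def effect_map(f, xs):
    # "Effect polymorphic": forwards f's effects, whatever they are.
    out = []
    for x in xs:
        out.append((yield from f(x)))
    return out

def pure_double(x):
    return x * 2
    yield  # unreachable; just makes this a generator performing no effects

def logged_double(x):
    yield Log(f"doubling {x}")  # performs the Log effect
    return x * 2

def run(gen, handler):
    try:
        req = next(gen)
        while True:
            req = gen.send(handler(req))
    except StopIteration as stop:
        return stop.value

logs = []
assert run(effect_map(pure_double, [1, 2]), logs.append) == [2, 4]
assert run(effect_map(logged_double, [1, 2]), lambda r: logs.append(r.msg)) == [2, 4]
# logs == ["doubling 1", "doubling 2"]
```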

            • vlovich123 17 hours ago

              Maybe, but even if that's the case, you've now got just plain and colored, whereas before you'd have plain, color A, color B, color C if you go beyond async as the only other color (e.g. whether you're accepting something by reference or value, whether it's a mutable reference, whether or not the function is compile-time, etc.).

              Still seems better to me if you collapse colors >= 1 into a single language system.

    • charcircuit a day ago

      >It's interesting

      Which is why I was asking for that interesting thing to be written in the article - why it would be better.

  • iamwil 13 hours ago

    Because algebraic effects are unfamiliar, we explain parts of them using things that are familiar to most developers.

    This is just like the story of the blind men and the elephant. The elephant (algebraic effects) is a whole other beast, but to help the beginner understand the first parts of it, we say the trunk is like a snake (resumable exceptions) or that the ear is like a fan (dependency injection).

    Algebraic effects are a new type of control flow. When an effect is raised, the handler (defined further up the call stack) has several options:

    - handle the effect and resume the computation where the effect was raised.

    - handle the effect, but resume the computation multiple times back to where the effect was raised.

    - decide not to handle the effect and just exit, at which point execution resumes right after where the handler was called further up the stack, and everything executed since then is as if it never happened. This is a way of doing backtracking.

    In addition, what's algebraic about algebraic effects is that they usually come in a group and are designed with a compositional algebra, so you can safely compose them in different ways. A common property is commutativity, so it doesn't matter in what order you execute them. Of course, not all algebraic effects are commutative, as that has to be designed by the implementer. Monads are different in that they often don't compose without monad transformers, which can be finicky.

    Hence, with all these things together, you can use them to implement control flow in other languages that are typically baked into the language, such as exceptions, coroutines, generators, async/await, probabilistic programming, backtracking search, dependency injection. You might also invent your own!

    But most commonly here, we use them to separate the *intent* to do a side-effect from the actual execution of the side-effect. That way, side-effects are controlled, and it becomes easier to reason about our code.

thdhhghgbhy a day ago

Just seems to be a way of organising code really.

  • aatd86 17 hours ago

    that's a good question actually. We can define and assign side effects. But is it sufficient to control the list of side effectful operations of a function? Can the compiler check that a function is pure besides its assigned side effects?

    What about state modified via closures?

    Or is it only for system side-effects like filesystem operations?

    • thdhhghgbhy 16 hours ago

      >Can the compiler check that a function is pure besides its assigned side effects?

      The compiler can't. It is up to the programmer to make sure that a database effect doesn't, say, write out to a file as well.

      So in the end this is just a novel way to organise code.

      • jfecher 10 hours ago

        If the database effect wrote to a file, it'd require the `IO` effect, and code using it would need that effect as well. A compiler can generally show a function to be free of most side effects if it uses no effects. The exceptions are things like divergence: as long as the language is Turing complete, you can't prove it won't loop forever. Another exception is extern functions, where the compiler can't verify the correctness of the type signature. Different languages handle these differently, but if users are allowed to write any (and the language doesn't force them to have an IO effect) then they can be a source of unsafety. Languages like Koka and Effekt are considered pure, though, and enforce this through their effect systems.

        • aatd86 2 hours ago

          So there is side-effect propagation and inference, I guess.

          That has to require some help from the tooling/IDE.

artemonster a day ago

With AE you get for free: generators, stackful coroutines, dependency injection, "dynamic" variables (as in anti-lexical ones), resumable exceptions, advanced error handling, and much more - all packaged neatly into ONE concept. I dream of TS and Effekt some day merging :)

  • evelant a day ago

    You can already have all of this goodness (and then some) in typescript https://effect.website/ -- Writing TS without Effect is difficult for me now, it's like a whole new and better language.

thrance a day ago

I like the idea of Algebraic effects but I'm a little skeptical of the amount of extra syntax.

Let's say I'm building a web server, my endpoint handler now needs to declare that it can call the database, call the s3, throw x, y and z... And same story for most of the functions it calls itself. You solved the "coloration problem" at the cost of adding a thousand colors.

Checked exceptions are the ideal error handling (imho) but no one uses them properly because it's a hassle declaring every error types a function may return. And adding an exception to a function means you need to add/handle it in many of its callers, and their callers in turn, etc.

  • hocuspocus a day ago

    Two things:

    - Only your outermost application entry-point will need to stack all effects; you'll wire the server, client, DB and so on in your `main` method that returns Unit \ (IO & Abort & Result & Transaction & Env ...). In specific modules individual functions should use only the narrowest effect.

    - Good ergonomics for type polymorphism and type inference are obviously paramount to adding algebraic effects in a way that is minimally invasive. It's obviously not trivial and mostly a PL research field at this point, but existing implementations show that there's potential for better DX compared to alternatives (monadic effect systems, mostly).

nabla9 a day ago

Yet another thing Common Lisp programmers have been doing since the time of hoe and axe.

  • ctenb a day ago

    But Lisp is untyped, which is a major difference. EDIT: dynamically typed I mean