[PHP-DEV] PHP True Async RFC Stage 4

Good day, everyone. I hope you're doing well.

I’m happy to present the fourth version of the RFC. It wasn’t just me
who worked on it — members of the PHP community contributed as well.
Many thanks to everyone for your input!

https://wiki.php.net/rfc/true_async

**What has changed in this version?**

The RFC has been significantly simplified:

1. Components (such as TaskGroup) that can be discussed in separate
RFCs have been removed from the current one.
2. Coroutines can now be created anywhere — even inside shutdown_function
(see the sketch after this list).
3. A Memory Management and Garbage Collection section has been added.
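
To illustrate point 2, a coroutine can now be started even from a shutdown
handler. A simplified sketch (see the RFC for the exact API; the telemetry
call is just an example):

use function Async\spawn;

register_shutdown_function(function () {
    // Per the RFC, spawning is now allowed here as well; earlier drafts
    // restricted where coroutines could be created.
    spawn(fn() => error_log('telemetry flushed after the response'));
});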

Although work on the previous API RFC was interrupted and we weren’t
able to include it in PHP 8.5, it still provided valuable feedback on
the Async API code.

During this time, I managed to refactor and optimize the TrueAsync
code, which showed promising performance results in I/O scenarios.

A test integration between **NGINX UNIT** and the **TrueAsync API**
was implemented to evaluate the possibility of using PHP as an
asynchronous backend for a web server:
https://github.com/EdmondDantes/nginx-unit/tree/true-async/src/true-async-php

Meanwhile, the project has come very close to beta status.

Once again, I want to thank everyone who supported me during difficult
times, offered advice, and helped develop this project.

Given the maturity of both the code and the RFC, this time I hope to
proceed with a vote.

Wishing you all a great day, and thank you for your feedback!


On Sun, Oct 5, 2025 at 7:51 AM Edmond Dantes <edmond.ht@gmail.com> wrote:

Good day, everyone. I hope you’re doing well.

I’m happy to present the fourth version of the RFC.

...

Hi, I am so looking forward to this capability!

Just a quick question - other methods that tried to provide async/parallel type functionality previously were only available via the CLI.

I can see a big opportunity for people running websites with Apache + PHP-FPM where on each page request you do stuff like:

Call API 1 (e.g. external auth component)
Call API 2 (e.g. product catalogue)
Call API 3 (e.g. setup payment processor)

I’m hoping that you could put these three calls within a Scope and therefore have all three calls run at the same time, and only have to wait as long as the slowest API, rather than the combined total of all three response times.
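
Something like this is what I have in mind. This is just a sketch: I’m guessing at the Scope/spawn/await names from the RFC draft, and the call*Api() functions are made up:

use Async\Scope;
use function Async\await;

// Sketch only; function/method names follow the RFC draft and may differ.
// call*Api() are placeholders for the real HTTP calls.
$scope = new Scope();

$auth    = $scope->spawn(fn() => callAuthApi());     // API 1
$catalog = $scope->spawn(fn() => callCatalogApi());  // API 2
$payment = $scope->spawn(fn() => callPaymentApi());  // API 3

// Total wait is roughly the slowest of the three calls, not their sum.
$authResult    = await($auth);
$catalogResult = await($catalog);
$paymentResult = await($payment);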

I didn’t see anything in the RFC about this, so just wanted to check.

Thanks,
Adam

Hello.

Just a quick question - other methods that tried to provide async/parallel type functionality previously were only available via the CLI.

TrueAsync itself is integrated into PHP in such a way that it is
always active. The scenario you described is technically possible (and
it can also be useful, for example for sending telemetry in a way that
doesn’t interfere with request processing), but it’s not particularly
relevant in the context of modern development.

Why?

Because client requests are usually processed sequentially, step by
step. Parallel tasks are rare. Therefore, from the server’s
perspective, the main benefit of concurrency is the ability to handle
multiple requests within a single process. The same thing that Swoole,
AMPHP, and other modern backend solutions do.

And this is one of the reasons why FPM is morally outdated and
therefore not used in stateful backends. That’s why you encounter CLI
so often.

On Mon, Oct 6, 2025, at 12:34 AM, Edmond Dantes wrote:

Hello.

Just a quick question - other methods that tried to provide async/parallel type functionality previously were only available via the CLI.

TrueAsync itself is integrated into PHP in such a way that it is
always active. The scenario you described is technically possible (and
it can also be useful, for example for sending telemetry in a way that
doesn’t interfere with request processing), but it’s not particularly
relevant in the context of modern development.

Why?

Because client requests are usually processed sequentially, step by
step. Parallel tasks are rare.

This is simply not true. The example you're replying to is quite common.

It's even more common for the database. WordPress, Drupal, and many other such systems frequently run different DB queries to build different components of a page. (Blocks, widgets, components, the names differ.) Being able to do those in parallel is a natural optimization that we were thinking about in Drupal nearly 15 years ago, but it wasn't viable at the time.

Therefore, from the server’s
perspective, the main benefit of concurrency is the ability to handle
multiple requests within a single process.

That is *A benefit*. It is not the *only benefit*. Being able to compress the time of each request in a shared-nothing model is absolutely valuable.

Remember, in the wild, PHP-FPM and mod_php are by orders of magnitude the most common ways PHP is executed. React, Swoole, etc. are rounding errors in most of the market. And the alternate runtime with the most momentum is FrankenPHP, which reuses processes but is still "one request in a process at a time."

The same thing that Swoole,
AMPHP, and other modern backend solutions do.

And this is one of the reasons why FPM is morally outdated and

I am going to assume this is a translation issue, because "morally outdated" is the wrong term here. "Morally outdated" is how you'd describe "racial segregation is good, actually." Not "this technology is slower than we need it to be." You probably mean "severely outdated" or something along those lines.

Which, as I explained above, is simply not true. PHP is going to be running in a mostly shared-nothing environment for the foreseeable future. Those use cases still would benefit from async support.

--Larry Garfield

Hi.

This is simply not true. The example you're replying to is quite common.

It’s probably my poor English. So I’ll try to rephrase the idea:
The majority of database queries are executed sequentially, step by
step. Not all queries. Not always. But most of them.
This is true even in languages that already have async.

That is *A benefit*. It is not the *only benefit*. Being able to compress the time of each request in a shared-nothing model is absolutely valuable.

(It’s important not to overestimate this model; lately you sometimes
hear complaints that the ultra-trendy immutable philosophy leads to
terrible performance :))

A stateful worker does not automatically mean active sharing of state
between requests. It gives the developer the choice of what can and
cannot be shared. You have a choice. If you want all services to
follow the immutable model — you can do that. But now you don’t have
to pay for compilation or initialization. You have complete creative
freedom.

Remember, in the wild, PHP-FPM and mod_php are by orders of magnitude the most common ways PHP is executed. React, Swoole, etc. are rounding errors in most of the market. And the alternate runtime with the most momentum is FrankenPHP, which reuses processes but is still "one request in a process at a time."

Almost no one wants to spend time building code with a technology that
isn’t supported. So when people want to do things like that, they
simply choose another language.
I’m not saying that async isn’t supported in CGI mode... but..
it’s just that a gain of a few milliseconds is unlikely to be noticeable.

I am going to assume this is a translation issue, because "morally outdated" is the wrong term here.

Thank you! That’s true. But a more accurate translation would be: it’s
a technology that has become outdated not because of time, but because
the circumstances and requirements have changed. Back in the years
when CGI was evolving, things were different. There were no servers
with a dozen cores.

Hi!

Hi.

This is simply not true. The example you’re replying to is quite common.

It’s probably my poor English. So I’ll try to rephrase the idea:
The majority of database queries are executed sequentially, step by
step. Not all queries. Not always. But most of them.
This is true even in languages that already have async.

I find this a bit difficult to contextualize. I agree that most code was probably written following the principle of one query at a time, even in languages that already support async. But at the same time, if you’re tasked with optimizing the time a certain HTTP endpoint takes to execute, then caching data or rethinking the query execution flow are among the top contenders for change. What I’m trying to say is that I would look at this through a different lens. You’re right that just because async is available doesn’t mean that queries will take advantage of it by default. But for the critical parts of a system that require optimizing execution time, having async capabilities can easily drive how the code is restructured to meet the need for performance improvements.

That is A benefit. It is not the only benefit. Being able to compress the time of each request in a shared-nothing model is absolutely valuable.
(It’s important not to overestimate this model; lately you sometimes
hear complaints that the ultra-trendy immutable philosophy leads to
terrible performance :))

A stateful worker does not automatically mean active sharing of state
between requests. It gives the developer the choice of what can and
cannot be shared. You have a choice. If you want all services to
follow the immutable model — you can do that. But now you don’t have
to pay for compilation or initialization. You have complete creative
freedom.

Talking about stateful workers and shared state here is a bit ambiguous, at least for me, to be honest. When you say the developer has a choice, my interpretation is that you mean the PHP developer can choose what to share and what not to share by defining static variables, much like most other languages implement the Singleton pattern. In PHP, especially with the share-nothing model, even static variables are cleared out. While there’s no denying that there is value in creating a shareable space for performance gains, as seen in the popularization of Swoole, Laravel Octane, FrankenPHP worker mode, etc., there’s still a point to be made that 30 years’ worth of PHP code exists in the wild assuming that static variables get cleared out between requests, and porting it to newer execution models is a non-trivial task. This is where I think Larry’s point comes in strong: most of these new “modern / non-legacy” execution models are just a rounding error in the amount of PHP code being executed every day, and support for async execution in FPM would be a game changer for code that is too hard to lift and shift into worker mode, but not so hard to adjust in the next PHP upgrade to, for example, parallelize database queries.
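
To make that assumption concrete, here is a tiny hypothetical illustration; the class is made up, but the semantics are plain PHP:

final class RequestCounter
{
    private static int $count = 0;

    public static function hit(): int
    {
        // Classic FPM / mod_php: process state is discarded after each
        // response, so this counter is effectively always 1 per request.
        // Worker runtimes (Swoole, Octane, FrankenPHP worker mode): the same
        // process serves many requests, so the counter keeps growing and any
        // request-specific data stored this way leaks across requests.
        return ++self::$count;
    }
}

echo RequestCounter::hit(); // always 1 under FPM; 1, 2, 3, ... in worker mode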

Remember, in the wild, PHP-FPM and mod_php are by orders of magnitude the most common ways PHP is executed. React, Swoole, etc. are rounding errors in most of the market. And the alternate runtime with the most momentum is FrankenPHP, which reuses processes but is still “one request in a process at a time.”

Almost no one wants to spend time building code with a technology that
isn’t supported. So when people want to do things like that, they
simply choose another language.
I’m not saying that async isn’t supported in CGI mode… but..
it’s just that a gain of a few milliseconds is unlikely to be noticeable.

If you have a report that executes 3 queries and each query averages between 4 and 5 seconds, the report takes up to 15 seconds to run in PHP. The ability to execute async code in FPM would mean roughly a 3x performance gain on a report like this; that is far from a few milliseconds. And to be honest, the biggest gain for me would be the ability to keep the application’s contextual logic within a single execution unit. One very common route taken with today’s options is to break those 3 queries into separate HTTP endpoints and let the frontend stitch them together, which provides a similar performance gain by taking advantage of parallel JS/browser requests, since PHP is unable to do this itself.
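
For illustration, parallelizing such a report could look roughly like this. The spawn/await names are taken from the RFC draft, runQueryA/B/C and buildReport() are placeholders for the real data-access code, and this assumes the underlying driver cooperates with the async engine:

use function Async\spawn;
use function Async\await;

// Kick off the three report queries concurrently instead of one after another.
// runQueryA/B/C and buildReport() are placeholders, not real API.
$a = spawn(fn() => runQueryA());
$b = spawn(fn() => runQueryB());
$c = spawn(fn() => runQueryC());

// Wall-clock time is roughly the slowest query (~5s) instead of the sum (~12-15s).
$report = buildReport(await($a), await($b), await($c));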

Marco Deleu

Deleu deleugyn@gmail.com wrote on 06.10.2025 at 19:29 CEST:

...

If you have a report that executes 3 queries and each query averages between 4 and 5 seconds, the report takes up to 15 seconds to run in PHP. ...

I’d like to mention that running queries in parallel can give better performance, but it depends on the resources available on the database server. If disk I/O is the bottleneck and a single database server is used, parallel execution can even be slower in the worst case.
For MySQL/MariaDB, parallel execution normally helps (see https://dev.mysql.com/doc/refman/8.0/en/faqs-general.html#faq-mysql-support-multi-core).
For modern analytical databases, which by default already use multiple CPU cores per query, column stores, SIMD, and many other optimizations, client-side parallelization is mostly unnecessary, since establishing a new connection is often slower than executing the query itself.

Regards
Thomas


Hi.

But for the critical parts of a system that requires optimization of the execution duration

If you want to improve performance, you need to optimize SQL queries,
not try to execute them in parallel. This can bring down the entire
database (like it did today :) )

There are only a few patterns where multiple asynchronous queries can
actually be useful. Hedged requests, for example. Question: how often
have you seen this pattern in PHP-FPM applications? Probably never :)
I know it.
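
For those who haven’t seen it, the pattern looks roughly like this. Scope, spawn() and dispose() follow the RFC draft; awaitFirst() and queryReplica() are purely illustrative placeholders:

use Async\Scope;

function hedgedQuery(string $sql): mixed
{
    $scope = new Scope();
    try {
        // queryReplica() is a placeholder for your data-access call.
        $primary = $scope->spawn(fn() => queryReplica('replica-1', $sql));
        $hedge   = $scope->spawn(function () use ($sql) {
            // Give the primary a 50 ms head start; assumes usleep() suspends
            // only this coroutine under TrueAsync.
            usleep(50_000);
            return queryReplica('replica-2', $sql);
        });

        // "First completed wins" (awaitFirst() is a placeholder, not confirmed API).
        return awaitFirst([$primary, $hedge]);
    } finally {
        $scope->dispose();   // cancel whichever attempt is still running
    }
}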

Right now, there are only two significant PHP frameworks that are
ready for stateful execution, and only one of them supports
asynchronous stateful execution. There are several reasons for this,
and one of them is whether or not the language itself provides support
for it.

Why is stateful execution the primary environment for async? Because
async applications are servers. And FPM is not a client-server
application. It's a plugin for a server. For a very long time, PHP was
essentially just a plugin for a web server. And a client-server
application differs from a plugin in that it starts up and processes
data streams while staying in memory. Such a process has far more use
cases for async than a process that is born and dies immediately. This
is the distinction I’m referring to.

As for the issue with frameworks: a project with several tens of
thousands of lines of code was adapted for Swoole in 2–3 weeks. It
didn’t work perfectly, sometimes it would hang, but to say that it was
really difficult… no, it wasn’t. Yes, there is a problem, yes, there
are global states in places. But if the code was written with at least
some respect for SOLID principles, this can be solved using the
Context pattern. And in reality, there isn’t that much work involved,
provided the abstractions were written reasonably well.

If you have a report that executes 3 queries and each query averages between 4 to 5 seconds,

If an SQL query takes 3 to 5 seconds to execute, just find another developer :)

Developers of network applications (I’m not talking about PHP) have
accumulated a lot of optimization experience over many years of trial
and error — everything has long been known. Swoole, for example, has a
huge amount of experience, having essentially made the classic R/W
worker architecture a standard in its ecosystem.

Of course, you might say that there are simple websites for which FPM
is sufficient. But over the last two years, even for simple sites,
there’s TypeScript — and although its ecosystem may be weaker, the
language may be more complex for some people, and its performance
slightly worse — it comes with async, WebSockets, and a single
language for both frontend and backend out of the box (a killer
feature). And this trend is only going to grow stronger.

Commercial development of mid-sized projects is the only niche that
cannot be lost. These guys need Event-Driven architecture, telemetry,
services. And they ask the question: why choose a language that
doesn’t support modern technologies. Async is needed specifically for
those technologies, not for FPM.

On Mon, Oct 6, 2025, at 1:50 PM, Edmond Dantes wrote:

Of course, you might say that there are simple websites for which FPM
is sufficient. But over the last two years, even for simple sites,
there’s TypeScript — and although its ecosystem may be weaker, the
language may be more complex for some people, and its performance
slightly worse — it comes with async, WebSockets, and a single
language for both frontend and backend out of the box (a killer
feature). And this trend is only going to grow stronger.

Commercial development of mid-sized projects is the only niche that
cannot be lost. These guys need Event-Driven architecture, telemetry,
services. And they ask the question: why choose a language that
doesn’t support modern technologies. Async is needed specifically for
those technologies, not for FPM.

We must have a different definition of mid-sized, because FPM has been used for numerous mission critical large sites, like government and university sites, and has been fine. And such sites still benefit from faster telemetry, logging, etc.

Regardless, we can quibble about the percentages and what people "should" do; those are all subjective debates.

The core point is this: Any async approach in core needs to treat the FPM use case as a first-class citizen, which works the same way, just as reliably, as it would in a persistent CLI command. That is not negotiable.

If for no other reason than avoiding splitting the ecosystem into async/CLI and sync/FPM libraries, which would be an absolute disaster.

--Larry Garfield

On 06/10/2025 20:18, Larry Garfield wrote:

The core point is this: Any async approach in core needs to treat the FPM use case as a first-class citizen, which works the same way, just as reliably, as it would in a persistent CLI command. That is not negotiable.

If for no other reason than avoiding splitting the ecosystem into async/CLI and sync/FPM libraries, which would be an absolute disaster.

I 100% agree. In fact, perhaps the single biggest benefit of having a core async model would be to reverse the current fragmentation of run-times and libraries.

On 06/10/2025 19:50, Edmond Dantes wrote:

If you want to improve performance, you need to optimize SQL queries,
not try to execute them in parallel. This can bring down the entire
database (like it did today :) )

You talk as though "the database" is a single resource, which can't be scaled out. That's not the case if you have a scalable cluster of SQL/relational databases, or a dynamically sharded NoSQL/document-based data store, or are combining data from unconnected sources.

a project with several tens of
thousands of lines of code was adapted for Swoole in 2–3 weeks. It
didn’t work perfectly, sometimes it would hang ...

2-3 weeks of development to get to something that's not even production ready is a significant investment. If your application's performance is bottlenecked on external I/O (e.g. data stores, API access), the immediate gain is probably not worth it.

For those applications, the only justification for that investment is that it unlocks a further round of development to use asynchronous I/O on those bottlenecks. What would excite me is if we can get extensions and libraries to a point where we can skip the first part, and just add async I/O to a shared-nothing application.

--
Rowan Tommins
[IMSoP]

In my opinion, PHP must add asynchronous and concurrent support as soon as possible, and asynchronous IO must be regarded as a first-class citizen.

Over the past few decades, the one-process-one-request model of PHP-FPM has been remarkably successful; it is simple and reliable. However, modern web applications do much more than merely read from databases or caches or handle internal HTTP requests; frequent cross-domain requests have become the norm.

The response times for these external HTTP calls are often unpredictable. Under the PHP-FPM model, delays or timeouts from certain external APIs can easily trigger a cascading failure, bringing down the entire system.

Since the emergence of ChatGPT in 2024, many software systems have been trying to integrate AI models from OpenAI, Anthropic, Google Gemini, and others.

These APIs often take tens of seconds to respond, and PHP-FPM's multi-process model is almost unusable in such scenarios.

Only asynchronous I/O offers a real solution to these challenges.

WordPress, as the PHP application with the largest number of users, may need to add LLM capabilities in the future.

If PHP cannot provide this support, WordPress developers may consider abandoning PHP and refactoring in another programming language that supports asynchronous I/O.

PHP must set aside its past achievements and fully embrace the Async IO tech stack.

Tianfeng Han
10/10/2025


On Mon, 6 Oct 2025, Larry Garfield wrote:

...

The core point is this: Any async approach in core needs to treat the
FPM use case as a first-class citizen, which works the same way, just
as reliably, as it would in a persistent CLI command. That is not
negotiable.

If for no other reason than avoiding splitting the ecosystem into
async/CLI and sync/FPM libraries, which would be an absolute disaster.

I also agree with this.

I tried reading the RFC today, but I ran out of time. It is *59*
pages when printed (I didn't print it).

I think we need to be very careful that we do not introduce a feature
that lets our users run into all sorts of problems. The semantics
of such a complex feature are going to be really important, especially
when it comes to reasoning about which direction the code runs and
flows in, and how errors are treated.

I recently read "Notes on structured concurrency, or: Go statement
considered harmful"
(https://vorpus.org/blog/notes-on-structured-concurrency-or-go-statement-considered-harmful/),
which seems like an entirely sensible way of proceeding. Although the
title talks about Go and its problems, the "Structured Concurrency"
approach is more of a way of doing concurrency right, without the
possibility of our users getting into trouble.

I don't think the RFC as-is is close to this at all — but I have mostly
skimmed it so far.

I also believe that discussing how this should work would be better
done with a group of people - preferably in real time - and not as the
idea and implementation of a single person. I know others have been
reviewing and commenting on it, but I don't think that's quite the same.

Concurrency in all its forms is a complex subject, and we really can't
afford to get this wrong, as we'll have to live with the concepts for
a long time.

cheers,
Derick

--
https://derickrethans.nl | https://xdebug.org | https://dram.io

Author of Xdebug. Like it? Consider supporting me: Xdebug: Support

mastodon: @derickr@phpc.social @xdebug@phpc.social

Hello.

In my opinion, PHP must add asynchronous and concurrent support as soon as possible, and asynchronous IO must be regarded as a first-class citizen.

Thank you for your words.
And especially thank you for your major contribution to the
development of asynchrony in PHP. My words may sound clichéd, but now
is a good moment to say them.

If it weren’t for the Swoole project, many PHP developers wouldn’t
have had the opportunity to try asynchronous PHP out of the box along
with a full set of tools. It was fantastic.

The experience of working with Swoole became a key source of knowledge
when creating this RFC.

I would also like to express my gratitude to the maintainer of Swow, twose.

The energy and persistence with which you have tried to make the
language better out of love for PHP deserves the utmost respect —
especially because it is backed by professionalism and technical
competence. That’s an awesome combination.

Thanks again!

Hello.

I tried reading the RFC today, but I ran out of time. It is *59* pages when printed (I didn't print it).

...

I don't think the RFC as-is is close to this at all — but I have mostly skimmed it so far.

**Thank you for the feedback.**

This time there will be a vote. If this RFC is not accepted, I promise
that I will not create a fifth version. So if anyone has something to
say, please feel free to speak openly. Please.

On Tue, Oct 14, 2025, at 1:32 AM, Edmond Dantes wrote:

Hello.

I tried reading the RFC today, but I ran out of time. It is *59* pages when printed (I didn't print it).

...

I don't think the RFC as-is is close to this at all — but I have mostly skimmed it so far.

**Thank you for the feedback.**

This time there will be a vote. If this RFC is not accepted, I promise
that I will not create a fifth version. So if anyone has something to
say, please feel free to speak openly. Please.

Like Derick, I am still highly skeptical about this design. It's vastly improved from the first version back in the spring, but there are still numerous footguns in the design that will lead me to voting No on its current iteration. Mainly, we should not be allowing anything but structured, guaranteed async blocks (as described in the article Derick linked). It is still perfectly possible to build completely-async systems that way, but it prevents writing code that would only work in such an all-encompassing system.

I very much want to see it evolve further in that direction before a vote is called and we're locked into a system with so many foot guns built in.

--Larry Garfield

It's vastly improved from the first version back in the spring, but there are still numerous footguns

Which specific footguns? $scope var?

Mainly, we should not be allowing anything but structured

If you always follow the “nursery” rules, you always have to define a
coroutine just to create a nursery, even when the coroutine itself
isn’t actually needed.
That’s why we end up with hacks like `Task.detached` in *Swift*, and
*Kotlin* keeps trying to invent workarounds.

**TrueAsync RFC** takes a different approach and gives the programmer
maximum flexibility while still complying with every principle of
structured concurrency.
At the same time, the programmer gains two styles of structured
concurrency organization, one of which fully matches Trio.

I very much want to see it evolve further in that direction before a vote is called and we're locked into a system with so many foot guns built in.

Such an approach would require more changes to the code, and I don’t
see how it would protect the programmer from mistakes any better than
this RFC does.
Of course, the with-style syntax would allow for maximum safety when
working with tasks, but that’s not an issue with this RFC.

The **Trio** model is not perfect; Kotlin and other languages do not
adopt it (maybe by accident — or maybe not).
It’s not suitable for all types of tasks, which is why the criticism is valid.

Kotlin is criticized for storing a Scope inside objects:
“Long-living CoroutineScope stored in objects almost always lead to
resource leaks or forgotten jobs.”
However, there is no other way to solve the problem when coroutines
need to be launched within a Scope gradually rather than all at once.

But... Ok...

async def background_manager():
    # This coroutine exists only to host the nursery; the loop never exits,
    # so every handler is tied to one long-lived scope.
    async with trio.open_nursery() as nursery:
        while True:
            event = await get_next_event()
            nursery.start_soon(handle_event, event)

^^^
An example of how the pursuit of an “ideal” ends up producing ugly code.

My position is this: **TrueAsync** should support the best patterns
for specific use cases while still remaining convenient for the
majority of tasks.
The fact that certain tools require careful handling applies to all
programming languages. That doesn’t mean those tools shouldn’t exist.

On Sun, Oct 5, 2025, at 07:23, Edmond Dantes wrote:

Good day, everyone. I hope you’re doing well.

I’m happy to present the fourth version of the RFC.

https://wiki.php.net/rfc/true_async

...

Hello,

I’m not even halfway done with a list of comments and questions, but I have one that continues to bother me while reading, so I figure I will just ask it.

Why doesn’t scope implement Awaitable?

— Rob

Hello.

Why doesn’t scope implement Awaitable?

Let’s say there’s a programmer named *John* who is writing a library,
and he has a function that calls an external handler.
Programmer *Robert* wrote the `externalHandler`.

function processData(string $url, callable $externalHandler): void
{
    // ...
    $externalHandler($result);
}

John knows which contracts are inside processData, but he knows
nothing about `$externalHandler`.
From the perspective of `processData`, `$externalHandler` acts as a
**black box**.
If John uses `await()` on that black box, it may lead to an infinite wait.

There are two solutions to this problem:

1. Delegate full responsibility for the program’s behavior to
*Robert*, that is, to `$externalHandler`.
2. Establish a limiting contract.

Therefore, if a `Scope` needs to be awaited, it can only be done
together with a cancellation token.

In real-world scenarios, awaiting a `Scope` during normal execution
makes no sense, because you have a `cancellation policy`.
This means that at any moment you need to, you can dispose() the
`Scope` and thus interrupt the execution of tasks inside the black box.
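
In code, the contract might look roughly like this. Scope with spawn(), await() taking a cancellation, and dispose() are as described above; Async\timeout() and loadData() are illustrative placeholders rather than final API:

use Async\Scope;

function processData(string $url, callable $externalHandler): void
{
    $result = loadData($url);   // placeholder for the real work

    $scope = new Scope();
    $scope->spawn(fn() => $externalHandler($result));   // the black box

    try {
        // Await the Scope only together with a limiting contract: if the
        // handler never finishes, we are released after 5 seconds anyway.
        // Async\timeout() is illustrative; any cancellation token works here.
        $scope->await(Async\timeout(5000));
    } finally {
        // Cancellation policy: anything the black box left running is
        // cancelled when the Scope is disposed.
        $scope->dispose();
    }
}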

For the `TaskGroup` pattern, which exists in the third version of the
RFC, awaiting is a relatively safe operation, because in this case we
assume that the code is written by someone who has direct access to
the agreements and bears full responsibility for any errors.

So, `Scope` is intended for components where responsibility is shared
between different programmers, while `TaskGroup` should be used when
working with a clearly defined set of coroutines.

On Wed, Oct 15, 2025, at 09:53, Edmond Dantes wrote:

Hello.

...

So, Scope is intended for components where responsibility is shared
between different programmers, while TaskGroup should be used when
working with a clearly defined set of coroutines.

I don’t get it. What do different programmers working on a program have to do with whether or not Scope implements Awaitable? Scope has an await method, so it should be Awaitable. await() takes a cancellation, and thus anything Awaitable can be cancelled at any time. I don’t see why Scope is special in that regard.

If John uses await() on that black box, it may lead to an infinite wait.

This is true of any software or code. Knowing whether or not something will ever complete is called the Halting Problem, and it is unsolvable in the general sense. You can await() a read of an infinite file, or of a remote file that will take 5 years to read because it is being read at 1 bps. Your clock can fry on your motherboard, preventing timeouts from ever firing. Your disk can die mid-read, preventing it from ever sending you any data. There is so much that can go wrong. Saying that something with an await method isn’t Awaitable because it may never return is true for ALL Awaitable tasks as well. It isn’t special.

— Rob

I don’t get it. What do different programmers working

My main point was about contracts.
The two developers were used to illustrate how agreements can be broken.
A properly defined contract with a black box helps identify errors and
limit their impact.
I don’t know how to explain it more simply. These are fundamental
elements of software design.