November 15th, 2021


Laravel Queues [ Pocket Guide ]

Mohamed Said from the Laravel core team recently launched Laravel Queue Mastery on Laracasts. The series has 12 episodes and a total of 1h 41m of video content. In this article, we make a brief recap of the most important concepts explained by Mohamed.

Dispatching and Running Jobs

You can create a new job by running the following artisan command:

php artisan make:job JobName

This will generate a new class file under app/Jobs/JobName.php.

Queues to Database

In order for the queues to use your database, you have to configure the queue connection in config/queue.php to point to your database. Once that is configured, you have to create two tables in your database: jobs and failed_jobs. You can do that by running the following two artisan commands:

php artisan queue:table
php artisan migrate

To start a worker you can use the following artisan command:

php artisan queue:work

To dispatch a job to the queue, you call dispatch() on the job class.
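For example, assuming the JobName class generated above ($podcast is just an illustrative constructor argument):

use App\Jobs\JobName;

JobName::dispatch();

// Or, with data passed to the job's constructor:
JobName::dispatch($podcast);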


Configuring Jobs

delay(5) – the worker will wait 5 seconds after a job is dispatched before processing it.

public $timeout = 1 – we can decide when a job is considered and marked as failed by setting a timeout value (how long a worker should wait before it terminates the job). In this example, the job is terminated after 1 second.

public $tries = 3 – we can configure how many times a worker should attempt to process a job before considering it a failure.

public function retryUntil() {…} – Using this method, we can tell the worker to keep retrying the job until a given point in time (for example, now()->addMinutes(10)), instead of limiting the number of attempts.

public $backoff = 2 – how long to wait before trying to process a failed job again. In this example, the worker will wait 2 seconds before retrying.
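Put together, a job class using these options might look like the following sketch (the class name and all values are illustrative, and you would normally use either $tries or retryUntil(), not both):

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;

class ProcessPayment implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable;

    public $timeout = 120;  // seconds before the worker terminates the job
    public $tries = 3;      // attempts before the job is marked as failed
    public $backoff = 2;    // seconds to wait between attempts

    public function handle()
    {
        // ...
    }
}

// Delay processing by 5 seconds:
ProcessPayment::dispatch()->delay(5);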

A worker can process a single job at any given time. To process jobs faster, you need to start more workers. The number of workers you can (or should) start depends on the server's resources, as well as how many other services or processes are running on the server at the same time.

Prioritizing and Defining Custom Queues

If you want to dispatch jobs to a custom / higher priority queue, you can use ->onQueue('queue-name');. When starting the worker, you can specify that jobs from 'queue-name' should be processed first; after that, the worker carries on with the remaining jobs in the default queue:

php artisan queue:work --queue=queue-name,default
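On the dispatching side, a minimal sketch (the queue name matches the worker command above):

\App\Jobs\JobName::dispatch()->onQueue('queue-name');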

Handling Attempts and Failures

public $backoff = [2, 10]; – The first value (2) tells the worker to wait 2 seconds before the second attempt. The second value (10) tells the worker to wait 10 seconds before the third and following attempts. If the number of tries is greater than the number of values in the $backoff array, Laravel will use the last value for the remaining tries.

If a job fails after all of the allowed attempts, it is stored in the database for further investigation, or to be manually put back on the queue. To manually retry a failed job, we grab its UUID from the payload column of the failed_jobs table and run the following artisan command:

php artisan queue:retry <UUID>  # UUID should be the actual UUID from the database

If the service we are calling has a rate limiter, we can push the job back onto the queue to be retried later:

return $this->release(30); // This will overwrite the $backoff values

This job will be released back onto the queue and not be retried for another 30 seconds.

Similar to public $tries, we can configure how many times a job may be retried when an exception is thrown, using public $maxExceptions. We can also decide what happens when a job finally fails by adding a public function failed($e) {…} method to the job class; the exception that caused the failure is passed in as $e.
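A sketch combining these ideas inside a job class (the rate-limit check is a hypothetical helper):

public $tries = 10;
public $maxExceptions = 3; // fail after 3 thrown exceptions, even if $tries allows more

public function handle()
{
    if ($this->rateLimitHit()) {     // hypothetical helper
        return $this->release(30);   // back onto the queue, retry in 30 seconds
    }

    // ...
}

public function failed($e)
{
    // Notify the team, clean up partial state, etc.
}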

Dispatching Workflows

Chains – A chain is a queued workflow (a group of jobs) whose jobs run one after another. If we implement a deployment workflow, we dispatch 3 jobs that are processed in order: PullRepo, RunTests, Deploy.

$chain = [
    new \App\Jobs\PullRepo(),
    new \App\Jobs\RunTests(),
    new \App\Jobs\Deploy(),
];

If one of the jobs fails, the entire chain will be removed from the queue and won’t continue. We can dispatch the chain using the Bus facade.
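Dispatching the chain above might look like this:

use Illuminate\Support\Facades\Bus;

Bus::chain($chain)->dispatch();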


Batches - A batch is a group of jobs that can run in parallel.

To be able to dispatch a batch workflow, you have to use the Batchable trait inside your job classes. You also have to create the job_batches table in your database:

php artisan queue:batches-table
php artisan migrate
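With the table in place, a batch can be dispatched through the Bus facade (the job names are illustrative):

use Illuminate\Support\Facades\Bus;

Bus::batch([
    new ProcessImage(1),
    new ProcessImage(2),
    new ProcessImage(3),
])->dispatch();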

When a job inside a batch fails, the entire batch will be marked as “canceled”.

public function handle() {
    if ($this->batch()->canceled()) {
        return;
    }

    // ...
}

If you want, you can change the default behaviour so the entire batch is not canceled when one of the jobs fails, by calling ->allowFailures() when dispatching the batch.


More Complex Workflows

If any of the jobs within a batch or a chain fails, we can catch the exception by running code inside a closure passed to ->catch():

Bus::batch([...])
    ->allowFailures()
    ->catch(function ($batch, $e) {
        // ...
    })
    ->dispatch();

We can dispatch on different connections using ->onConnection('your-connection');. We can also run code after all of the jobs within a batch were successfully processed, by passing a closure to ->then(function ($batch) {…});. Even if some of the jobs fail, we can still execute code once the batch has finished using ->finally(function ($batch) {…});.
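For example (the callback bodies are placeholders):

Bus::batch([...])
    ->then(function ($batch) {
        // Runs only if every job completed successfully
    })
    ->finally(function ($batch) {
        // Runs once the batch has finished, success or not
    })
    ->dispatch();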

We can also dispatch a chain within a batch: an array inside the $batch array represents a chain. In this case, when the batch is dispatched, both chains will be executed in parallel.

$batch = [
    // Chain 1
    [
        new Job1,
        new Job2,
        new Job3,
    ],
    // Chain 2
    [
        new Job4,
        new Job5,
        new Job6,
    ],
];

We can dispatch a batch inside a chain by putting the batch inside of a closure:

Bus::chain([
    new \App\Jobs\Deploy(),
    function () {
        Bus::batch([new Job1, new Job2, new Job3])->dispatch();
    },
])->dispatch();

Controlling And Limiting Jobs

Race condition – when two or more workers try to process the same job (or touch the same shared resource) at the same time. To avoid this behaviour, we can acquire a lock:

Cache::lock('deployments')->block(10, function () {
    // code we want to run when the lock is acquired
});

Redis concurrency limiter

Redis::funnel('deployments')
    ->limit(5)
    ->block(10)
    ->then(function () {...});

Redis throttle

Control the number of locks that can be acquired with the given key during a given period of time:

Redis::throttle('deployments')
    ->allow(10)
    ->every(60)
    ->block(10)
    ->then(function () {...});

WithoutOverlapping middleware

public function middleware() {
    return [
        (new WithoutOverlapping('deployments'))->releaseAfter(10),
    ];
}

More Job Configuration

ShouldBeUnique interface – Laravel uses the job class name as the key for checking whether a job is already being processed by a worker. To overwrite the default key, we can create a method called uniqueId():

public function uniqueId() {
    return 'deployments';
}

By default, the unique lock is released when the job finishes processing. We can also specify for how long the unique lock should stay alive:

public function uniqueFor() { return 60; }

ShouldBeUniqueUntilProcessing interface – the job will be unique only until a worker starts processing it.

ThrottlesExceptions middleware:

new ThrottlesExceptions(10)

Once the job has thrown 10 exceptions, this stops it from being put back onto the queue immediately; further attempts are delayed until the throttle window expires. This might be useful when working with 3rd party APIs.
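Like WithoutOverlapping, this middleware is returned from the job's middleware() method; the 5-minute decay window here is illustrative:

use Illuminate\Queue\Middleware\ThrottlesExceptions;

public function middleware()
{
    // After 10 exceptions, wait 5 minutes before allowing further attempts
    return [new ThrottlesExceptions(10, 5)];
}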

Designing Reliable Jobs

->afterCommit() – the job is pushed onto the queue only after the currently open database transaction has been committed. If the transaction is rolled back, the job is discarded. We can enforce this behaviour globally by setting "after_commit" to true on the connection in config/queue.php.
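A sketch of the per-dispatch form:

use Illuminate\Support\Facades\DB;

DB::transaction(function () {
    // ... write some records ...

    // The job is only pushed onto the queue if the transaction commits
    \App\Jobs\JobName::dispatch()->afterCommit();
});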

ShouldBeEncrypted interface – will encrypt the data passed to the __construct() method so it is not visible as plain text inside the payload field in your database.



Marian Pop

PHP / Laravel Developer. Writing and maintaining @LaravelMagazine. Host of "The Laravel Magazine Podcast". Pronouns: vi/vim.

