Queue
Queue moves work out of the current request. It is a good fit for tasks that should be retried, delayed, rate limited, retained, or handed to a provider-native queue instead of running inline.
ViteHub discovers queues from `server/queues/**`.
Define a queue with `createQueue(options?)(handler)` or `defineQueue(handler, options?)`, then enqueue work with `runQueue()`.
Getting started
Install the package
```sh
# Platformatic
pnpm add https://pkg.pr.new/nuxt-hub/agent/@vitehub/queue@main @platformatic/job-queue

# Memory (no extra provider package)
pnpm add https://pkg.pr.new/nuxt-hub/agent/@vitehub/queue@main

# Vercel
pnpm add https://pkg.pr.new/nuxt-hub/agent/@vitehub/queue@main @vercel/queue

# Netlify
pnpm add https://pkg.pr.new/nuxt-hub/agent/@vitehub/queue@main @netlify/async-workloads

# QStash
pnpm add https://pkg.pr.new/nuxt-hub/agent/@vitehub/queue@main @upstash/qstash
```
The memory provider processes jobs in-process and does not require an extra npm package.
Configure a provider
Set `queue.provider` for Platformatic, QStash, or explicit overrides.

```ts
export default defineNuxtConfig({
  modules: ['@vitehub/queue/nuxt'],
  queue: {
    provider: 'platformatic',
  },
})
```
```ts
export default defineNuxtConfig({
  modules: ['@vitehub/queue/nuxt'],
})
```
```ts
export default defineNuxtConfig({
  modules: ['@vitehub/queue/nuxt'],
  queue: {
    provider: 'qstash',
    token: process.env.QSTASH_TOKEN!,
    destination: 'https://example.com/api/queue/welcome',
  },
})
```
```ts
export default defineNuxtConfig({
  modules: ['@vitehub/queue/nuxt'],
  queue: {
    provider: 'memory',
  },
})
```
Define a queue
Create a queue in `server/queues/**`. The file name becomes the queue name.

```ts
// server/queues/welcome-email.ts — the file name "welcome-email" is the queue name
import { createQueue } from '@vitehub/queue'

export default createQueue({
  cache: false,
})(async (job) => {
  return {
    id: job.id,
    queued: true,
    payload: job.payload,
  }
})
```
Send a message
Import `runQueue()` from `@vitehub/queue` and pass the queue name with a payload.

```ts
import { defineEventHandler, readBody } from 'h3'
import { runQueue } from '@vitehub/queue'

export default defineEventHandler(async (event) => {
  const body = await readBody<{ email?: string }>(event)
  return runQueue('welcome-email', {
    id: `welcome-${Date.now()}`,
    payload: {
      email: body.email,
    },
  })
})
```
Reach for the native handle only when you need it
Most apps only need `runQueue()`. Use `getQueue()` when you need provider-native methods such as batch sends, polling clients, or signature helpers.

```ts
import { defineEventHandler } from 'h3'
import { getQueue } from '@vitehub/queue'

export default defineEventHandler(async () => {
  const queue = await getQueue('welcome-email')
  return {
    provider: queue.provider,
  }
})
```
Public API
| Function | Use it for |
|---|---|
| `createQueue(options?)(handler)` | Register one named queue under `server/queues/**`. |
| `defineQueue(handler, options?)` | Register one named queue with the direct `(handler, options?)` form. |
| `runQueue(name, { id?, payload, ...options })` | Enqueue work without using the provider-native client directly. |
| `getQueue(name?)` | Resolve the provider-native queue handle when you need advanced methods. |
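The curried `createQueue` form is shown earlier on this page; as a sketch, the equivalent queue file using the direct `defineQueue(handler, options?)` form might look like this (the file path is hypothetical, chosen to match the `welcome-email` examples):

```ts
// server/queues/welcome-email.ts (hypothetical path) — the same queue as the
// createQueue example, written with the direct defineQueue(handler, options?) form.
import { defineQueue } from '@vitehub/queue'

export default defineQueue(
  async (job) => {
    return {
      id: job.id,
      queued: true,
      payload: job.payload,
    }
  },
  { cache: false },
)
```

Both forms register one named queue; which you pick is a matter of style.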
Type reference
QueueJob<TPayload>
The handler receives a QueueJob object with these fields:
| Field | Type | Description |
|---|---|---|
| `id` | `string` | Unique job identifier, set when enqueuing. |
| `payload` | `TPayload` | The data you passed to `runQueue()`. |
| `attempts` | `number` | How many times the job has been attempted. |
| `signal` | `AbortSignal` | Abort signal tied to the job timeout. |
The handler can return any value or nothing at all. ViteHub does not enforce a return type.
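To illustrate how a handler might use these fields, here is a self-contained sketch. The `QueueJob` interface below is a local mirror of the table above (not the package's own type export), and the mock invocation at the end is purely illustrative:

```ts
// Local mirror of the documented QueueJob shape — for illustration only.
interface QueueJob<TPayload> {
  id: string
  payload: TPayload
  attempts: number
  signal: AbortSignal
}

// A retry-aware handler: it bails out if the job was already aborted,
// and reports whether this run is a retry (attempts > 1).
async function handleWelcome(job: QueueJob<{ email: string }>) {
  if (job.signal.aborted) {
    throw new Error(`job ${job.id} aborted before start`)
  }
  return {
    email: job.payload.email,
    retry: job.attempts > 1,
  }
}

// Invoking the handler with a mock job object:
const result = await handleWelcome({
  id: 'welcome-1',
  payload: { email: 'ada@example.com' },
  attempts: 1,
  signal: new AbortController().signal,
})
// result: { email: 'ada@example.com', retry: false }
```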
QueueDefinitionOptions
The options object passed to createQueue() or the second argument to defineQueue() combines portable queue behavior with the provider-specific overrides that ViteHub wires directly:
| Option | Type | Description |
|---|---|---|
| `cache` | `boolean` | Enable or disable client caching for this queue. Defaults to `true`. |
| `callbackOptions` | `VercelQueueCallbackOptions` | Per-queue Vercel callback settings forwarded to `queue.callback()`. |
| `concurrency` | `number` | Maximum parallel message processing. |
| `config` | `NetlifyAsyncWorkloadConfig` | Per-queue Netlify Async Workloads handler config forwarded to `queue.createHandler()`. |
| `consumer` | `string` | Per-queue Vercel consumer name. Defaults to `"default"`. |
| `destination` | `string` | Per-queue QStash callback destination that overrides `queue.destination`. |
| `onError` | `(error, message) => void` | Called when a message fails. |
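To see how the portable options from this table compose, here is a small self-contained sketch. The interface is a local, partial mirror of the table (portable fields only, written out here rather than imported from the package):

```ts
// Partial local mirror of QueueDefinitionOptions — portable fields only,
// for illustration rather than the package's own type export.
interface PortableQueueOptions {
  cache?: boolean // client caching for this queue, defaults to true
  concurrency?: number // maximum parallel message processing
  onError?: (error: unknown, message: unknown) => void // failure callback
}

// One plausible combination: keep caching on, cap parallelism, log failures.
const options: PortableQueueOptions = {
  cache: true,
  concurrency: 4,
  onError: (error, message) => {
    console.error('queue message failed', { error, message })
  },
}
```

This is the shape you would pass as `createQueue(options)` or as the second argument to `defineQueue(handler, options)`.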
Use `verifyQStashSignature()` in the callback route that receives QStash requests.
How configuration works
Queue has one global layer and one queue-local layer:
- Top-level `queue` config in `nitro.config.ts` selects the provider and sets app-wide integration defaults.
- `createQueue(options?)(handler)` and `defineQueue(handler, options?)` configure one queue definition with portable behavior plus the provider-specific overrides ViteHub owns for Vercel, Netlify, and QStash.
`runQueue()` is the enqueue call. Pass the message payload and delivery options for one send. `getQueue()` resolves the provider-native handle when you need advanced methods.
```ts
export default defineNitroConfig({
  queue: {
    provider: 'platformatic',
  },
})
```
```ts
import { createQueue } from '@vitehub/queue'

export default createQueue({
  cache: false,
})(async (job) => {
  return {
    email: job.payload?.email,
  }
})
```