recompiled.

This commit is contained in:
Marek Majkowski 2010-10-07 14:46:53 +01:00
parent 65aa89812d
commit b37aa98322
7 changed files with 206 additions and 132 deletions

Four binary image files added under `_img/` (764 B, 2.8 KiB, 5.0 KiB and 3.2 KiB); previews not shown.

python/tutorial-one.md

@ -14,14 +14,16 @@ Learning RabbitMQ, part 1 ("Hello world!")
Throughout this tutorial, we'll teach you the basic concepts required for
creating RabbitMQ applications. The tutorial will be illustrated with
code snippets written in [Python](http://www.python.org/). But don't worry if
you don't know this language - the core ideas are the same for other languages.
code snippets written in [Python](http://www.python.org/) and executed on Linux.
But don't worry if you don't know this language - the core ideas are the same
for other languages.
This tutorial consists of three parts:
* First we're going to write the simplest possible "Hello World" example.
* Next we'll try to use Rabbit as a simple "Work queue" server.
* Finally, we'll discuss the "Publish-subscribe" pattern.
* Next we'll try to use Rabbit as [a simple "Work queue" server](http://github.com/rabbitmq/rabbitmq-tutorials/blob/master/python/tutorial-two.md).
* Finally, we'll discuss how to [broadcast a message](http://github.com/rabbitmq/rabbitmq-tutorials/blob/master/python/tutorial-three.md).
You need to have the RabbitMQ server installed to go through this tutorial.
If you haven't installed it yet you can follow the
@ -46,7 +48,8 @@ If you have installed RabbitMQ you should see something like:
> #### Where to get help
>
> If you're having trouble going through this tutorial you can post a message to
> [rabbitmq-discuss mailing list](https://lists.rabbitmq.com/cgi-bin/mailman/listinfo/rabbitmq-discuss).
> the [rabbitmq-discuss mailing list](https://lists.rabbitmq.com/cgi-bin/mailman/listinfo/rabbitmq-discuss)
> or join the [#rabbitmq](irc://irc.freenode.net/rabbitmq) IRC channel.
Introduction
@ -54,7 +57,7 @@ Introduction
RabbitMQ is a message broker. The principal idea is pretty simple: it accepts
and forwards messages. You can think about it as a post office: when you send
mail to the post box and you're pretty sure that mr postman will eventually
mail to the post box, you can be pretty sure that Mr. Postman will eventually
deliver the mail to your recipient. Using this metaphor, RabbitMQ is a post box,
a post office and a postman.
@ -65,7 +68,7 @@ blobs of data - _messages_.
RabbitMQ uses weird jargon, but it's simple once you get it. For example:
* _Producing_ means nothing more than sending. A program that sends messages
is a _producer_.
is a _producer_. We'll draw it like this, with "P":
<center><div class="dot_bitmap">
@ -79,16 +82,17 @@ RabbitMQ uses a weird jargon, but it's simple once you'll get it. For example:
is not bound by any limits; it can store as many messages as you
like - it's essentially an infinite buffer. Many _producers_ can send
messages that go to one queue, and many _consumers_ can try to
receive data from one _queue_.
receive data from one _queue_. A queue will be drawn like this, with
its name above it:
<center><div class="dot_bitmap">
<img src="http://github.com/rabbitmq/rabbitmq-tutorials/raw/master/_img/9bdb70c65ab8b2aa1f6b0b85c2931a54.png" alt="Dot graph" width="76" height="29" />
<img src="http://github.com/rabbitmq/rabbitmq-tutorials/raw/master/_img/3c4ce95dcbad1b510cfd4c2ee86d8a35.png" alt="Dot graph" width="116" height="87" />
</div></center>
* _Consuming_ has a simmilar meaning to receiving. _Consumer_ is a program
that mostly waits to receive messages.
* _Consuming_ has a similar meaning to receiving. A _consumer_ is a program
that mostly waits to receive messages. In our drawings it's shown with "C":
<center><div class="dot_bitmap">
@ -108,16 +112,20 @@ sends a message and one that receives and prints it.
Our overall design will look like:
<center><div class="dot_bitmap">
<img src="http://github.com/rabbitmq/rabbitmq-tutorials/raw/master/_img/b3f3e23007aca653c04def0f8e859d18.png" alt="Dot graph" width="363" height="125" />
<img src="http://github.com/rabbitmq/rabbitmq-tutorials/raw/master/_img/8ba5b12cebea4a4a081e50e3789a5114.png" alt="Dot graph" width="338" height="125" />
</div></center>
The producer sends messages to the "test" queue. The consumer receives
messages from that queue.
> #### RabbitMQ libraries
>
> RabbitMQ speaks the AMQP protocol. To use Rabbit you'll need a library that
> understands the same protocol as Rabbit. There is a choice of libraries
> for almost every programming language. Python is no different and there is
> a bunch of libraries to choose from:
>
> * [py-amqplib](http://barryp.org/software/py-amqplib/)
> * [txAMQP](https://launchpad.net/txamqp)
> * [pika](http://github.com/tonyg/pika)
@ -125,9 +133,9 @@ Our overall design will look like:
> In this tutorial we're going to use `pika`. To install it you can use
> [`pip`](http://pip.openplans.org/) package management tool:
>
> $ sudo pip install -e git+http://github.com/tonyg/pika.git#egg=pika
> $ sudo pip install -e git+http://github.com/tonyg/pika.git#egg=pika
>
>If you don't have `pip`, you may want to install it.
> If you don't have `pip`, you may need to install it.
>
> * On Ubuntu:
>
@ -142,13 +150,14 @@ Our overall design will look like:
<center><div class="dot_bitmap">
<img src="http://github.com/rabbitmq/rabbitmq-tutorials/raw/master/_img/090975cc54ab88a30c0bb4d47611b674.png" alt="Dot graph" width="278" height="125" />
<img src="http://github.com/rabbitmq/rabbitmq-tutorials/raw/master/_img/89f8023288762c3383ef0ca36e039944.png" alt="Dot graph" width="253" height="125" />
</div></center>
Our first program `send.py` will send a single message to the queue.
The first thing we need to do is connect to RabbitMQ server.
The first thing we need to do is to establish a connection with
the RabbitMQ server.
<div><pre><code class='python'>#!/usr/bin/env python
import pika
@ -159,27 +168,26 @@ connection = pika.AsyncoreConnection(pika.ConnectionParameters(
channel = connection.channel()</code></pre></div>
Whenever we send a message we need to make sure the recipient queue exists.
RabbitMQ will just trash the message if can't deliver it. So, we need to
create a queue to which the message will be delivered. Let's name this queue
_test_:
We're connected now. Next, before sending we need to make sure the
recipient queue exists. If we send a message to a non-existing location,
RabbitMQ will just trash the message. Let's create a queue to which
the message will be delivered, and name it _test_:
<div><pre><code class='python'>channel.queue_declare(queue='test')</code></pre></div>
At that point we're ready to send a message. Our first message will
contain a string _Hello World!_ and we want to send it to our _test_
queue.
just contain the string _Hello World!_ and we want to send it to our
_test_ queue.
In RabbitMQ a message never goes directly to the queue, it always
In RabbitMQ a message can never be sent directly to the queue; it always
needs to go through an _exchange_. But let's not get dragged down by the
details - you can read more about _exchanges_ in third part of this
tutorial. All we need to know now is how to use a default exchange
identified by an empty string. That exchange is a special one that
details - you can read more about _exchanges_ in [the third part of this
tutorial](http://github.com/rabbitmq/rabbitmq-tutorials/blob/master/python/tutorial-three.md). All we need to know now is how to use the default exchange
identified by an empty string. This exchange is special - it
allows us to specify exactly to which queue the message should go.
The queue name is specified by the `routing_key` variable:
The queue name needs to be specified in the `routing_key` parameter:
<div><pre><code class='python'>channel.basic_publish(exchange='',
routing_key='test',
@ -188,14 +196,12 @@ print &quot; [x] Sent 'Hello World!'&quot;</code></pre></div>
[(full send.py source)](http://github.com/rabbitmq/rabbitmq-tutorials/blob/master/python/send.py)
### Receiving
<center><div class="dot_bitmap">
<img src="http://github.com/rabbitmq/rabbitmq-tutorials/raw/master/_img/e8c961b5097209b7e18281754f6403a4.png" alt="Dot graph" width="278" height="125" />
<img src="http://github.com/rabbitmq/rabbitmq-tutorials/raw/master/_img/bc21efaa5981805f4038b3065e15e6b4.png" alt="Dot graph" width="253" height="125" />
</div></center>
@ -203,27 +209,26 @@ print &quot; [x] Sent 'Hello World!'&quot;</code></pre></div>
Our second program `receive.py` will receive messages from the queue and print
them on the screen.
The code responsible for connecting to Rabbit is the same as the previous example.
You can copy the first 7 lines.
Again, first we need to connect to the RabbitMQ server. The code
responsible for connecting to Rabbit is the same as previously.
# ... connection code is the same, copy first 7 lines from send.py ...
Just like before, in the beginning we must make sure that the
queue exists. Creating a queue using `queue_declare` is idempotent - you can
run the command as many times you like, and only one queue will be created.
The next step, just like before, is to make sure that the
queue exists. Creating a queue using `queue_declare` is idempotent - we can
run the command as many times as we like, and only one queue will be created.
<div><pre><code class='python'>channel.queue_declare(queue='test')</code></pre></div>
You may ask why to declare queue again - we have already declared it
in our previous code. We could have avoided that if we always run the
`send.py` program before this one. But we're not sure yet which
You may ask why we declare the queue again - we have already declared it
in our previous code. We could have avoided that if we were sure
that the queue already exists. For example if the `send.py` program was
run before. But we're not yet sure which
program to run first. In such a case it's good practice to repeat
declaring the queue in both programs.
> #### Listing queues
>
> Sometimes you may want to see what queues does RabbitMQ store and how many
> You may want to see what queues RabbitMQ stores and how many
> messages are in them. You can do it using the `rabbitmqctl` tool:
>
> $ sudo rabbitmqctl list_queues
@ -233,17 +238,19 @@ declaring the queue in both programs.
Receiving messages from the queue is a bit more complex. Whenever we receive
a message, a `callback` function is called. In our case
this function will print on the screen the contents of the message.
Receiving messages from the queue is more complex. It works by subscribing
a `callback` function to a queue. Whenever we receive
a message, this `callback` function is called by the Pika library.
In our case this function will print on the screen the contents of
the message.
<div><pre><code class='python'>def callback(ch, method, header, body):
print &quot; [x] Received %.20r&quot; % (body,)</code></pre></div>
Next, we need to tell RabbitMQ that this particular callback function is
interested in messages from our _test_ queue:
Next, we need to tell RabbitMQ that this particular callback function should
receive messages from our _test_ queue:
<div><pre><code class='python'>channel.basic_consume(callback,
queue='test',
@ -254,6 +261,8 @@ For that command to succeed we must be sure that a queue which we want
to subscribe to exists. Fortunately we're confident about that - we've
created a queue above - using `queue_declare`.
The `no_ack` parameter will be described [later on](http://github.com/rabbitmq/rabbitmq-tutorials/blob/master/python/tutorial-two.md).
And finally, we enter a never-ending loop that waits for data and runs callbacks
whenever necessary.
@ -261,10 +270,56 @@ whenever necessary.
pika.asyncore_loop()</code></pre></div>
[(full receive.py source)](http://github.com/rabbitmq/rabbitmq-tutorials/blob/master/python/receive.py)
### Putting it all together
Full code for `send.py`:
<div><pre><code class='python'>#!/usr/bin/env python
import pika
connection = pika.AsyncoreConnection(pika.ConnectionParameters(
host='127.0.0.1',
credentials=pika.PlainCredentials('guest', 'guest')))
channel = connection.channel()
channel.queue_declare(queue='test')
channel.basic_publish(exchange='',
routing_key='test',
body='Hello World!')
print &quot; [x] Sent 'Hello World!'&quot;</code></pre></div>
[(send.py source)](http://github.com/rabbitmq/rabbitmq-tutorials/blob/master/python/send.py)
Full `receive.py` code:
<div><pre><code class='python'>#!/usr/bin/env python
import pika
connection = pika.AsyncoreConnection(pika.ConnectionParameters(
host='127.0.0.1',
credentials=pika.PlainCredentials('guest', 'guest')))
channel = connection.channel()
channel.queue_declare(queue='test')
print ' [*] Waiting for messages. To exit press CTRL+C'
def callback(ch, method, header, body):
print &quot; [x] Received %.20r&quot; % (body,)
channel.basic_consume(callback,
queue='test',
no_ack=True)
pika.asyncore_loop()</code></pre></div>
[(receive.py source)](http://github.com/rabbitmq/rabbitmq-tutorials/blob/master/python/receive.py)
Now we can try out our programs. First, let's send a message using our
`send.py` program:
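The shell transcript that follows is cut off by the hunk; presumably it looks roughly like this (the exact invocation is an assumption, but the output string comes straight from the `print` in `send.py`):
    $ python send.py
     [x] Sent 'Hello World!'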
@ -278,10 +333,10 @@ Let's receive it:
[x] Received 'Hello World!'
Hurray! We were able to send our first message through RabbitMQ. As you might
have noticed, the `receive.py` program didn't exit. It will stay ready to
receive further messages. Try to run `send.py` in a new terminal!
have noticed, the `receive.py` program doesn't exit. It will stay ready to
receive further messages. Try to run `send.py` again in a new terminal!
We've learned how to send and receive a message from a named
queue. It's time to move on to part 2 of this tutorial and build a
simple _task queue_.
queue. It's time to move on to [part 2](http://github.com/rabbitmq/rabbitmq-tutorials/blob/master/python/tutorial-two.md)
and build a simple _task queue_.

python/tutorial-three.md

@ -9,22 +9,22 @@ Learning RabbitMQ, part 3 (Broadcast)
</div></center>
In previous part of this tutorial we've learned how to create a task
queue. The idea behind a task queue is that a task should be delivered
to exactly one worker. In this part we'll do something completely
In the [previous part](http://github.com/rabbitmq/rabbitmq-tutorials/blob/master/python/tutorial-two.md) of this tutorial we've learned how
to create a task queue. The core assumption behind a task queue is that each task
is delivered to exactly one worker. In this part we'll do something completely
different - we'll try to deliver a message to multiple consumers. This
pattern is known as "publish-subscribe".
To illustrate this, in this tutorial we're going to build a simple
logging system. It will consist of two programs - one will emit log
messages and one will receive them.
logging system. It will consist of two programs - the first will emit log
messages and the second will receive and print them.
In our logging system we'll every running copy of the receiver program
In our logging system every running copy of the receiver program
will be able to get the same messages. That way we'll be able to run one
receiver and direct the logs to disk. In the same time we'll be able to run
another reciver and see the same logs on the screen.
receiver and direct the logs to disk; at the same time we'll be able to run
another receiver and see the same logs on the screen.
Essentially, crated log messages are going to be broadcasted to all
Essentially, emitted log messages are going to be broadcast to all
the receivers.
@ -37,15 +37,15 @@ in Rabbit.
Let's quickly review what we've learned:
* _Producer_ is a name for user application that sends messages.
* A _producer_ is a user application that sends messages.
* A _queue_ is a buffer that stores messages.
* _Consumer_ is a name for user application that receives messages.
* A _consumer_ is a user application that receives messages.
The core idea behind the messaging model in Rabbit is that the
The core idea in the messaging model in Rabbit is that the
producer never sends any messages directly to the queue. Actually,
quite often the producer doesn't even know that a message won't be
delivered to any queue!
quite often the producer doesn't even know if a message will be
delivered to any queue at all!
Instead, the producer can only send messages to an _exchange_. An
exchange is a very simple thing. On one side it receives messages from
@ -64,7 +64,7 @@ defined by the _exchange type_.
There are a few exchange types available: `direct`, `topic`,
`headers` and `fanout`. We'll focus on the last one - the
fanout. Let's create an exchange of that type, and name it `logs`:
fanout. Let's create an exchange of that type, and call it `logs`:
<div><pre><code class='python'>channel.exchange_declare(exchange='logs',
type='fanout')</code></pre></div>
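Once the `logs` exchange exists, publishing to it is just a matter of naming it in `basic_publish`. A minimal sketch - the empty `routing_key` is an assumption that relies on fanout exchanges ignoring it, and the body string is only an example (the full `emit_log.py` later in this diff publishes to the same exchange):
<div><pre><code class='python'>channel.basic_publish(exchange='logs',
                      routing_key='',
                      body='Hello World!')</code></pre></div>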
@ -90,7 +90,7 @@ queues it knows. And that's exactly what we need for our logger.
> ...done.
>
> You can see a few `amq.` exchanges. They're created by default, but with a
> bit of good luck you'll never need to use them.
> bit of luck you'll never need to use them.
<div></div>
@ -101,33 +101,31 @@ queues it knows. And that's exactly what we need for our logger.
> because we were using the default `""` _empty string_ (nameless) exchange.
> Remember how publishing worked:
>
> chnnel.basic_publish(exchange='',
> routing_key='test',
> body=message)
> channel.basic_publish(exchange='',
> routing_key='test',
> body=message)
>
> The _empty string_ exchange is a special exchange: every queue is connected
> to it using its queue name as a key. When you publish a message to the
> nameless exchange it will be routed to the queue with name specified
> by `routing_key`.
> The _empty string_ exchange is special: a message is
> routed to the queue whose name is specified by `routing_key`.
Temporary queues
----------------
In previous tutorial parts we were using a queue which had a name -
`test` in first `test_dur` in second tutorial. Being able to name a
As you may remember, previously we were using queues which had specified names -
`test` in the first tutorial and `task_queue` in the second. Being able to name a
queue was crucial for us - we needed to point the workers to the same
queue. Essentially, giving a queue a name is important when you don't
want to loose any messages if the consumer disconnects.
want to lose any messages when the consumer disconnects.
But that's not true for our logger. We want to hear only about the
currently flowing log messages, not about the old
ones. To solve that problem we need two things.
First, whenever we connect the queue should be new and empty. To do it
we could just use random queue name, or, even better - let server to
choose a random unique queue name. We can do it by not supplying the
First, whenever we connect to Rabbit we need a fresh, empty queue. To do that
we could create a queue with a random name, or, even better - let the server
choose a random queue name for us. We can do it by not supplying the
`queue` parameter to `queue_declare`:
<div><pre><code class='python'>result = channel.queue_declare()</code></pre></div>
@ -161,7 +159,7 @@ between exchange and a queue is called a _binding_.
queue=result.queue)</code></pre></div>
From now on the `logs` exchange will broadcast the messages also to
From now on the `logs` exchange will broadcast all the messages also to
our queue.
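The hunk above shows only the tail of the binding call; the full call is presumably along these lines, reusing the server-generated queue name returned by `queue_declare`:
<div><pre><code class='python'>channel.queue_bind(exchange='logs',
                   queue=result.queue)</code></pre></div>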
> #### Listing bindings
@ -201,7 +199,7 @@ channel.basic_publish(exchange='logs',
body=message)
print &quot; [x] Sent %r&quot; % (message,)</code></pre></div>
[(full emit_log.py source)](http://github.com/rabbitmq/rabbitmq-tutorials/blob/master/python/emit_log.py)
[(emit_log.py source)](http://github.com/rabbitmq/rabbitmq-tutorials/blob/master/python/emit_log.py)
As you see, we avoided declaring the exchange. If the `logs` exchange
isn't created at the time this code is executed the message will be
@ -248,5 +246,8 @@ If you wish to see the logs on your screen, spawn a new terminal and run:
$ ./receive_logs.py
And of course, to emit logs type:
$ ./emit_log.py
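As promised earlier, one receiver can write the log stream to disk while another prints it to the screen; a simple way to try that is plain shell redirection (the file name here is just an example):
    $ ./receive_logs.py > logs_from_rabbit.log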

python/tutorial-two.md

@ -11,15 +11,16 @@ Learning RabbitMQ, part 2 (Task Queue)
In the first part of this tutorial we've learned how to send messages
to and receive from a named queue. In this part we'll create a
_Task Queue_ to distribute time-consuming work across multiple
In the [first part of this tutorial](http://github.com/rabbitmq/rabbitmq-tutorials/blob/master/python/tutorial-one.md) we've learned how to send
and receive messages from a named queue. In this part we'll create a
_Task Queue_ that will be used to distribute time-consuming work across multiple
workers.
The main idea behind Task Queues (aka: _Work Queues_) is to avoid
doing resource intensive tasks immediately. Instead, we encapsulate
the task in a message and put it on to the queue. A worker process
will pop the task and eventually execute the job.
doing resource-intensive tasks immediately. Instead we schedule a task to
be done later on. We encapsulate a _task_ as a message and save it to
the queue. A worker process running in the background will pop the tasks
and eventually execute the job.
This concept is especially useful in web applications where it's
impossible to handle a complex task during a short HTTP request
@ -30,13 +31,13 @@ Preparations
------------
In the previous part of this tutorial we were sending a message containing
"Hello World!" string. In this part we'll be sending strings that
stand for complex tasks. As we don't have any real hard tasks, like
image to be resized or pdf files to be rendered, let's fake it by just
"Hello World!" string. Now we'll be sending strings that
stand for complex tasks. We don't have any real hard tasks, like
image to be resized or pdf files to be rendered, so let's fake it by just
pretending we're busy - by using `time.sleep()` function. We'll take
the number of dots in the string as a complexity, every dot will
account for one second of "work". For example, a fake task described
by `Hello!...` will take three seconds.
by the `Hello!...` string will take three seconds.
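The worker's side of this convention isn't visible in the hunks below, so here is a minimal sketch of what such a callback might look like, assuming one second of `time.sleep()` per dot (the callback signature is the one shown in the hunk header further down):
<div><pre><code class='python'>import time
def callback(ch, method, header, body):
    print &quot; [x] Received %r&quot; % (body,)
    time.sleep(body.count('.'))   # fake one second of work per dot
    print &quot; [x] Done&quot;</code></pre></div>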
We need to slightly modify our `send.py` code, to allow sending
arbitrary messages from the command line:
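The modified snippet itself falls outside the hunks shown here; a minimal sketch of the change, assuming the message is simply joined from the command-line arguments (the full `new_task.py` listing further down in this diff uses the same line, and `channel` is the channel opened earlier as in `send.py`):
<div><pre><code class='python'>import sys
message = ' '.join(sys.argv[1:]) or &quot;Hello World!&quot;
channel.basic_publish(exchange='',
                      routing_key='test',
                      body=message)
print &quot; [x] Sent %r&quot; % (message,)</code></pre></div>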
@ -64,16 +65,26 @@ def callback(ch, method, header, body):
Round-robin dispatching
-----------------------
The main advantage of pushing fat tasks through the Task Queue is the
One of the advantages of using a Task Queue is the
ability to easily parallelize work. If we have too much work for us to
handle, we can just add more workers and scale easily.
First, let's try to run two `worker.py` scripts at the same time. They
will both try to get messages from the queue, but how exactly? Let's
will both get messages from the queue, but how exactly? Let's
see.
You need three consoles open. The first two will run the `worker.py`
script. These consoles will be our two consumers - C1 and C2. On the
script. These consoles will be our two consumers - C1 and C2.
shell1$ ./worker.py
[*] Waiting for messages. To exit press CTRL+C
&nbsp;
shell2$ ./worker.py
[*] Waiting for messages. To exit press CTRL+C
On the
third one we'll be publishing new tasks. Once you've started the
consumers you can publish a few messages:
@ -83,7 +94,7 @@ consumers you can produce few messages:
shell3$ ./new_task.py Fourth message....
shell3$ ./new_task.py Fifth message.....
And let's see what is delivered to our workers:
Let's see what is delivered to our workers:
shell1$ ./worker.py
[*] Waiting for messages. To exit press CTRL+C
@ -91,6 +102,8 @@ And let's see what is delivered to our workers:
[x] Received 'Third message...'
[x] Received 'Fifth message.....'
&nbsp;
shell2$ ./worker.py
[*] Waiting for messages. To exit press CTRL+C
[x] Received 'Second message..'
@ -112,25 +125,27 @@ we will loose the message it was just processing. We'll also loose all
the messages that were dispatched to this particular worker and not
yet handled.
We don't want to loose any task. If a workers dies, we'd like the task
But we don't want to lose any task. If a worker dies, we'd like the task
to be delivered to another worker.
In order to make sure a message is never lost by the worker, RabbitMQ
supports message _acknowledgments_. It's basically an information,
sent back from the consumer which tell Rabbit that particular message
In order to make sure a message is never lost, RabbitMQ
supports message _acknowledgments_. An acknowledgment is a piece of information,
sent back from the consumer, which tells Rabbit that a particular message
has been received and fully processed, and that Rabbit is free to delete
it.
If a consumer dies without sending an ack, Rabbit will understand that a
message wasn't processed fully and will dispatch it to another
message wasn't processed fully and will redispatch it to another
consumer. That way you can be sure that no message is lost, even if
the workers occasionaly die.
the workers occasionally die.
But there aren't any message timeouts, Rabbit will redispatch the
message again only when the worker connection dies.
There aren't any message timeouts; Rabbit will redispatch the
message only when the worker connection dies. It's fine even if processing
a message takes a very, very long time.
Acknowledgments are turned on by default. Though, in previous
examples we had explicitly turned them off: `no_ack=True`. It's time
Message acknowledgments are turned on by default. In previous
examples, though, we had explicitly turned them off via the `no_ack=True` flag. It's time
to remove this flag and send a proper acknowledgment from the worker,
once we're done with a task.
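The hunk below shows only the `basic_consume` context line, so here is a minimal sketch of an acknowledging callback, assuming pika's `basic_ack` is called with the message's delivery tag once the work is done:
<div><pre><code class='python'>def callback(ch, method, header, body):
    print &quot; [x] Received %r&quot; % (body,)
    # ... do the fake work here ...
    ch.basic_ack(delivery_tag=method.delivery_tag)   # tell Rabbit the message can be deleted</code></pre></div>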
@ -145,7 +160,7 @@ channel.basic_consume(callback,
Using that code we can be sure that even if you kill a worker using
CTRL+C while it was processing a message, it will won't be lost. Soon
CTRL+C while it was processing a message, nothing will be lost. Soon
after the worker dies all unacknowledged messages will be redispatched.
> #### Forgotten acknowledgment
@ -172,7 +187,8 @@ task isn't lost. But our tasks will still be lost if RabbitMQ server
dies.
When RabbitMQ quits or crashes it will forget the queues and messages
unless you tell it not to.
unless you tell it not to. Two things are required to make sure that
messages aren't lost: we need to mark both the queue and the messages as durable.
First, we need to make sure that Rabbit will never lose our `test`
queue. In order to do so, we need to declare it as _durable_:
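The declaration itself falls outside the shown hunks; presumably it is just the durable variant of the earlier call:
<div><pre><code class='python'>channel.queue_declare(queue='test', durable=True)</code></pre></div>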
@ -183,32 +199,34 @@ Although that command is correct by itself it won't work in our
setup. That's because we've already defined a queue called `test`
which is not durable. RabbitMQ doesn't allow you to redefine a queue
with different parameters and will return a hard error to any program
that tries to do that. But there is a quick workaround - let's just
declare a queue with different name, for example `test_dur`:
that tries to do that. But there is a quick workaround - let's
declare a queue with a different name, for example `task_queue`:
channel.queue_declare(queue='test_dur', durable=True)
channel.queue_declare(queue='task_queue', durable=True)
This `queue_declare` change needs to be applied to both the producer
and consumer code.
At that point we're sure that the `test_dur` queue won't be lost even
if RabbitMQ dies. Now we need to make our messages persistent - by
suppling a `delivery_mode` header with a value `2`:
At that point we're sure that the `task_queue` queue won't be lost even
if RabbitMQ dies. Now we need to mark our messages as persistent - by
supplying a `delivery_mode` header with a value of `2`:
channel.basic_publish(exchange='', routing_key="test_dur",
channel.basic_publish(exchange='', routing_key="task_queue",
body=message,
properties=pika.BasicProperties(
delivery_mode = 2, # make message persistent
))
Marking messages as persistent doesn't really guarantee that a message
will survive. Although it tells Rabbit to save message to the disk,
there is still a time window when Rabbit accepted a message and
haven't it yet saved. Also, Rabbit doesn't do `fsync(2)` for every
message - it may be just saved to caches and not really written to the
disk. The persistence guarantees are weak, but it's more than enough
for our task queue. If you need stronger guarantees you can wrap the
publishing code in a _transaction_.
> #### Note on message persistence
>
> Marking messages as persistent doesn't really guarantee that a message
> won't be lost. Although it tells Rabbit to save the message to disk,
> there is still a short time window when Rabbit has accepted a message and
> hasn't saved it yet. Also, Rabbit doesn't do `fsync(2)` for every
> message - it may just be saved to cache and not really written to the
> disk. The persistence guarantees aren't strong, but it's more than enough
> for our simple task queue. If you need stronger guarantees you can wrap the
> publishing code in a _transaction_.
Fair dispatching
@ -233,7 +251,7 @@ every consumer.
In order to defeat that we can use the `basic.qos` method with the
`prefetch_count` settings. That allows us to tell Rabbit not to give
`prefetch_count=1` setting. That allows us to tell Rabbit not to give
more than one message to a worker at a time. Or, in other words, don't
dispatch a new message to a worker until it has processed and
acknowledged the previous one.
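The call itself isn't shown in these hunks; in pika it is a one-liner, placed before `basic_consume` in the worker:
<div><pre><code class='python'>channel.basic_qos(prefetch_count=1)</code></pre></div>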
@ -256,7 +274,7 @@ connection = pika.AsyncoreConnection(pika.ConnectionParameters(
credentials=pika.PlainCredentials('guest', 'guest')))
channel = connection.channel()
channel.queue_declare(queue='test_dur', durable=True)
channel.queue_declare(queue='task_queue', durable=True)
message = ' '.join(sys.argv[1:]) or &quot;Hello World!&quot;
channel.basic_publish(exchange='', routing_key='task_queue',
@ -266,7 +284,7 @@ channel.basic_publish(exchange='', routing_key='test',
))
print &quot; [x] Sent %r&quot; % (message,)</code></pre></div>
[(full new_task.py source)](http://github.com/rabbitmq/rabbitmq-tutorials/blob/master/python/new_task.py)
[(new_task.py source)](http://github.com/rabbitmq/rabbitmq-tutorials/blob/master/python/new_task.py)
And our worker:
@ -295,12 +313,12 @@ channel.basic_consume(callback,
pika.asyncore_loop()</code></pre></div>
[(full worker.py source)](http://github.com/rabbitmq/rabbitmq-tutorials/blob/master/python/worker.py)
[(worker.py source)](http://github.com/rabbitmq/rabbitmq-tutorials/blob/master/python/worker.py)
Using message acknowledgments and `prefetch_count` you may set up
quite a decent work queue. The durabiltiy options will let the tasks
to survive even if Rabbit is killed.
quite a decent work queue. The durability options let the tasks
survive even if Rabbit is restarted.
Now we can move on to part 3 of this tutorial and learn how to
distribute the same message to many consumers.
Now we can move on to [part 3](http://github.com/rabbitmq/rabbitmq-tutorials/blob/master/python/tutorial-three.md) and learn how to
deliver the same message to many consumers.