From the previous section, we already got a taste of what it is like to use the ORM API. Next, we will look at what more we can do with it. During our journey, several of the methods we encountered used API decorators, such as @api.multi. These are important for the server to know how to handle the method. Let's recap the ones available and when they should be used.
The @api.multi decorator is used to handle recordsets with the new API and is the most frequently used. Here self is a recordset, and the method will usually include a for loop to iterate over it.
In some cases, a method is written to expect a singleton: a recordset containing no more than one record. The @api.one decorator was deprecated as of version 9.0 and should be avoided. Instead, we should still use @api.multi and add to the method code a line with self.ensure_one(), to ensure it is a singleton.
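The contract behind ensure_one() is simple: it raises an error unless the recordset holds exactly one record. A minimal pure-Python sketch of that behavior (this is an illustration, not Odoo's actual implementation) could look like this:

```python
class RecordSet:
    """Toy stand-in for a recordset, used only to illustrate
    the ensure_one() contract; this is not Odoo code."""

    def __init__(self, records):
        self.records = list(records)

    def __len__(self):
        return len(self.records)

    def ensure_one(self):
        # Raise unless the recordset is a singleton, then return it.
        if len(self) != 1:
            raise ValueError("Expected singleton: %r" % self.records)
        return self
```

A method decorated with @api.multi would call self.ensure_one() as its first statement, and can then safely access single-record attributes on self.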
As mentioned, the @api.one decorator is deprecated but is still supported. For completeness, it might be worth knowing that it wraps the decorated method, feeding it one record at a time and doing the recordset iteration for us. In our method, self is guaranteed to be a singleton. The return values of each individual method call are aggregated into a list and returned.
The @api.model decorator is used for class-level static methods, which do not use any recordset data. For consistency, self is still a recordset, but its content is irrelevant. Note that this type of method cannot be used from buttons in the user interface.
A few other decorators have more specific purposes and are to be used together with the decorators described earlier:

- @api.depends(fld1,...) is used for computed field functions, to identify on what changes the (re)calculation should be triggered
- @api.constrains(fld1,...) is used for validation functions, to identify on what changes the validation check should be triggered
- @api.onchange(fld1,...) is used for on-change functions, to identify the fields on the form that will trigger the action

In particular, the onchange methods can send a warning message to the user interface. For example, this could warn the user that the product quantity just entered is not available in stock, without preventing the user from continuing. This is done by having the method return a dictionary describing the warning message:
return {
    'warning': {
        'title': 'Warning!',
        'message': 'You have been warned',
    }
}
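The logic that decides whether to warn can be kept in plain Python, which makes it easy to try out. The sketch below builds the warning dictionary for the stock example; the quantities passed in are hypothetical, and in a real model an @api.onchange method would supply them from its fields:

```python
def stock_warning(qty, available):
    """Build the on-change result for a quantity that exceeds what is
    available; returns an empty dict when there is nothing to warn about.
    (Plain-Python sketch; the surrounding field names are model-specific.)"""
    if qty > available:
        return {
            'warning': {
                'title': 'Not enough stock',
                'message': 'Requested %s, only %s available' % (qty, available),
            }
        }
    return {}
```

A hypothetical @api.onchange('qty') method would compute the available quantity from the selected product and simply return stock_warning(self.qty, available).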
We have learned about the standard methods provided by the API, but their uses don't end there! We can also extend them to add custom behavior to our models.
The most common case is to extend the create() and write() methods. This can be used to add logic to be triggered whenever these actions are executed. By placing our logic in the appropriate section of the custom method, we can have the code run before or after the main operations are executed.
Using the TodoTask model as an example, we can make a custom create(), which would look like this:
@api.model
def create(self, vals):
    # Code before create: can use the `vals` dict
    new_record = super(TodoTask, self).create(vals)
    # Code after create: can use the `new_record` created
    return new_record
A custom write() would follow this structure:
@api.multi
def write(self, vals):
    # Code before write: can use `self`, with the old values
    super(TodoTask, self).write(vals)
    # Code after write: can use `self`, with the updated values
    return True
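The essence of this extension pattern is plain Python method overriding with super(): code placed before the super() call sees the pre-operation state, and code placed after it sees the result. A framework-free sketch (no Odoo involved) that records the order of execution:

```python
class Base:
    """Stands in for the inherited model's write() implementation."""

    def write(self, vals):
        self.log.append('base write: %s' % sorted(vals))
        return True


class Extended(Base):
    """Overrides write() to run logic before and after the base call."""

    def __init__(self):
        self.log = []

    def write(self, vals):
        self.log.append('before write')   # pre-operation logic
        result = super(Extended, self).write(vals)
        self.log.append('after write')    # post-operation logic
        return result
```

Running Extended().write({'name': 'x'}) appends the three log entries in order: the "before" hook, the base operation, then the "after" hook.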
These are common extension examples, but of course any standard method available for a model can be inherited in a similar way to add our custom logic to it.
These techniques open up a lot of possibilities, but remember that other tools are also available that can be better suited for common specific tasks:
- Note that onchange functions only work on form view interaction, not on direct write calls.
- For validations, we should use constraint functions decorated with @api.constrains(fld1,...). These are like computed fields but, instead of computing values, they are expected to raise errors.

We have seen the most important model methods used to generate recordsets and how to write on them. But there are a few more model methods available for more specific actions, as shown here:
- read([fields]) is similar to the browse method but, instead of a recordset, it returns a list of rows of data with the fields given as its argument. Each row is a dictionary. It provides a serialized representation of the data that can be sent through RPC protocols and is intended to be used by client programs, not in server logic.
- search_read([domain], [fields], offset=0, limit=None, order=None) performs a search operation followed by a read on the resulting record list. It is intended to be used by RPC clients and saves them the extra round trip needed when doing a search followed by a read on the results.
- load([fields], [data]) is used to import data acquired from a CSV file. The first argument is the list of fields to import, and it maps directly to a CSV top row. The second argument is a list of records, where each record is a list of string values to parse and import, and it maps directly to the CSV data rows and columns. It implements the features of the CSV data import described in Chapter 4, Module Data, such as external identifier support. It is used by the web client Import feature. It replaces the deprecated import_data method.
- export_data([fields], raw_data=False) is used by the web client Export function. It returns a dictionary with a data key containing the data: a list of rows. The field names can use the .id and /id suffixes used in CSV files, and the data is in a format compatible with an importable CSV file. The optional raw_data
argument allows for data values to be exported with their Python types, instead of the string representation used in CSV.

The following methods are mostly used by the web client to render the user interface and perform basic interaction:
- name_get() returns a list of (ID, name) tuples with the text representing each record. It is used by default for computing the display_name value, providing the text representation of relation fields. It can be extended to implement custom display representations, such as displaying the record code and name instead of only the name.
- name_search(name='', args=None, operator='ilike', limit=100) returns a list of (ID, name) tuples, where the display name matches the text in the name argument. It is used in the UI while typing in a relation field to produce the list of suggested records matching the typed text. For example, it is used to implement product lookup both by name and by reference, while typing in a field to pick a product.
- name_create(name) creates a new record with only the title name to use for it. It is used in the UI for the "quick-create" feature, where you can quickly create a related record by just providing its name. It can be extended to provide specific defaults for the new records created through this feature.
- default_get([fields]) returns a dictionary with the default values for a new record to be created. The default values may depend on variables such as the current user or the session context.
- fields_get() is used to describe the model's field definitions, as seen in the View Fields option of the developer menu.
- fields_view_get() is used by the web client to retrieve the structure of the UI view to render. It can be given the ID of the view as an argument, or the type of view we want using view_type='form'. For example, you might try this: rset.fields_view_get(view_type='tree').

Python has a command-line interface that is a great way to explore its syntax. Similarly, Odoo also has an equivalent feature, where we can interactively try out commands to see how they work. That is the shell command.
To use it, run Odoo with the shell command and the database to use, as shown here:
$ ./odoo-bin shell -d todo
You should see the usual server startup sequence in the terminal, until it stops at a >>> Python prompt waiting for your input. Here, self will represent the record for the Administrator user, as you can confirm by typing the following:
>>> self
res.users(1,)
>>> self._name
'res.users'
>>> self.name
u'Administrator'
In the preceding session, we do some inspection of our environment. The self variable represents a res.users recordset containing only the record with ID 1. We can also confirm the recordset's model name by inspecting self._name, and get the value of the record's name field, confirming that it is the Administrator user.
As with Python, you can exit the prompt using Ctrl + D. This will also close the server process and return to the system shell prompt.

The shell feature was added in version 9.0. For version 8.0, there is a community back-ported module to add it. Once downloaded and included in the addons path, no further installation is necessary. It can be downloaded from https://www.odoo.com/apps/modules/8.0/shell/.
The server shell provides a self reference identical to what you would find inside a method of the Users model, res.users.
As we have seen, self is a recordset. Recordsets carry environment information with them, including the user browsing the data and additional context information, such as the language and the time zone. This information is important, as some operations can return different results depending on it, for example, depending on the user's language or time zone.
We can start inspecting our current environment with:
>>> self.env
<openerp.api.Environment object at 0xb3f4f52c>
The execution environment in self.env has the following attributes available:
- env.cr is the database cursor being used
- env.uid is the ID of the session user
- env.user is the record for the current user
- env.context is an immutable dictionary with the session context

The environment also provides access to the registry, where all installed models are available. For example, self.env['res.partner']
returns a reference to the Partners model. We can then use search() or browse() on it to retrieve recordsets:
>>> self.env['res.partner'].search([('name', 'like', 'Ag')])
res.partner(7, 51)
In this example, the recordset for the res.partner model contains two records, with IDs 7 and 51.
The environment is immutable, so it can't be modified in place. But we can create a modified copy of it and then run actions using that copy. The following methods can be used for that:
- env.sudo(user) is provided with a user record, and returns an environment with that user. If no user is provided, the Administrator superuser is used, which allows running specific queries that bypass security rules.
- env.with_context(dictionary) replaces the context with a new one.
- env.with_context(key=value,...) modifies the current context, setting values for some of its keys.

Additionally, we have the env.ref()
function, which takes a string with an external identifier and returns the record for it, as shown here:
>>> self.env.ref('base.user_root')
res.users(1,)
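The design behind these helpers is worth noting: instead of mutating the environment, each call returns a new one, leaving the original untouched. A framework-free sketch of that pattern (not Odoo's actual Environment class):

```python
class Env:
    """Tiny illustration of an immutable environment whose
    with_context() returns a modified copy rather than mutating
    the original. This mirrors the design, not Odoo's real code."""

    def __init__(self, uid, context=None):
        self.uid = uid
        self.context = dict(context or {})

    def with_context(self, *override, **extra):
        # A positional dict replaces the context entirely;
        # keyword arguments update a copy of the current one.
        new_ctx = dict(override[0]) if override else dict(self.context)
        new_ctx.update(extra)
        return Env(self.uid, new_ctx)
```

For instance, env.with_context(lang='fr_FR') would return a new environment with lang set, while the original environment keeps its context unchanged, which is exactly why actions run with a modified environment cannot affect code still holding the old one.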
Database writing operations are executed in the context of a database transaction. Usually, we don't have to worry about this as the server takes care of that while running model methods.
But in some cases, we may need finer control over the transaction. This can be done through the database cursor self.env.cr, as shown here:
- self.env.cr.commit() commits the transaction's buffered write operations
- self.env.cr.savepoint() sets a transaction savepoint to roll back to
- self.env.cr.rollback() cancels the transaction's write operations since the last savepoint, or all of them if no savepoint was created

With the cursor execute() method, we can run SQL directly in the database. It takes a string with the SQL statement to run, and a second optional argument with a tuple or list of values to use as parameters for the SQL. These values will be used where %s placeholders are found.
If you're using a SELECT query, the records should then be fetched. The fetchall() function retrieves all the rows as a list of tuples, and dictfetchall() retrieves them as a list of dictionaries, as shown in the following example:
>>> self.env.cr.execute("SELECT id, login FROM res_users WHERE login=%s OR id=%s", ('demo', 1))
>>> self.env.cr.fetchall()
[(4, u'demo'), (1, u'admin')]
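Odoo's cursor wraps psycopg2 and needs a running server, but the same fetch-and-convert idea can be tried with Python's built-in sqlite3 module. Note two assumptions in this sketch: sqlite3 uses ? placeholders where psycopg2 uses %s, and the dictfetchall() helper here is our own re-creation of what Odoo's cursor provides:

```python
import sqlite3


def dictfetchall(cursor):
    """Convert the cursor's remaining rows from tuples to dictionaries,
    mimicking what Odoo's cursor dictfetchall() returns."""
    columns = [col[0] for col in cursor.description]
    return [dict(zip(columns, row)) for row in cursor.fetchall()]


conn = sqlite3.connect(':memory:')
cr = conn.cursor()
cr.execute("CREATE TABLE res_users (id INTEGER, login TEXT)")
cr.execute("INSERT INTO res_users VALUES (1, 'admin'), (4, 'demo')")
# Parameterized query: the values are passed separately, never
# string-interpolated into the SQL, which prevents SQL injection.
cr.execute("SELECT id, login FROM res_users WHERE login=? OR id=? ORDER BY id",
           ('demo', 1))
rows = dictfetchall(cr)
```

Here rows is a list of dictionaries keyed by column name, the same shape the dictfetchall() call in the session above would produce.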
It's also possible to run Data Manipulation Language (DML) instructions, such as UPDATE and INSERT. Since the server keeps data caches, these may become inconsistent with the actual data in the database. Because of that, when using raw DML, the caches should be cleared afterward, using self.env.invalidate_all().