Improve documentation of QofBackendProvider

git-svn-id: svn+ssh://svn.gnucash.org/repo/gnucash/trunk@13116 57a11ea4-9604-0410-9ed3-97b8803252fd
This commit is contained in:
Neil Williams 2006-02-05 10:58:38 +00:00
parent bd4211ad2e
commit 325d74fd90
3 changed files with 93 additions and 66 deletions


@@ -1,3 +1,14 @@
2006-02-05 Neil Williams <linux@codehelp.co.uk>
* lib/libqof/qof/qofbackend-p.h :
* lib/libqof/qof/qofbackend.h : Improving
documentation for QofBackendProvider
* lib/libqof/qof/qofchoice.c :
* lib/libqof/qof/qofchoice.h : Support logging.
* lib/libqof/qof/qoflog.c : Add qofchoice to default
log modules.
* lib/libqof/qof/gnc-engine-util.h : Line-wrapping tweak.
2006-02-04 David Hampton <david@dhcp-15.rainbolthampton.net>
* src/register/ledger-core/split-register.c:


@@ -27,7 +27,7 @@
/** @name Backend_Private
Pseudo-object defining how the engine can interact with different
back-ends (which may be SQL databases, or network interfaces to
remote GnuCash servers. File-io is just one type of backend).
remote QOF servers. File-io is just one type of backend).
The callbacks will be called at the appropriate times during
a book session to allow the backend to store the data as needed.
@@ -50,6 +50,15 @@
#include "qofsession.h"
/**
* The backend_new routine sets the functions that will be used
* by the backend to perform the actions required by QOF. A
* basic minimum is session_begin, session_end, load and
* sync. Any unused functions should be set to NULL. If the
* backend uses configuration options, backend_new must ensure
* that these are set to usable defaults before returning. To use
* configuration options, load_config and get_config must also
* be defined.
*
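The backend_new contract described above can be sketched as follows. This is a minimal illustration using simplified stand-in types and a hypothetical max_log_entries option, not the real QofBackend struct from qofbackend-p.h: unused callbacks stay NULL, and every configuration option gets a usable default before the function returns.

```c
#include <stddef.h>
#include <stdlib.h>

/* Simplified stand-in for QofBackend; the real struct in
 * qofbackend-p.h has many more members. */
typedef struct DemoBackend DemoBackend;
struct DemoBackend {
    void (*session_begin)(DemoBackend *be, const char *book_path);
    void (*session_end)(DemoBackend *be);
    void (*load)(DemoBackend *be);
    void (*sync)(DemoBackend *be);
    void (*run_query)(DemoBackend *be, void *query); /* unused here */
    int   max_log_entries; /* hypothetical configuration option */
};

static void demo_session_begin(DemoBackend *be, const char *path)
{ (void)be; (void)path; }
static void demo_session_end(DemoBackend *be) { (void)be; }
static void demo_load(DemoBackend *be) { (void)be; }
static void demo_sync(DemoBackend *be) { (void)be; }

/* The routine registered as backend_new: set the basic minimum of
 * callbacks, leave the rest NULL, and give every configuration
 * option a usable default before returning. */
DemoBackend *demo_backend_new(void)
{
    DemoBackend *be = calloc(1, sizeof *be); /* calloc NULLs unused slots */
    be->session_begin   = demo_session_begin;
    be->session_end     = demo_session_end;
    be->load            = demo_load;
    be->sync            = demo_sync;
    be->max_log_entries = 500; /* usable default, not zero */
    return be;
}
```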
* The session_begin() routine gives the backend a second initialization
* opportunity. It is suggested that the backend check that
* the URL is syntactically correct, and that it is actually
@@ -80,48 +89,36 @@
* at load time; for SQL-based backends, it is acceptable for the
* backend to return no data.
*
* Thus, for example, for GnuCash, the postrges backend returns
* Thus, for example, the GnuCash postgres backend returned
* the account tree, all currencies, and the pricedb, as these
* are needed at startup. It does not have to return any
* transactions whatsoever, as these are obtained at a later stage
* when a user opens a register, resulting in a query being sent to
* were needed at startup. It did not have to return any
* transactions whatsoever, as these were obtained at a later stage
* when a user opened a register, resulting in a query being sent to
* the backend.
*
* (Its OK to send over transactions at this point, but one should
* (It's OK to send over entities at this point, but one should
* be careful of the network load; also, it's possible that whatever
* is sent is not what the user wanted anyway, which is why it's
* better to wait for the query).
*
* The begin() routine is called when the engine is about to
* make a change to a data structure. It can provide an advisory
* make a change to a data structure. It can provide an advisory
* lock on data.
*
* The commit() routine commits the changes from the engine to the
* backend data storage.
*
* The rollback() routine is used to revert changes in the engine
* and unlock the backend. For transactions it is invoked in one
* of two different ways. In one case, the user may hit 'undo' in
* the GUI, resulting in xaccTransRollback() being called, which in
* turn calls this routine. In this manner, xaccTransRollback()
* implements a single-level undo convenience routine for the GUI.
* The other way in which this routine gets invoked involves
* conflicting edits by two users to the same transaction. The
* second user to make an edit will typically fail in
* trans_commit_edit(), with trans_commit_edit() returning an error
* code. This causes xaccTransCommitEdit() to call
* xaccTransRollback() which in turn calls this routine. Thus,
* this routine gives the backend a chance to clean up failed
* commits.
* and unlock the backend.
*
* If the second user tries to modify a transaction that
* If the second user tries to modify an entity that
* the first user deleted, then the backend should set the error
* to ERR_BACKEND_MOD_DESTROY from this routine, so that the
* engine can properly clean up.
*
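The ERR_BACKEND_MOD_DESTROY case can be sketched like this. Mock types only; the real error enum and rollback signature live in qofbackend.h, and the existence check would be a real storage lookup:

```c
#include <stdbool.h>

/* Simplified stand-ins for the real QOF error codes. */
typedef enum { ERR_BACKEND_NO_ERR = 0, ERR_BACKEND_MOD_DESTROY } BackendError;

typedef struct {
    BackendError last_err;
    bool entity_exists_in_store; /* stand-in for a real lookup */
} MockBackend;

/* rollback(): revert the engine-side edit and release the advisory
 * lock. If another user deleted the entity underneath us, report
 * ERR_BACKEND_MOD_DESTROY so the engine can properly clean up. */
static void mock_rollback(MockBackend *be)
{
    if (!be->entity_exists_in_store) {
        be->last_err = ERR_BACKEND_MOD_DESTROY;
        return;
    }
    be->last_err = ERR_BACKEND_NO_ERR;
    /* ...revert pending changes and drop the advisory lock here... */
}
```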
* The compile_query() method compiles a Gnucash query object into
* The compile_query() method compiles a QOF query object into
* a backend-specific data structure and returns the compiled
* query. For an SQL backend, the contents of the query object
* query. For an SQL backend, the contents of the query object
* need to be turned into a corresponding SQL query statement, and
* sent to the database for evaluation.
*
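For an SQL backend, the compilation step described above might look like this toy version. The single-term query type is hypothetical; the real QofQuery supports arbitrary boolean trees of predicates:

```c
#include <stdio.h>

/* Hypothetical, heavily simplified query: one string-equality term. */
typedef struct { const char *param; const char *value; } TinyQuery;

/* compile_query(): translate the engine's query object into the
 * backend's native form -- here, a complete SQL statement. */
static void compile_query_sql(const TinyQuery *q, char *out, size_t outlen)
{
    snprintf(out, outlen, "SELECT * FROM entities WHERE %s = '%s';",
             q->param, q->value);
}
```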
@@ -130,8 +127,8 @@
*
* The run_query() callback takes a compiled query (generated by
* compile_query) and runs the query across the backend,
* inserting the responses into the engine. The database will
* return a set of splits and transactions, and this callback needs
* inserting the responses into the engine. The database will
* return a set of splits and transactions and this callback needs
* to poke these into the account-group hierarchy held by the query
* object.
*
@@ -140,27 +137,26 @@
* protocol, get an answer from the remote server, and push that
* into the account-group object.
*
* Note a peculiar design decision we've used here. The query
* callback has returned a list of splits; these could be returned
* directly to the caller. They are not. By poking them into the
* existing account hierarchy, we are essentially building a local
* cache of the split data. This will allow the GnuCash client to
* The returned list of entities can be used to build a local
* cache of the matching data. This will allow the QOF client to
* continue functioning even when disconnected from the server:
* this is because it will have its local cache of data to work from.
* this is because it will have its local cache of data from which to work.
*
* The sync() routine synchronizes the engine contents to the backend.
* This is done by using version numbers (hack alert -- the engine
* This should be done by using version numbers (hack alert -- the engine
* does not currently contain version numbers).
* If the engine contents are newer than what's in the backend, the
* data is stored to the backend. If the engine contents are older,
* If the engine contents are newer than what is in the backend, the
* data is stored to the backend. If the engine contents are older,
* then the engine contents are updated.
*
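The version-number comparison the text envisions could be sketched as follows. This is hypothetical, since as the hack alert notes the engine does not yet carry version numbers:

```c
/* Hypothetical per-book version counters. */
typedef struct { long engine_version, backend_version; } SyncState;

typedef enum { SYNC_STORE, SYNC_UPDATE_ENGINE, SYNC_NOOP } SyncAction;

/* Newer engine contents are stored to the backend; older engine
 * contents are updated from it; equal versions need no work. */
static SyncAction sync_direction(const SyncState *s)
{
    if (s->engine_version > s->backend_version) return SYNC_STORE;
    if (s->engine_version < s->backend_version) return SYNC_UPDATE_ENGINE;
    return SYNC_NOOP;
}
```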
* Note that this sync operation is only meant to apply to the
* current contents of the engine. This routine is not intended
* to be used to fetch account/transaction data from the backend.
* (It might pull new splits from the backend, if this is what is
* needed to update an existing transaction. It might pull new
* currencies (??))
* current contents of the engine. This routine is not intended
* to be used to fetch entity data from the backend.
*
* File-based backends tend to use sync as if it were called dump.
* Data is written out into the backend, overwriting the previous
* data. Database backends should implement a more intelligent
* solution.
*
* The counter() routine increments the named counter and returns the
* post-incremented value. Returns -1 if there is a problem.
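A toy in-memory implementation of that counter contract, with persistence omitted and all names illustrative:

```c
#include <string.h>

/* Toy in-memory counter table; a real backend would persist these. */
#define MAX_COUNTERS 8
static struct { char name[32]; long value; } counters[MAX_COUNTERS];
static int n_counters;

/* counter(): increment the named counter and return the
 * post-incremented value, or -1 on a problem (table full or
 * name too long, in this toy version). */
static long counter_incr(const char *name)
{
    for (int i = 0; i < n_counters; i++)
        if (strcmp(counters[i].name, name) == 0)
            return ++counters[i].value;
    if (n_counters == MAX_COUNTERS || strlen(name) >= sizeof counters[0].name)
        return -1;
    strcpy(counters[n_counters].name, name);
    counters[n_counters].value = 1; /* post-incremented from zero */
    n_counters++;
    return 1;
}
```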
@@ -191,13 +187,12 @@
* Call the book commit() to complete the book partitioning.
*
* After the begin(), there will be a call to run_query(), followed
* probably by a string of account and transaction calls, and
* completed by commit(). It should be explicitly understood that
* the results of that run_query() precisely constitute the set of
* transactions that are to be moved between the initial and the
* new book. This specification can be used by a clever backend to
* avoid excess data movement between the server and the gnucash
* client, as explained below.
* probably by a string of object calls, and completed by commit().
* It should be explicitly understood that the results of that
* run_query() precisely constitute the set of objects that are to
* be moved between the initial and the new book. This specification
* can be used by a clever backend to avoid excess data movement
* between the server and the QOF client, as explained below.
*
* There are several possible ways in which a backend may choose to
* implement the book splitting process. A 'file-type' backend may
@@ -208,32 +203,30 @@
*
* A 'database-type' backend has several interesting choices. One
* simple choice is to simply perform the run_query() as it
* normally would, and likewise treat the account and transaction
* edits as usual. In this scenario, the commit() is more or less
* a no-op. This implementation has a drawback, however: the
* run_query() may cause the transfer of a *huge* amount of data
* between the backend and the engine. For a large dataset, this
* is quite undesirable. In addition, there are risks associated
* with the loss of network connectivity during the transfer; thus
* a partition might terminate half-finished, in some indeterminate
* state, due to network errors. That might be difficult to
* recover from: the engine does not take any special transactional
* safety measures during the transfer.
* normally would, and likewise treat the object edits as usual.
* In this scenario, the commit() is more or less a no-op.
* This implementation has a drawback, however: the run_query() may
* cause the transfer of a <b>huge</b> amount of data between the backend
* and the engine. For a large dataset, this is quite undesirable.
* In addition, there are risks associated with the loss of network
* connectivity during the transfer; thus a partition might terminate
* half-finished, in some indeterminate state, due to network errors.
* It might be difficult to recover from such errors: the engine does
* not take any special safety measures during the transfer.
*
* Thus, for a large database, an alternate implementation
* might be to use the run_query() call as an opportunity to
* transfer transactions between the two books in the database,
* transfer entities between the two books in the database,
* and not actually return any new data to the engine. In
* this scenario, the engine will attempt to transfer those
* transactions that it does know about. It does not, however,
* need to know about all the other transactions that also would
* entities that it does know about. It does not, however,
* need to know about all the other entities that also would
* be transferred over. In this way, a backend could perform
* a mass transfer of transactions between books without having
* a mass transfer of entities between books without having
* to actually move much (or any) data to the engine.
*
*
* To support configuration options from the frontend, the backend
* can be passed a GHashTable - according to the allowed options
* can be passed a KvpFrame - according to the allowed options
* for that backend, using load_config(). Configuration can be
* updated at any point - it is up to the frontend to load the
* data in time for whatever the backend needs to do. e.g. an
@@ -241,6 +234,10 @@
* loaded until the backend is about to save. If the configuration
* is updated by the user, the frontend should call load_config
* again to update the backend.
*
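The load_config flow might look like the following sketch, with a one-field stand-in for KvpFrame and a hypothetical compression option:

```c
#include <string.h>

/* One-field stand-in for a KvpFrame; the real KvpFrame is a typed,
 * hierarchical key-value store. */
typedef struct { const char *compression; } FakeKvpFrame;

typedef struct { char compression[16]; } CfgBackend;

/* Called from the backend_new path: defaults must be usable before
 * the backend is ever handed out. */
static void cfg_backend_init(CfgBackend *be)
{
    strcpy(be->compression, "none");
}

/* load_config(): copy whatever the frontend supplies over the
 * defaults. The frontend may call this again whenever the user
 * updates an option. */
static void cfg_load_config(CfgBackend *be, const FakeKvpFrame *frame)
{
    if (frame && frame->compression)
        strncpy(be->compression, frame->compression,
                sizeof be->compression - 1);
}
```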
* Backends are responsible for ensuring that any supported
* configuration options are initialised to usable values.
* This should be done in the function called from backend_new.
*/
struct QofBackendProvider_s
@@ -249,7 +246,7 @@ struct QofBackendProvider_s
const char * provider_name;
/** The access method that this provider provides, for example,
* http:// or postgres:// or rpc://, but without the :// at the end
* file:// http:// postgres:// or sqlite://, but without the :// at the end
*/
const char * access_method;
@@ -261,7 +258,11 @@ struct QofBackendProvider_s
*/
gboolean partial_book_supported;
/** Return a new, initialized backend backend. */
/** Return a new, fully initialized backend.
*
* If the backend supports configuration, all configuration options
* should be initialised to usable values here.
*/
QofBackend * (*backend_new) (void);
/** \brief Distinguish two providers with same access method.
@@ -316,7 +317,18 @@ struct QofBackend_s
QofBackendProvider *provider;
/** Document Me !!! what is this supposed to do ?? */
/** Detect if the sync operation will overwrite data
*
* File based backends tend to consider the original file
* as 'stale' as soon as the data finishes loading. New data
* only exists in memory and the data in the file is completely
* replaced when qof_session_save is called. For example, this routine can be
* used to detect if a Save As... operation would overwrite a
* possibly unrelated file. Not all file backends use this function.
*
* @return TRUE if the user may need to be warned about possible
* data loss, otherwise FALSE.
*/
gboolean (*save_may_clobber_data) (QofBackend *);
QofBackendError last_err;


@@ -185,6 +185,10 @@ qof_backend_get_config, qof_backend_option_foreach and qof_backend_load_config
are intended for either the backend or the frontend to retrieve the option data
from the frame or set new data.
Backends are loaded using QofBackendProvider via the function specified in
prov->backend_new. Before backend_new returns, you should ensure that your
backend is fully configured and ready for use.
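A provider definition following the struct documented above might be sketched as below. This is a simplified mirror of QofBackendProvider_s with an illustrative field set; the real struct has further members such as check_data_type:

```c
#include <stddef.h>

/* Simplified mirror of QofBackendProvider_s. */
typedef struct DemoProvider {
    const char *provider_name;
    const char *access_method; /* e.g. "file", without the "://" */
    int partial_book_supported;
    void *(*backend_new)(void);
} DemoProvider;

static void *demo_new(void)
{
    return NULL; /* a real provider would allocate and initialize a backend */
}

/* The provider entry registered with the session layer. */
static DemoProvider demo_provider = {
    .provider_name = "Demo Backend",
    .access_method = "file", /* note: no trailing "://" */
    .partial_book_supported = 0,
    .backend_new = demo_new,
};
```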
@{
*/