Remove obsolete documentation texts.

These are so insanely obsolete that it hurts our eyes.

git-svn-id: svn+ssh://svn.gnucash.org/repo/gnucash/trunk@21568 57a11ea4-9604-0410-9ed3-97b8803252fd
Christian Stimming 2011-11-14 19:43:20 +00:00
parent 0992192e39
commit 411261ae17
10 changed files with 10 additions and 480 deletions

View File

@@ -22,7 +22,6 @@ tips_DATA = tip_of_the_day.list
EXTRA_DIST = \
${doc_DATA} \
misc-notes.txt \
README.build-system \
README.HBCI \
README.OFX \
@@ -30,10 +29,9 @@ EXTRA_DIST = \
TRANSLATION_HOWTO \
build-aix.txt \
build-solaris.txt \
generic_objects.txt \
gnome-hackers.txt \
gnucash.1.in \
gtkrc-2.0.gnucash \
misc-notes.txt \
tip_of_the_day.list.in
## We borrow guile's convention and use @-...-@ as the substitution

View File

@@ -1,14 +0,0 @@
This document attempts to summarize discussion on gnucash-devel about
generic objects in the engine. This discussion took place between
2001-11-17 and ... (in case you want to find the archives) and the
subject was "GncBusiness v. GNCSession".
One part of the problem, explained:
> > That is the whole point. The problem is that there are no generic
> > hooks into the GNCSession to store the GNCEntityTable for each
> > object-type; there is no hook in the GNCBook to store the object
> > tables (list of existing Customers, Vendors, Invoices, etc); there
> > is no hook in the Backend structure to load or save these objects;
> > there is no hook in the Query structure to search these objects.

View File

@@ -1,34 +0,0 @@
-*-text-*-
This file is intended to contain information for those interested in
working on the GNOME bits of GnuCash.
Memory Management (care with reference counting):
-------------------------------------------------
I was unsure about when you're supposed to _unref widgets, etc., and
getting this right is critical to avoiding memory leaks on the one
hand and dangling pointers on the other. So I asked on the gtk list,
and here was the result:
On 16 Aug 1999, Rob Browning wrote:
>
> I've been poking around the gtk web site and in the docs for
> information on when you're supposed to call gtk_widget_unref. I want
> to make sure I'm handling this right so I don't introduce memory
> leaks, but so far I haven't found anything describing the rules.
> Actually I'd like to know what the guidelines are for all the *_unref
> functions...
>
Read gtk+/docs/refcounting.txt (or something like that).
Also I think some babble about object finalization at
http://pobox.com/~hp/gnome-app-devel.html (follow link to sample
chapters) might be helpful.
Basically you have to unref a widget you never use, but if you put it
in a container the container "assumes" the initial refcount of 1 and
the widget will be deleted along with the container.
Havoc
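In today's terms, the rule above looks roughly like this (a sketch
using the GObject floating-reference calls rather than the GTK 1.2
API the mail discusses; the container "box" is assumed to exist):

  #include <gtk/gtk.h>

  void example (GtkContainer *box)
  {
      /* A newly created widget starts with a floating reference. */
      GtkWidget *button = gtk_button_new_with_label ("OK");

      /* The container sinks the floating reference and assumes
       * ownership: the button is destroyed along with the container. */
      gtk_container_add (box, button);

      /* A widget that is never parented must be released by hand,
       * or it leaks. */
      GtkWidget *orphan = gtk_label_new ("never shown");
      g_object_ref_sink (orphan);  /* take over the floating reference */
      g_object_unref (orphan);     /* ...and drop it */
  }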

View File

@@ -5,29 +5,25 @@ SUBDIRS = \
EXTRA_DIST = \
README \
TODO-schedxactions \
TODO-sixtp \
backup.txt \
books.txt \
budget.txt \
callgrind.txt \
constderv.html \
currencies.txt \
doxygen.cfg.in \
doxygen_main_page.c \
finderv.html \
finutil.html \
plugin.txt \
tax.txt \
doxygen_main_page.c \
doxygen.cfg.in \
TODO-schedxactions \
TODO-sixtp \
backend-api.txt \
backend-errors.txt \
books.txt \
currencies.txt \
generic-druid-framework.txt \
guid.txt \
loans.txt \
lots.txt \
multicurrency-discussion.txt \
netlogin.txt \
guid.txt \
qif.txt \
generic-druid-framework.txt \
tax.txt \
user-prefs-howto.txt \
python-bindings-doxygen.py

View File

@@ -1,85 +0,0 @@
/** \page backendapi QOF Backend Design
Derek Atkins
<derek@ihtfp.com>
Neil Williams
<linux@codehelp.co.uk>
Created: 2002-10-07
Updated: 2005-05-22
API: \ref Backend
\section outline Outline:
The Backend Query API allows caching of a Query (meaning the Backend
does not have to recompile the Query every time the query is executed).
\section newobjects New QOF Objects
The engine has a set of APIs to load new data types into the engine.
The Backends use this as well. There is a noticeable difference
between QOF and GnuCash: GnuCash extends the backend by defining routines
for each object. QOF backends handle each object equally and generically.
A new object is declared using the base QOF types and zero or more
references to other QOF objects. Each QOF backend must handle each
base QOF type and references to other objects - usually by storing
the type of the referenced object and the GUID of the referenced object
as the value and the GUID of the original object as the key in a
GHashTable or other lookup mechanism within the backend. See
::QofInstanceReference.
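For illustration, a backend might keep such a lookup table like this
(a minimal sketch: RefRecord and its fields are invented stand-ins for
::QofInstanceReference; the GUID hash helpers are those declared in
qof's guid.h):
\verbatim
/* what a stored reference needs to remember */
typedef struct {
    const char *ref_type;   /* type of the referenced object */
    GncGUID     ref_guid;   /* GUID of the referenced object */
} RefRecord;

/* Map the GUID of the referring object to what it points at, so
 * the reference can be resolved once the whole book is loaded. */
static GHashTable *
make_reference_table (void)
{
    return g_hash_table_new_full (guid_hash_to_guint,
                                  guid_g_hash_table_equal,
                                  NULL, g_free);
}
\endverbatim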
\section backendqueries Handling Queries:
The backend query API is broken into three pieces:
\subsection querycompile Compiling a QofQuery
\verbatim
gpointer (*query_compile)(QofBackend* be, QofQuery* query);
\endverbatim
compiles a QofQuery* into whatever QofBackend language is necessary.
\subsection queryfree Free the query
\verbatim
void (*query_free)(QofBackend* be, gpointer query);
\endverbatim
frees the compiled Query (obtained from the query_compile method).
\subsection queryrun Run the query
\verbatim
void (*query_run)(QofBackend* be, gpointer query);
\endverbatim
executes the compiled Query and inserts the responses into the
engine. It will search for the type corresponding to the
Query search_for type: gncQueryGetSearchFor(). Note that the
search type CANNOT change between compilation and execution;
the query infrastructure maintains that invariant.
In this manner, a Backend (e.g. the dbi backend) can compile the
Query into its own format (e.g. a SQL expression) and then use the
pre-compiled expression every run instead of rebuilding the
expression.
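A hypothetical implementation of the three methods in such a backend
might look like this (a sketch only; the my_* helpers are invented):
\verbatim
typedef struct {
    char *sql;   /* the pre-compiled expression */
} MyCompiledQuery;

static gpointer
my_query_compile (QofBackend *be, QofQuery *query)
{
    MyCompiledQuery *cq = g_new0 (MyCompiledQuery, 1);
    cq->sql = my_translate_to_sql (query);   /* invented helper */
    return cq;
}

static void
my_query_free (QofBackend *be, gpointer query)
{
    MyCompiledQuery *cq = query;
    g_free (cq->sql);
    g_free (cq);
}

static void
my_query_run (QofBackend *be, gpointer query)
{
    MyCompiledQuery *cq = query;
    /* execute cq->sql and insert the resulting objects into the
     * engine (invented helper) */
    my_execute_and_load (be, cq->sql);
}
\endverbatim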
There is an implementation issue in the case of Queries across
multiple Books. Each book could theoretically be in a different
backend, which means we need to tie the compiled query to the book's
Backend for which it was compiled. This is an implementation detail,
and not even a challenging one, but it needs to be clearly
acknowledged up front.
\section backendload When to load data?
Data loads into the engine at two times, at start time and at query
time. Loading data during queries is discussed above. This section
discusses data loaded at startup.
\verbatim
void session_load(QofBackend*, QofBook*);
\endverbatim
This single API loads all the necessary "start-time" data. There is no
need to have multiple APIs for each of the data types loaded at start-time.
*/
============================== END OF DOCUMENT =====================

View File

@@ -1,94 +0,0 @@
/** \page backenderrors Handling Backend Communications Errors
Architectural Discussion
December 2001
Proposed/Reviewed, Linas Vepstas, Dave Peticolas
Updated and adapted for general QOF usage, May 2005
Neil Williams <linux@codehelp.co.uk>
API: \ref Backend
\section backendproblem Problem:
What to do if a serious error occurs in a backend while
QOF is being used? For example, what happens if the connection
to a SQL server is lost, because the SQL server has died, and/or
because there is a network problem (unplugged ethernet cable, etc.)
With the QSF backend, what happens if the write operation fails?
(disk full, permission failure, etc.)
\section backendgeneric The "Generic Handler, Report it to the User" idea:
Go ahead and close the connection / clean up, but then return to
QOF in some nice way; use qof_session_get_error to report the error
to any GUI using program-specific handlers, and then
allow the user to initiate a new session (or maybe try to do it
automatically), all without deleting any data.
I like this for several reasons:
- it's generic: it can handle any backend error anywhere in the code.
You don't have to second-guess based on whether some recent query
may or may not have completed.
- I believe that reconnect will be quicker, because you won't need
to reload piles of accounts and transactions.
- If the user can't reconnect, then they can always save to a file.
This can be a double bonus if done right: e.g. user works on laptop,
saves to file, takes laptop to airport, works off-line, and then
syncs her changes back up when she goes on-line again.
\section backendresults Discussion:
Should the backend try reconnecting first, or just go ahead and
return an error condition immediately? If the latter, then the
current backend error-handling can just stay as it is, and the GUI
code needs to add checks in several places, right?
The problem with automatic reconnect from within the backend is that you
don't know quite where to restart... or rather, you have trouble getting
to the right place to restart.
You can't just re-login and reissue the commit. You really need
to rewind to the beginning of the subroutine. How can you do this?
Alternative 1) wrap the routine and retry three times.
Alternative 2) throw an error, let some much higher layer catch it.
Well, approach 1) seems reasonable... until you think about what happens
if three retries don't cut it: then you have to throw an error
anyway, and hope the higher layer deals with it. So even if you
implement 1), you *still* have to implement 2) anyway.
So my attitude is to skip doing 1 for now (maybe we can add it later)
and just make sure that when we "throw" the error, it really does behave
like a throw should behave, and short-cuts its way up to where it's
caught. The catcher should probably be a few strategic places in the
GUI, like wherever a QofQuery() is issued, and wherever an
object is edited.
What's the point of doing 2 cleanly? Because I suspect that most
network / filesystem errors won't be automatically recoverable.
Most likely, either someone tripped over an ethernet cable, or the
server crashed or the disc is full and you gotta call the sysadmin on
the phone or clear out some files, etc. The goal is not to crash the
client when the backend reports an error, but rather let the user
continue to work.
\section errorreport How to Report Errors to the GUI
How would the engine->GUI error reporting happen? A direct callback?
Or having the GUI always check for session errors?
We should use the session error mechanism for reporting these errors.
Note that the API allows a simple 'try-throw-catch' style error
handling in C. Because we don't/can't unwind the stack as a true
'throw' would, we need to make sure that when we "throw" the error,
it emulates this as best it can: it short-cuts its way up and out of
the engine, to where it's caught in the GUI.
If there are a *lot* of places where these calls are
issued, simplify things by implementing your own callback mechanism.
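The "catch" side in a GUI might look like this (a sketch;
gnc_error_dialog stands in for whatever reporting the frontend
provides):
\verbatim
QofBackendError err = qof_session_get_error (session);
if (err != ERR_BACKEND_NO_ERR)
{
    /* report the error, then let the user decide what to do:
     * reconnect, keep working off-line, or save to a file */
    gnc_error_dialog (parent, "%s",
                      qof_session_get_error_message (session));
    qof_session_pop_error (session);
}
\endverbatim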
*/
============================== END OF DOCUMENT =====================

View File

@@ -1,20 +0,0 @@
/** \page networkoverview GnuCash Network Login
A quick sketch of how network login works when using the xml-web backend
for communicating with a gnucash server.
-# The user enters a URL via a GUI dialogue. The location is assumed
to be plain html, and is displayed with gnc_html_show_url()
in its own window.
-# The displayed page is presumably some kind of login page. It is not
gnucash specific; it is entirely up to the webmaster or sysadmin
to provide, modify, etc. the login & authentication information.
The user types in a name, password, whatever.
-# The authentication mechanism issues a guid which will be used
to identify the session. The guid is placed in a cookie labelled
"gnc-server-sesion-guid=xxxxxxxxxxxxxxxxxxxxx"\n
Because a cookie can be snoopedand then used to steal a session,
the only secure way of doing this is to use SSL.
-# The cookie is used to identify the session to the gnc-server.
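For illustration, the exchange might look like this (hypothetical
host name; the guid placeholder is abbreviated):
\verbatim
HTTP/1.1 200 OK
Set-Cookie: gnc-server-session-guid=xxxxxxxx...; Secure

GET /accounts HTTP/1.1
Host: gnc-server.example.org
Cookie: gnc-server-session-guid=xxxxxxxx...
\endverbatim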
*/

View File

@@ -1,193 +0,0 @@
/** \page plugindesign Plugin design proposal
Date: Mon, 19 Oct 1998 11:08:53 +0200\n
From: Stephan Lichtenauer <s_lichtenauer@muenchen.org>\n
To: linas@linas.org\n
Subject: [Fwd: ANNOUNCE version 1.1.20]
\section pluginoutline DESIGN PROPOSAL (ROUGH OUTLINE)
I thought that there is only one engine that manages runtime data; to
store and read it, it uses the I/O plugins, which handle the data in a very
abstract way, i.e. they know only as much about it as absolutely necessary, so
changes of data representation affect only the engine and not the plugins (in
most cases). Nevertheless I would say that they are backends, since they do not
need a UI. Necessary addresses/file names can be obtained in a standardized way
through the application. It could work with the CORBA persistence framework, if I
remember right, and the GNOME VFS.
\subsection pluginengine Engine
Split the existing engine in the following classes that match the existing data
structures:
- GncAccountGroup
- GncAccount
- GncSplit
- GncTransaction
- GncEngine
These five classes first of all simply use the design already used in the
engine. Additionally I would introduce a class
- GncRegistry
that is used to store some metadata used by e.g. plugins or the user
interface. Since there is, in my eyes, a need for two different classes of metadata,
a global and an account-group-related one, I would give GncAccountGroup as well as
GncEngine a gncGetRegistry method that returns the local and the global registry
object. The registry can store its data in the account database and (global
data) in the config file in the user's root directory. An example of global
metadata for my plugin would be a database of all REUTERS codes of stocks
available; in the local registry (the account-group-related one) the plugin can
save e.g. which codes have been selected to be updated etc. A UI could store
general options (e.g. user preferences) in the global registry and e.g. the last
position of windows in the local registry.
GncEngine could as well be a metaclass since it only has to represent the engine
as such, with methods like gncReadAccountGroup.
GncSplit could be an abstract class whose functionality can be implemented
in derived concrete classes, e.g. for simple money transfers, purchases of securities
(which will be further divided into stocks, futures, bonds) etc. Alternatively,
additional data could be stored in an extended comment field with every split in
MIME format. Infrastructure for that is already partially implemented but is
(in my eyes) not a perfectly clear OOP design.
One GncSplit subclass is GncComment that can store general data like company
statements. Since this could be data that affects more than one account (e.g.
general economical data), it is stored centralized and the GncComment object in
the different accounts only point on this data. GncSplit subclasses that
represent any type of securities will have to have additional fields, e.g. to
store maturities, volume (for special price quotes) etc.
In the design outline on the webpage a transparent mechanism to read data from
different sources is proposed. I would realize this completely with plug-in
objects. So I would introduce
- GncPlugIn
to be the base class of all plugins. How to make plugins independent of UI
toolkits (GTK, QT/KDE), I do not know, since they will need a UI in many cases.
- GncIOPlugIn
is derived from it and will be the base class for all I/O plugins, e.g.
GncIOXacc, GncIOQif, GncIOSql, GncIOOnlineBanking, GncIOReadWww (yeah, my plugin!)
etc. Since some of the media are non-persistent, i.e. it is not certain whether you
will get the data again in the future (e.g. from web pages), GncIOPlugIn has a method
gncIsPersistentMedia that returns FALSE in such cases. Then the data obtained
from this source can be copied to a persistent medium (e.g. the SQL database).
An I/O plugin has a limited lifespan and is only used to read/write data from/to
accounts/account groups/splits, or to create these objects as appropriate when
reading data. (A sketch of this interface in C follows the example below.)
One example:
You make a bank transfer in your account. The data is written to the
GncIOOnlineBanking object, which uses the WWW interface of your bank to give the
data to your bank. Then it reads the state of the transfer (via an online
request to your bank account balance), which will then appear in your account.
Possibly the GncIOOnlineBanking plugin is not persistent, i.e. you will not get a
full record of all of your past transactions via online banking, so the
data is stored locally (e.g. via GncIOPlugInXacc).
One account group can thus use many I/O plugins at the same time to get its data.
Perhaps the IO plugins could be based on the GNOME VFS (virtual file system), if
this is not an unwanted dependency.
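Rendered in C, the proposed base class might be a vtable like this
(a sketch; everything beyond the names in the proposal is invented):
\verbatim
typedef struct GncIOPlugIn GncIOPlugIn;
struct GncIOPlugIn {
    const char *name;   /* e.g. "GncIOQif" */
    /* FALSE for sources like web pages that cannot replay old
     * data; the engine then copies what it reads to a
     * persistent medium. */
    gboolean (*gncIsPersistentMedia) (GncIOPlugIn *self);
    gboolean (*read_group)  (GncIOPlugIn *self, GncAccountGroup *grp);
    gboolean (*write_group) (GncIOPlugIn *self, GncAccountGroup *grp);
};

static gboolean
www_is_persistent (GncIOPlugIn *self)
{
    return FALSE;   /* GncIOReadWww: quote pages are transient */
}
\endverbatim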
In this system of course there is one big problem: how to map the data obtained
from a source to the accounts. If you read it from your account group file, of
course you have account IDs and account names, but if you read from www stock
quotes or MetaStock data or online banking interfaces it is not that simple;
and it also has to work the other way round. So we need a
- GncDataMapper
class that has at least two methods gncFindAccounts and gncFindSources. Both get a
- GncMapInfo
struct as parameter that contains information on how to find the account or the
source. Both return a vector of GncMapInfos that contain information about
the matches. When you call gncFindAccounts you normally fill the fields with data
like the banking account number, REUTERS codes etc., and the method returns a
list of accounts that could match. If there is more or fewer than one account, the
user could be prompted to help; the data obtained from a successful match will be
stored in the registry by the GncDataMapper so it can be used in the future and
to do reverse mapping. Reverse mapping is what gncFindSources is for. There you
fill the GncMapInfo with things like accountId, account name, comments etc.
and the GncDataMapper tries to find (with the help of its registry if there is
already some data available) some sources (e.g. quotes web pages, online banking
interfaces, company web pages etc.) and the IO plugins for this account. Again
user help could be involved the first time or later again if the user wants to
modify the matches. How to actually do the mapping is the job of the GncDataMapper;
it could use regexp libraries etc. The simplest implementation would be a
simple user query where the user has to find the matches. (A sketch of the
interface follows the example below.)
Example:
When I have an account called "Silicon Graphics stocks" and I want to obtain
stock quotes from the web, I have to find a web page to get them from. When
I have a plug-in for that, I could deliver it with a database containing some
addresses for stock quotes for REUTERS code "SGI" (or in Germany we have the
very easy-to-remember six-digit numbers, e.g. 872981 for SGI); this database will
be appended to the data mapper database. But now we have to find out that "SGI"
is what we are searching for; this is what the mapper does, e.g. with regexps. It now
finds "SGI" besides some other sources, and the user can select the one(s) he
wants. The mapper stores the selection so it can do this automatically in the
future and also in reverse. If the quotes plugin has a cron mechanism to update
some selected quotes every 1.5 seconds, the data mapper knows where the "SGI"
data goes now. The same with online banking: map the accountId/-name onto the
bank code/account number etc. if I have a remittance, and get the accountId/-name
when a bank statement comes in. So the mapper is a kind of "dynamic
linking" mechanism (with memory, so a little bit static, too, yes) to allow the
engine to search and parametrize the necessary plugins and to find out what to
do with the data they deliver.
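A sketch of the mapper interface as described (the field names and
return types are invented for illustration):
\verbatim
typedef struct GncDataMapper GncDataMapper;

typedef struct {
    const char *account_id;       /* filled for gncFindSources */
    const char *account_name;
    const char *bank_account_no;  /* filled for gncFindAccounts */
    const char *security_code;    /* e.g. REUTERS code "SGI" */
} GncMapInfo;

/* Both return a list of candidate GncMapInfo matches; the user
 * resolves ambiguities, and the mapper remembers the choice in
 * the registry for future (and reverse) lookups. */
GList *gncFindAccounts (GncDataMapper *mapper, const GncMapInfo *info);
GList *gncFindSources  (GncDataMapper *mapper, const GncMapInfo *info);
\endverbatim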
A second type would be the
- GncReportPlugIn
that is used for report generation/data analysis. A report interface that
generates HTML code (that can then be translated to SGML, PS, TeX etc. via an
additional tool) is proposed on the developers' web page. So I would propose a
UI that is an HTML designer, very much like a word processor or the Mathematica
frontend, and include automatically generated reports (tables for account
balances, graphs etc.) with a reserved HTML keyword that also allows you to
specify some layout parameters like size and position. When the document finally
is generated, a parser calls the appropriate report plugin for each of these
keywords and replaces it with the output of the plugin (which has to be pure HTML
code with embedded images etc.). Chaining of report plug-ins could be helpful,
e.g. first filter stock quotes from noise with a plugin before they are used in
a technical analysis tool.
Finding and linking the plugins to the engine could easily be done via the CORBA
repository.
Report plugins first of all have to have a kind of
\verbatim
string gncMakeReport(string params);
\endverbatim
method that gets the parameters stored in the HTML command (e.g. accountId, data
range, graph type etc.; these have to be obtained through dialogs that are
maintained by the plugin itself, which is why I said I do not know how to
make plugins toolkit-independent) and returns the generated HTML code. They
have to have a second method to display this dialog, and a
general plug-in mechanism that allows finding and loading GnuCash CORBA plugins;
this of course is already introduced in the GncPlugIn class.
If plugins are chainable, gncMakeReport has to be modified/extended so that
plugins can get an account(-group) as input and return a new, modified
account(-group) that could then be input for a second plugin (e.g. the one that
finally creates the HTML code). A usage for that could be a filter that eliminates
noise in stock quotes and so generates new quotes that are then used as input to a
Chaikin Oscillator plugin.
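A chainable variant might have a signature like this (hypothetical,
following the proposal's naming style):
\verbatim
/* Filter plugins transform an account group; the last plugin in
 * the chain emits HTML via gncMakeReport() as before. */
GncAccountGroup *gncFilterAccounts (GncReportPlugIn *self,
                                    GncAccountGroup *input,
                                    const char *params);
\endverbatim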
I hope it is in line with all the other proposals (e.g. scripting, budget engine
etc). I did not mention script languages in this document; I think it should be
possible to access the engine via CORBA bindings for all these languages and to
even create new classes or derive them, so writing plugins should be easily
possible with compiled languages as well as with interpreted ones.
Stephan Lichtenauer
Rassosiedlung 25
82284 Grafrath
s_lichtenauer@muenchen.org
*/

View File

@@ -52,7 +52,6 @@ SCM_FILES = ${gncscm_DATA} ${gncscmmod_DATA}
EXTRA_DIST = \
build-config.scm.in \
config \
startup-design.txt \
${SCM_FILES}
## We borrow guile's convention and use @-...-@ as the substitution

View File

@@ -1,23 +0,0 @@
/** \page schemestart Scheme startup process
\section current The startup process looks like this right now:
- gnucash is a hybrid /bin/sh and guile script which first execs
gnucash-env on itself to set up the proper environment and then
runs the rest of the code in the gnucash file as a guile script.
- from the gnucash script itself, the (gnucash bootstrap) is loaded,
and then control transfers to the scheme function, main.
- the current module is set to (gnucash bootstrap) -- this is a hack
and should change later.
- gnc:main is called to finish starting up the application.
- parse the command line
- load the system config if we haven't already (there's a
command-line option to load the file earlier, --load-system-config)
- load the user's ~/gnucash/config.user if it exists, otherwise
load the user's ~/gnucash/config.auto if it exists.
config.auto is where we'll eventually spit out UI selected prefs.
*/
----- %< -------------------------------------------- >% ------
Rob Browning <rlb@cs.utexas.edu> PGP=E80E0D04F521A094 532B97F5D64E3930