QSqlQuery::executedQuery() -- bound values versus placeholders



davethomaspilot
16th January 2014, 15:05
After doing an insert into a local MySQL database, I want to log the SQL text to a file and use it later for inserting into a different (remote) database when a connection is available.

The docs for QSqlQuery::executedQuery() say:

The placeholders in the original query are replaced with their bound values to form a new query.

But, for an insert, I see "?" for all the values:


"INSERT INTO TEAM_RECORDS (event_id, race, heat, today, left_team_time, left_start_time, left_result, right_team_time, right_start_time, right_result) VALUES ( ?,?,?,?,?,?,?,?,?,?)"


I also tried QSqlQuery::lastQuery(), but the SQL text has the placeholders, not the values:


"INSERT INTO TEAM_RECORDS (event_id, race, heat, today, left_team_time, left_start_time, left_result, right_team_time, right_start_time, right_result) VALUES ( :event_id,:race,:heat,:today,:left_team_time,:left _start_time,:left_result,:right_team_time,:right_s tart_time,:right_result)"


Is this the intended behavior?

I can write a little code to parse the SQL returned by lastQuery() and replace the placeholders with the values from boundValues(), but thought I'd ask in case I'm missing something already in the class.
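Something along these lines is what I'd write -- an untested sketch that assumes named placeholders and does only naive quoting/escaping:

#include <QSqlQuery>
#include <QVariant>
#include <QMap>
#include <QString>

// Rebuild an executable SQL string from the prepared statement and its
// bound values. Sketch only: the quoting is naive, and a placeholder that
// is a prefix of another (e.g. :id and :id2) would be replaced incorrectly.
QString substitutePlaceholders(const QSqlQuery &query)
{
    QString sql = query.lastQuery(); // text with :placeholders

    const QMap<QString, QVariant> bound = query.boundValues();
    for (QMap<QString, QVariant>::const_iterator it = bound.constBegin();
         it != bound.constEnd(); ++it) {
        QVariant v = it.value();
        QString literal;
        if (v.isNull())
            literal = "NULL";
        else if (v.type() == QVariant::Int || v.type() == QVariant::Double)
            literal = v.toString();
        else
            literal = "'" + v.toString().replace("'", "''") + "'";
        sql.replace(it.key(), literal); // key is e.g. ":event_id"
    }
    return sql;
}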

ChrisW67
17th January 2014, 06:17
This "placeholders in the original query are replaced with their bound values to form a new query." only occurs if "a prepared query with placeholders is executed on a DBMS that does not support it". Otherwise, as the docs say, this generally returns the same as lastQuery() because Qt never builds a query string with the values replaced in that case.


I can write a little code to parse the SQL returned by lastQuery() and replace the placeholders with the values from boundValues(), but thought I'd ask in case I'm missing something already in the class.
Why would you want to do that? You can certainly make a much better error or log message than to simply dump the equivalent raw SQL (which you will have to be careful to escape correctly).

davethomaspilot
17th January 2014, 17:34
I don't want a human-readable message. I wanted the actual SQL command with the values substituted, so it could be used directly to insert into another database.

I just wrote the code to build it, using the SQL from the "prepare" and the key/value pairs from the bound values. It's encapsulated in XML and then parsed on the remote server that uses it.

It all works -- I just thought having the SQL with the values already substituted would be simpler than this additional complexity.
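Roughly, the serialization looks like this (simplified sketch; the element names are illustrative, not the exact format I used):

#include <QSqlQuery>
#include <QVariant>
#include <QMap>
#include <QString>
#include <QXmlStreamWriter>

// Serialize the prepared SQL plus its bound values so the remote side
// can re-prepare the statement and re-bind the same values.
QString queryToXml(const QSqlQuery &query)
{
    QString xml;
    QXmlStreamWriter w(&xml);
    w.writeStartDocument();
    w.writeStartElement("query");
    w.writeTextElement("sql", query.lastQuery()); // text with :placeholders

    w.writeStartElement("bindings");
    const QMap<QString, QVariant> bound = query.boundValues();
    for (QMap<QString, QVariant>::const_iterator it = bound.constBegin();
         it != bound.constEnd(); ++it) {
        w.writeStartElement("bind");
        w.writeAttribute("name", it.key()); // e.g. ":event_id"
        w.writeCharacters(it.value().toString());
        w.writeEndElement(); // bind
    }
    w.writeEndElement(); // bindings

    w.writeEndElement(); // query
    w.writeEndDocument();
    return xml;
}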

Thanks,

Dave Thomas

wysota
18th January 2014, 02:20
I think it would make more sense to use some standard replication scheme.

E.g.: http://dev.mysql.com/doc/refman/5.0/en/replication.html

davethomaspilot
18th January 2014, 15:44
Thanks for the tip!

I'd seen the chapter on replication in the MySQL manual and thought it might be relevant, and was going to read it, but I've been too busy coding! :)

Since you suggest replication might be a better way to go, I've pasted below a question I posted on another forum a few days ago. It gives more detail about what I'm trying to do. I got only one very high-level reply.

One complication is that my web-hosting service does not allow remote connections to a database hosted on their servers. I considered moving to a service that does, but I suspect that restriction is not uncommon. So, rather than make remote database connections a requirement, I wrote some PHP that runs on the remote server and issues the queries. The PHP gets the queries via HTTP POSTs carrying XML that encapsulates the database credentials and the SQL commands to be issued.
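The client side of the post is simple; a minimal sketch (the URL is a placeholder, and error handling is omitted):

#include <QNetworkAccessManager>
#include <QNetworkRequest>
#include <QNetworkReply>
#include <QObject>
#include <QUrl>

// Post the XML payload to the PHP script that replays the queries.
// "http://example.com/replay.php" stands in for the real URL.
void postQueries(QNetworkAccessManager *nam, const QString &xml)
{
    QNetworkRequest request(QUrl("http://example.com/replay.php"));
    request.setHeader(QNetworkRequest::ContentTypeHeader, "text/xml");
    QNetworkReply *reply = nam->post(request, xml.toUtf8());
    QObject::connect(reply, SIGNAL(finished()), reply, SLOT(deleteLater()));
}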

That's what I've just gotten working, but if there's a more typical way of doing what I want, I'm all about that!

I'll study the replication alternative--thanks!

Dave

Post to other forum:

------------------------------------------------------------------------------------------------

Currently, the database is local -- on the machine running the application. I'm now scaling up to have multiple instances of the application running on several machines.

It's necessary to get all the data from the various clients into a single database that's used by an Apache/MySQL/PHP setup, which builds and updates web pages when the application does SQL inserts and deletes. It's desirable to have the data in a single database immediately after the applications do SQL transactions (call it "day of"), but that's not absolutely required.

The venues where the application will be run don't always have internet connectivity. And when they do have connectivity, it's often unreliable -- maybe only cell-phone access with a weak signal.

So, I'm trying to devise a strategy that will always allow the application to run and save data locally, regardless of whether an external connection exists. If a connection doesn't exist, local data could be sent to the remote server(s) at a later time when a reliable connection is available.

I see three scenarios:

1) No connectivity of any kind "day of". The application should do inserts to the local database. Data is uploaded to the web at a later time.

2) Connectivity via wireless LAN is available, but that network isn't connected to the internet. The wireless LAN could go down, but the application still needs to keep working. Data is uploaded to a common server later, when an internet connection is available.

3) A connection to the internet is available. But, like the wireless LAN, I don't want it to be a dependency for using the application "day of".

Seems like this is probably a common situation, so I thought I'd ask for input on the best way to do it. Here's what I'm thinking so far:

1) Make sure inserts that use an auto-incremented id from a prior insert are included in a transaction with those inserts.
2) Check for success of every transaction; on failure, save the SQL text to a file for later use when server connectivity becomes available (a sketch of this follows below).

So, I'm planning on having connections to up to three databases: a local one, one on a local network, and one on the internet. The latter two will have associated files holding the SQL commands for all the transactions that didn't succeed.
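In rough Qt terms, the per-transaction logic I have in mind (untested sketch; queueToFile() and runOrQueue() are hypothetical helpers):

#include <QSqlDatabase>
#include <QSqlQuery>
#include <QFile>
#include <QTextStream>
#include <QStringList>

// Hypothetical fallback: append the statements to a replay file for this
// target database, to be re-sent when connectivity returns.
void queueToFile(const QString &path, const QStringList &statements)
{
    QFile f(path);
    if (f.open(QIODevice::Append | QIODevice::Text)) {
        QTextStream out(&f);
        foreach (const QString &s, statements)
            out << s << "\n";
    }
}

// Run a group of statements as one transaction (so inserts that depend on
// an auto-incremented id stay together); on any failure, roll back and
// queue the whole group for later replay.
bool runOrQueue(QSqlDatabase db, const QStringList &statements,
                const QString &replayFile)
{
    if (!db.transaction()) {
        queueToFile(replayFile, statements);
        return false;
    }
    QSqlQuery q(db);
    foreach (const QString &s, statements) {
        if (!q.exec(s)) {
            db.rollback();
            queueToFile(replayFile, statements);
            return false;
        }
    }
    if (!db.commit()) {
        db.rollback();
        queueToFile(replayFile, statements);
        return false;
    }
    return true;
}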

Added after 1 hour 47 minutes:

Getting off topic, so no problem if I should "cease and desist", but to follow up on Wysota's comment....

I spent about an hour reading about MySQL replication -- so I don't know much. But besides seeming really complex, I don't think it fits my usage scenario very well:


one-way, asynchronous replication, in which one server acts as the master, while one or more other servers act as slaves.

I need several platforms, each running my application and writing to its own database when there's no connectivity. I think that means each platform would need to run both a master and a slave server, since only the master writes to the database and then propagates the changes to the slaves?

Then, when connectivity is established, all but one of the connected masters become slaves. The remaining master applies the database changes and then replicates them to all the slaves.

Have I got it right? Is this a common implementation?

My application is relatively simple. Inserts would occur at a rate of a few (<5) every few minutes. Multiple clients (<10) are likely to do their inserts nearly simultaneously, since they are in response to events seen simultaneously by all the clients (a race ending).

It's not a lot of data either -- just a "burst" of a few kilobytes, at a frequency of no more than about one burst per minute.

I just want to get the data from the separate databases into a single database, as quickly as connectivity permits, with minimal interaction with the application user.

Thanks,

Dave Thomas

davethomaspilot
21st January 2014, 13:12
This is definitely the way to go: use MySQL replication rather than "rolling your own".

I needed to read the docs a couple of times to get comfortable with the configuration process. It sounded much more complicated than it really is.

Thanks for the good advice!

It works great on servers that I can admin. Now I either need to change web-hosting services to get one that will allow me to configure a MySQL server, or figure out how to encapsulate the equivalent of the binlog files into something I can send with an HTTP POST.

Thanks!

Dave Thomas