Question or problem about Python programming:

I would like to send a large pandas.DataFrame to a remote server running MS SQL. The way I do it now is by converting a data_frame object to a list of tuples and then sending it away with pyODBC's executemany() function. It goes something like this:

cursor.executemany(sql_statement, list_of_tuples)

I then started to wonder if things could be sped up (or at least made more readable) by using the data_frame.to_sql() method. I came up with the following solution:

import sqlalchemy as sa
engine = sa.create_engine("mssql+pyodbc:///?odbc_connect=%s" % cnxn_str)
data_frame.to_sql(table_name, engine, index=False)

Now the code is more readable, but the upload is at least 150 times slower…
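For reference, here is a minimal sketch of the executemany() path described above. The sample DataFrame, table name, and column names are placeholders (the real data and connection string come from elsewhere); the commented-out pyodbc part shows where cursor.fast_executemany = True (available in pyodbc 4.0.19+) can batch the round-trips that otherwise make row-by-row inserts slow:

```python
import pandas as pd

# Placeholder frame standing in for the real data.
data_frame = pd.DataFrame({"id": [1, 2, 3], "name": ["a", "b", "c"]})

# Convert the DataFrame to a list of plain tuples, as in the question.
list_of_tuples = [
    tuple(row) for row in data_frame.itertuples(index=False, name=None)
]

# With a live connection the insert would look like this (not run here;
# cnxn_str and my_table are placeholders):
# import pyodbc
# cnxn = pyodbc.connect(cnxn_str)
# cursor = cnxn.cursor()
# cursor.fast_executemany = True  # batch parameters instead of one round-trip per row
# cursor.executemany(
#     "INSERT INTO my_table (id, name) VALUES (?, ?)", list_of_tuples
# )
# cnxn.commit()
```

SQLAlchemy's mssql+pyodbc dialect also accepts create_engine(..., fast_executemany=True), which lets data_frame.to_sql() benefit from the same batching.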