Its user-friendly capabilities and easy-to-learn interface make Microsoft SQL Server one of the most widely used database management systems (DBMS). Nevertheless, the system has two significant drawbacks that may prompt users to look for a replacement:
- Strict licensing policies
- High licensing costs, especially for large databases
When choosing the target system for database migration, it is reasonable to consider open-source databases, as they can cut down the total cost of ownership. Three open-source options lead this category: SQLite, MySQL, and PostgreSQL.
SQLite is a file-based, self-contained database system. It is a good choice for embedding a database into an application, but it is not suited to multi-user environments or large databases.
MySQL, by contrast, is more robust. It offers features typical of a sophisticated RDBMS: scalability, security, and multiple storage engines for different purposes. A few of its disadvantages:
- Limited support for full-text search
- Does not implement the full SQL standard
- Limited support for concurrent writes in some storage engines
PostgreSQL is a full-featured RDBMS that combines relational capabilities with built-in object-oriented database functionality. This makes it the best option where data integrity and high reliability are required.
To migrate a database from SQL Server to PostgreSQL, the following steps are required:
- export MS SQL table definitions
- convert them to the PostgreSQL format
- load the results to a PostgreSQL server
- export SQL Server data into CSV files
- convert data into the PostgreSQL format
- load into the target database.
Table definitions can be extracted from a SQL Server database using one of the following options, depending on the DBMS version:
- For Microsoft SQL Server versions before 2012: right-click the database in Management Studio and select the Tasks > Generate Scripts menu item. Make sure the “data” option is not selected (it is off by default).
- For Microsoft SQL Server 2012 and later: in Management Studio, right-click the database and select the Tasks > Generate Scripts menu item. Then uncheck the “data” option on the “Set scripting options” tab.
Before you proceed to the next step, update the resulting SQL script using the checklist below:
- remove SQL Server-specific statements that have no PostgreSQL equivalents
- replace square brackets around database object names with double quotes
- remove square brackets around type names
- replace the default SQL Server schema name “dbo” with the PostgreSQL equivalent “public”
- remove optional keywords not supported by the target DBMS (e.g. “WITH NOCHECK”, “CLUSTERED”)
- remove all references to filegroups (e.g. “ON PRIMARY”)
- replace “INT IDENTITY(…)” types with “SERIAL”
- update unsupported data types (e.g. “DATETIME” becomes “TIMESTAMP”, “MONEY” becomes “NUMERIC(19,4)”)
- replace the SQL Server batch terminator “GO” with the PostgreSQL statement terminator “;”
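Some of the mechanical rewrites from the checklist above can be automated. The sketch below is a minimal illustration using Python regular expressions (the function and rule names are my own, not from any migration tool); real-world DDL has edge cases, such as brackets inside string literals, that this simple pass does not handle:

```python
import re

# Ordered (pattern, replacement) pairs covering a few of the simpler
# checklist items; apply them top to bottom.
RULES = [
    (re.compile(r"\[([^\]]+)\]"), r'"\1"'),            # [name] -> "name"
    (re.compile(r'"dbo"\.'), '"public".'),             # dbo schema -> public
    (re.compile(r"\bINT\s+IDENTITY\s*\([^)]*\)", re.I), "SERIAL"),
    (re.compile(r"\bDATETIME\b", re.I), "TIMESTAMP"),
    (re.compile(r"\bMONEY\b", re.I), "NUMERIC(19,4)"),
    (re.compile(r"^\s*GO\s*$", re.I | re.M), ";"),     # batch terminator
]

def mssql_ddl_to_postgres(script: str) -> str:
    """Apply the regex rewrites in order and return the converted DDL."""
    for pattern, replacement in RULES:
        script = pattern.sub(replacement, script)
    return script
```

The remaining checklist items (filegroup references, unsupported keywords, vendor-specific statements) vary too much between scripts to capture with one-line patterns and are best reviewed by hand.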
The next step is to export the data, which can be done with SQL Server Management Studio:
- Right-click on database and select Tasks > Export Data menu item
- Follow the wizard's steps, selecting “Microsoft OLE DB Provider for SQL Server” as the data source and “Flat File Destination” as the destination.
Once the export completes, the data will be available in the destination file in comma-separated values (CSV) format.
A workaround is required if some of the tables contain binary data. On the “Specify Table Copy or Query” wizard page, choose the “Write a query to specify the data to transfer” option, then enter the following query on the “Provide a Source Query” wizard page:
select non-binary-field1, non-binary-field2, cast( master.sys.fn_varbintohexstr( cast( binary-field-name as varbinary(max))) as varchar(max)) as binary-field-name from table-name
Note that the query may hang indefinitely: this method is not suitable for large binary values, roughly 1 MB and above.
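On the PostgreSQL side, the hex strings produced by fn_varbintohexstr (e.g. 0xDEADBEEF) must be converted back into a form the bytea type accepts. A minimal Python sketch, assuming the CSV field holds a 0x-prefixed hex string (the helper name is mine):

```python
def mssql_hex_to_bytea_literal(value: str) -> str:
    """Convert an SQL Server '0x...' hex string (as produced by
    fn_varbintohexstr) into PostgreSQL bytea hex input, e.g. '\\xdeadbeef'."""
    if not value.lower().startswith("0x"):
        raise ValueError("expected a 0x-prefixed hex string")
    return "\\x" + value[2:].lower()
```

Running each binary column of the exported CSV through a function like this before loading lets COPY ingest the values directly into bytea columns.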
Use PostgreSQL “COPY” command to load resulting CSV files into the target tables as follows:
COPY <table name> FROM <path to csv file> DELIMITER ',' CSV;
Try the “\COPY” command if you receive a “Permission denied” error message with the “COPY” command.
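When many tables are involved, the COPY invocations can be generated rather than typed by hand. A small Python sketch (the function name is illustrative) that builds either the server-side COPY statement or the client-side psql \copy line, which avoids the server-filesystem permission problem mentioned above:

```python
def copy_statement(table: str, csv_path: str, client_side: bool = False) -> str:
    """Build a COPY (server-side) or \\copy (psql client-side) command
    that loads a CSV file into the given table."""
    if client_side:
        # \copy is parsed by psql itself and reads the file on the client
        return f'\\copy "{table}" from \'{csv_path}\' delimiter \',\' csv'
    # server-side COPY: the file path must be readable by the server process
    return f'COPY "{table}" FROM \'{csv_path}\' DELIMITER \',\' CSV;'
```

Feeding the generated lines to psql (e.g. via a script file) loads all tables in one pass.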
The sequence of actions above shows that database migration takes considerable effort and is often an advanced procedure.
Manual conversion is expensive and time-consuming, and may result in data loss or corruption leading to inaccurate results. There are modern solutions that can transform and migrate data between the two DBMS in a couple of clicks. One of them is the SQL Server to PostgreSQL migration tool by Intelligent Converters, a software vendor specializing in database conversion and synchronization since 2001.
Connecting directly to both the source and target databases, the tool delivers high-performance conversion without ODBC drivers or other middleware. It also supports scripting, automation, and scheduling of conversions.