Wednesday, 16 October 2013
UPDATE: All keys have now been sent to the winners. Thanks to everyone who contributed.
Monday, 27 May 2013
ora2pg is a tool that has been around for a while now, and a lot of man hours have clearly gone into its development, so I was keen to take a look.
Installing ora2pg is fairly simple. All you need are Perl DBI, Perl DBD::Oracle, Perl DBD::Pg and an Oracle Instant Client.
- Oracle Instant Client (requires OTN login) - http://www.oracle.com/technetwork/database/features/instant-client/index-097480.html - (grab sqlplus, basic and sdk binaries).
- Perl DBD Oracle - http://search.cpan.org/~pythian/DBD-Oracle-1.64/lib/DBD/Oracle.pm
- Perl DBD PostgreSQL - http://search.cpan.org/~turnstep/DBD-Pg-2.19.3/Pg.pm
- Perl DBI - http://dbi.perl.org/
Finally, the most important download is ora2pg:
Upload all relevant files and install:
For DBI and DBD Modules
tar zxvf filename.tar.gz
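After unpacking, each Perl module builds with the standard steps. A sketch (the tarball name is taken from the DBD::Oracle link above; note that DBD::Oracle needs ORACLE_HOME pointing at the Instant Client before Makefile.PL will succeed):

```shell
tar zxvf DBD-Oracle-1.64.tar.gz
cd DBD-Oracle-1.64
perl Makefile.PL          # for DBD::Oracle, set ORACLE_HOME first
make
make install              # usually run as root or via sudo
```

Repeat the same sequence for DBI and DBD::Pg.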
For SQL Client (file names may differ; the Instant Client downloads are zip archives, so use unzip rather than tar)
mkdir -p /instantclient
mv instantclient-basic_11_2.zip instantclient_11_2-sqlplus.zip instantclient_11_2-sdk.zip /instantclient
cd /instantclient
unzip instantclient-basic_11_2.zip (and likewise for the sqlplus and sdk archives)
save and exit.
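The "save and exit" above presumably refers to making the client environment permanent. A minimal sketch, assuming the client was unzipped into /instantclient (a hypothetical path — adjust to your layout):

```shell
# Hypothetical paths: point these at wherever you unzipped the client.
export ORACLE_HOME=/instantclient
export LD_LIBRARY_PATH=$ORACLE_HOME:$LD_LIBRARY_PATH
# To make this permanent, add the two lines above to ~/.bashrc,
# save and exit, then reload with: source ~/.bashrc
```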
ora2pg can be configured in many ways so the best thing to do is to have a look at the configuration file in /etc/ora2pg. You can write data output to a file, or straight into a PostgreSQL database.
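As a minimal sketch, a config for exporting one schema's tables to a file might contain directives like these (the connection details are placeholders; ORACLE_DSN, ORACLE_USER, ORACLE_PWD, SCHEMA, TYPE and OUTPUT are standard ora2pg.conf directives):

```
ORACLE_DSN      dbi:Oracle:host=oraserver;sid=ORCL
ORACLE_USER     scott
ORACLE_PWD      tiger
SCHEMA          MYSCHEMA
TYPE            TABLE
OUTPUT          output.sql
```

Switch OUTPUT for a PG_DSN connection string if you want to load straight into PostgreSQL.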
The one thing I like about this tool is its client encoding detection: most data goes straight from source to target without any issues.
Simply type: ora2pg... This reads in your config file and connects as appropriate. The one thing I would recommend is setting DEBUG=1 so you get the output to screen and see how far along your data migration is!
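A typical invocation, pointing at the shipped config (the path below is the default install location and may differ on your system):

```shell
# Read the config and run the migration; -c names the config file
ora2pg -c /etc/ora2pg/ora2pg.conf
```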
- ensure archive_mode is turned off: edit postgresql.conf, set archive_mode = off and comment out your archive_command, then restart your database.
- disable any triggers on underlying objects
- use the pg_restore -j option, for example: pg_restore -d testdatabase -j4 -v /backup/live.dump - This allows parallelism, and -j should be set to the number of cores that you would like to allocate to the pg_restore job.
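Putting the tips above together, a hedged sketch (database name, dump path and table name are placeholders; ALTER TABLE ... DISABLE TRIGGER ALL is the standard PostgreSQL way to switch a table's triggers off):

```shell
# Disable triggers on each affected table before loading data:
psql -d testdatabase -c 'ALTER TABLE mytable DISABLE TRIGGER ALL;'
# Restore with 4 parallel jobs (match -j to the cores you can spare):
pg_restore -d testdatabase -j 4 -v /backup/live.dump
# Re-enable the triggers afterwards:
psql -d testdatabase -c 'ALTER TABLE mytable ENABLE TRIGGER ALL;'
```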
Monday, 19 November 2012
Sunday, 28 October 2012
First of all, NoSQL databases are not attempting to replace RDBMSs outright, and that is a fact. They tend to work alongside relational databases, fulfilling requirements that are better suited to a more dynamic schema. You may have a module within your application that simply serves a huge volume of reads; NoSQL is good for that. You may have an application within your stack that has grown so quickly that it is no longer feasible to scale vertically; NoSQL is good for that (sharding). You may want to take advantage of one of NoSQL's fancier features, such as map-reduce or geospatial indexing. Hey, it's good for that too!
If you have a spare hour, take a look at: http://www.slideshare.net/tackers/why-we-chose-mongodb-for-guardiancouk . It's an interesting account of how "The Guardian" have recently replaced parts of their RDBMS infrastructure with MongoDB. They seem to be going through phases, taking pieces out at a time as requirements dictate, with a set of internally written APIs covering the front line of requests.
I can see more tech companies going down this route in the not so distant future. I'll be following the progress all of the way.
Thursday, 14 June 2012
Pros: Helpful examples, Concise, Well-written
Best Uses: Intermediate, Novice
Describe Yourself: Developer, Designer, Sys Admin
With big data systems becoming the standard for our industry today, it was only a matter of time before these two products were married together.
The book is nicely written, with a concise statement on each subject. The examples are just enough to satisfy and leave the reader to explore further, which is my personal preference. A perfect read for someone starting out with big data and NoSQL databases in mind.
Sunday, 29 April 2012
I kind of see what they're suggesting, and agree that whilst there might not be as much "unique" software being produced in the future, data is growing faster than ever, and thus technological advances will always be required.
Once a man has all of his tools aka apps to accomplish the final task, I don't feel it stops there. There will always be companies and innovative individuals that will foresee the direction in the way the apps develop, providing improvements and likely infringing patents along the way.
Secondly, the data generated by all of this software is growing at rates we've never seen before. More flexible architectures will be introduced, as we're already seeing with Web 3.0. HTML5, jQuery and NoSQL are some of the areas becoming ever more popular in the future of development and data storage.
So although I disagree that we'll see a drop in standalone software for the foreseeable future, we will continue to see growth in processing power, data storage and other exclusive ways of dealing with the mechanics that most end users never see or care about.