Wednesday, March 26, 2014

SAP BO Universe and Report Migration Part 2

This post continues my previous topic.


Reports with Multiple Data Providers (Universe and External Source) and Merged Dimensions:-

·         If your report has merged dimensions and automatic merging is off, they will disappear after the migration.
·         If automatic merging is on, merged dimensions will disappear and be automatically re-created. This option does not apply to external sources, so a merged dimension that includes objects from both a universe and an external source will not be re-created; you have to recreate it manually.
·         Variables, breaks, filters and so on created on a merged dimension will be destroyed.

·         Variables that use a merged dimension will be changed into some weird characters.


Reports with Detail Variables whose associated dimension uses a merged Dimension:-

                You will receive an error message saying "an error has occurred" if any detail variable has an associated dimension that uses a merged dimension; the report will not migrate.



 Workaround:-
                   Edit the detail variable and change its associated dimension to a dimension from a single data provider, then do the conversion. After the report converts successfully, change the associated dimension back to the merged dimension.


External Source Data Provider issue:-
                                The external data provider internally keeps a reference to the universe, so you need to re-point it (simply refresh) to make a connection with the new universe; otherwise you will receive an error message.




Result from another query:-
                                This option will throw an error message. It is better to remove it before the universe migration and re-point it after a successful migration.


Report Variables using merged Dimensions:-
                                As you have learned by now, report variables that use merged dimensions will be destroyed and have to be recreated after a successful migration.


Workaround:-
Before the report migration, edit the variable and replace its formula with a dummy one so that the report structure stays as is; update the variable after the migration.


Tuesday, March 25, 2014

How to set up a high-speed data load from an SSIS package to a Netezza database

1.  On the SSIS server, install the NPS OLE DB driver by running:
nzoledbsetup.exe (for 32-bit systems)
OR
nzoledbsetup64.exe (for 64-bit systems).


2.  In the SSIS connection manager, create connections to your data source(s) and target.  Use the Netezza OLE DB provider when defining the connection to Netezza.

3.  In SSIS, define the data source as per normal (flat file, ODBC source, OLE DB source, etc.).

4.  In SSIS use the “OLE DB Destination” object to send data to the target, Netezza.

5.  In the OLE DB Destination object properties, set the following parameter values so that SSIS invokes Netezza's high-speed loader properly.  This dialog box is shown in the screenshot below.
Set AccessMode                  = OpenRowSet using FastLoad
Set AlwaysUseDefaultCodePage    = true
Set FastLoadMaxInsertCommitSize = 0



SAP BO Universe and Report Migration Part 1

Migrate Universe from one database to another
·         First back up the original universe in case something goes wrong
·         Launch Designer and import the universe from BOE
·         From the File menu select Save As and save under a suitable name
·         From the Tools menu select Connections and create a new connection pointing to the new database server. Note: you may already have such a connection for another universe; if so, reuse it
·         Once created, close the Connections dialog
·         From the File menu select Parameters
·         Change the connection to use the new database connection
·         Update the Description so that we can identify our updated universe, for example by increasing the version number. Click OK to save the changes
·         Select all tables, right click and select Rename Table
·         Update the owner of the tables to the new database owner
·         Run an Integrity Check. The integrity check should flag all objects and conditions that fail parsing. It should also pick up that a table has changed.
·         Fix any flagged issues, run the Integrity Check once more, and if all is well then we are done.
·         Finally, export the universe back to the repository.

Migrating reports from an existing universe to a new universe
If you want to switch a report to the new universe, do the following.

·         Open the report
·         Select the ‘Data Access’ Tab
·         Select the ‘Tools’ sub tab
·         Select ‘Change Source’
·         Select the Query to change
·         And set ‘Specify new data source’
·         Choose ‘Select data source’ from the drop-down
·         Select ‘Universe’ and select the desired universe from the list.
·         If you have more than one query in your report then you may also want to tick the checkbox for ‘Apply changes in all queries sharing the same data source’
·         Click ‘Next’ and ‘Finish’.
·         The query will then refresh based on the newly specified data source.


Reports with Multiple Data Providers (Universe) and Merged Dimensions:-

·         If your report has merged dimensions and automatic merging is off, they will disappear after the migration.
·         If automatic merging is on, merged dimensions will disappear and be automatically re-created.
·         Variables, breaks, filters and so on created on a merged dimension will be destroyed.

·         Variables that use a merged dimension will be changed into some weird characters.


Reports with Detail Variables whose associated dimension uses a merged Dimension:-

                You will receive an error message saying "an error has occurred" if any detail variable has an associated dimension that uses a merged dimension; the report will not migrate.



 Workaround:-
                   Edit the detail variable and change its associated dimension to a dimension from a single data provider, then do the conversion. After the report converts successfully, change the associated dimension back to the merged dimension.



An error has occurred error message:-

                                This error message can occur during conversion if you have:

·         Detail variables with an associated dimension that uses a merged dimension

·         Report condition objects that have a list of values associated with them

·         Query filter objects using a result from another query



Tuesday, March 18, 2014

Extracting SSIS Package Metadata

Retrieving the definitions of the SSIS Packages


SELECT  p.[name] AS [PackageName],
        [description] AS [PackageDescription],
        CASE [packagetype]
            WHEN 0 THEN 'Undefined'
            WHEN 1 THEN 'SQL Server Import and Export Wizard'
            WHEN 2 THEN 'DTS Designer in SQL Server 2000'
            WHEN 3 THEN 'SQL Server Replication'
            WHEN 5 THEN 'SSIS Designer'
            WHEN 6 THEN 'Maintenance Plan Designer or Wizard'
        END AS [PackageType],
        CASE [packageformat]
            WHEN 0 THEN 'SSIS 2005 version'
            WHEN 1 THEN 'SSIS 2008 version'
        END AS [PackageFormat],
        l.[name] AS [Creator],
        p.[createdate],
        CAST(CAST(packagedata AS VARBINARY(MAX)) AS XML) AS PackageXML
FROM    [msdb].[dbo].[sysssispackages] p
        JOIN sys.syslogins l
          ON p.[ownersid] = l.[sid]




Extracting connection strings from an SSIS Package





;WITH XMLNAMESPACES ('www.microsoft.com/SqlServer/Dts' AS pNS1,
                     'www.microsoft.com/SqlServer/Dts' AS DTS) -- declare XML namespaces
SELECT  c.name,
        SSIS_XML.value('./pNS1:Property[@pNS1:Name="DelayValidation"][1]',
                       'varchar(100)') AS DelayValidation,
        SSIS_XML.value('./pNS1:Property[@pNS1:Name="ObjectName"][1]',
                       'varchar(100)') AS ObjectName,
        SSIS_XML.value('./pNS1:Property[@pNS1:Name="Description"][1]',
                       'varchar(100)') AS Description,
        SSIS_XML.value('pNS1:ObjectData[1]/pNS1:ConnectionManager[1]/pNS1:Property[@pNS1:Name="Retain"][1]',
                       'varchar(MAX)') AS Retain,
        SSIS_XML.value('pNS1:ObjectData[1]/pNS1:ConnectionManager[1]/pNS1:Property[@pNS1:Name="ConnectionString"][1]',
                       'varchar(MAX)') AS ConnectionString
FROM    (SELECT id,
                CAST(CAST(packagedata AS VARBINARY(MAX)) AS XML) AS PackageXML
         FROM   [msdb].[dbo].[sysssispackages]) PackageXML
        CROSS APPLY PackageXML.PackageXML.nodes('/DTS:Executable/DTS:ConnectionManager') SSIS_XML (SSIS_XML)
        INNER JOIN [msdb].[dbo].[sysssispackages] c ON PackageXML.id = c.id

Monday, March 17, 2014

Netezza Interview Questions - 2

1. What are the partitioning methods available in netezza?
There are two partitioning methods available in netezza:
  • Random partitioning: Distributes the data randomly.
  • Hash Partitioning: Distributes the data on the specified columns.
2. Up to how many columns can you specify in the DISTRIBUTE ON clause?
You can specify up to four columns in the distribution clause.

3. If you do not specify any DISTRIBUTE ON clause while creating a table, what distribution does netezza use?
Netezza distributes the data on the first column using hash partitioning.
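
As a sketch of the difference (the table and column names here are hypothetical):

```sql
-- No DISTRIBUTE ON clause: netezza hashes on the first column (id here)
CREATE TABLE customers_default (id INTEGER, name VARCHAR(50));

-- Explicit hash distribution on a chosen column
CREATE TABLE customers_hash (id INTEGER, name VARCHAR(50))
DISTRIBUTE ON (id);

-- Explicit random (round-robin) distribution
CREATE TABLE customers_random (id INTEGER, name VARCHAR(50))
DISTRIBUTE ON RANDOM;
```

Being explicit is usually safer than relying on the first-column default, since the first column is not always a good distribution key.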

4. Can you update the columns used in distribution clause?
No, the column that is used in distribution clause cannot be used for updates.

5. What data types are most suited for the columns specified in distribution clause?
Integer

6. How do you redistribute a table?
Use Create Table As (CTAS) to redistribute the data in a table. While creating the new table specify the distribute on clause to distribute the data on the new columns.
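
A minimal CTAS redistribution might look like the following (table and column names are assumptions for illustration):

```sql
-- Rebuild the table, hashing rows on customer_id instead of the old key
CREATE TABLE sales_new AS
SELECT *
FROM   sales
DISTRIBUTE ON (customer_id);

-- Swap the tables once the copy is verified
DROP TABLE sales;
ALTER TABLE sales_new RENAME TO sales;
```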

7. If you do not specify any distribution clause, how will Create Table As (CTAS) distribute the rows?
CTAS will inherit the distribution from the original table.

8. How do you check whether the rows in a table are equally distributed across all SPUs?
To check the distribution of rows, run the following query:
SELECT datasliceid, COUNT(*) FROM <table name> GROUP BY datasliceid;
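
Building on that query, a sketch that reduces the per-dataslice counts to a single skew ratio (1.0 means perfectly even; the table name is a placeholder):

```sql
-- Ratio of the busiest data slice to the average slice
SELECT MAX(cnt)::NUMERIC(18,4) / AVG(cnt) AS skew_ratio
FROM   (SELECT datasliceid, COUNT(*) AS cnt
        FROM   <table name>
        GROUP  BY datasliceid) t;
```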

9. What is a collocated join?
When you join tables that are distributed on the same key and use those key columns in the join condition, each SPU in netezza works 100% independently of the others, as the required data is available locally. These joins are called collocated joins.
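
For example, assuming both tables were created with DISTRIBUTE ON (customer_id), the join below can run collocated (the table and column names are hypothetical):

```sql
-- orders and customers are both distributed on customer_id,
-- so each SPU joins its own slice of both tables locally
SELECT o.order_id, c.customer_name
FROM   orders o
JOIN   customers c
  ON   o.customer_id = c.customer_id;
```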

10. When does netezza redistribute a table and when does it broadcast a table?
Whenever a collocated join is not possible, netezza either redistributes or broadcasts a table. When the table is small, netezza broadcasts it; otherwise netezza redistributes it.

11. How do you remove logically deleted records?
Whenever you delete a row in a table, it is not physically deleted. It is logically deleted by flagging the deletexid field in the table. The nzreclaim utility is used to remove the logically deleted records.

12. What is nzload?
The nzload utility is used to load data from a file into a table. It loads bulk data quickly while rejecting erroneous content.

13. What are the ways to unload data from a table into a file?
Create an external table.
Use the nzsql utility with the -o option.

14. What are the different ways to load data from a file into a table?
Use nzload to load the data from a file into a table.
Create an external table and then load the target table from the external table.
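
An external-table round trip might be sketched as follows (the file path, delimiter, and table names are assumptions):

```sql
-- Unload: an external table writes to the flat file when you insert into it
CREATE EXTERNAL TABLE ext_sales SAME AS sales
USING (DATAOBJECT('/tmp/sales.dat') DELIMITER '|');
INSERT INTO ext_sales SELECT * FROM sales;

-- Load: selecting from the external table reads the file back
INSERT INTO sales_copy SELECT * FROM ext_sales;
```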

15. How does netezza update a row in a table?
Netezza logically deletes the original row by flagging the deletexid column with the current transaction id and inserts a new row with the updated values.