Friday, 25 July 2014

What is Informatica and its Basic Interview Questions | Quontra Solutions

Introduction to Informatica

Informatica is a tool that supports all the steps of the Extraction, Transformation and Load (ETL) process. Nowadays, Informatica is also used as a data integration tool.
Informatica is an easy-to-use tool. It has a simple visual interface, much like forms in Visual Basic. You just drag and drop different objects (known as transformations) and design the process flow for data extraction, transformation and load. These process flow diagrams are known as mappings. Once a mapping is made, it can be scheduled to run as and when required. In the background, the Informatica server takes care of fetching data from the source, transforming it, and loading it into the target systems/databases.
The important components of Informatica are:
PowerExchange
PowerCenter
PowerCenter Connect
PowerChannel
Metadata Exchange
PowerAnalyzer
SuperGlue
Components of Informatica Client
Repository Manager: Manages the Informatica repository from the client machine.
Designer: Helps you create source definitions, target definitions, and transformations to build a mapping.
Workflow Manager: Allows you to create workflows and tasks and to run the mappings created in the Designer.
Workflow Monitor: Allows you to monitor workflows and tasks.

Basic Interview Questions on Informatica:

What is Data Integration?
Data Integration is the process of combining data residing at different sources and providing the user with a unified view of that data.
Explain PowerCenter Repository?
The repository consists of database tables that store metadata. Metadata describes different types of objects, such as mappings or transformations, that you can create using the PowerCenter Client tools. The Integration Service uses repository objects to extract, transform and load data. The repository also stores administrative information such as user names, passwords, permissions and privileges. When any task is performed through a PowerCenter Client application, such as creating users, analyzing sources, developing mappings or mapplets, or creating workflows, metadata is added to the repository tables.
What is a Mapping?
A mapping is a set of source and target definitions linked by transformation objects that define the rules for data transformation. Mappings represent the data flow between sources and targets. When the Integration Service runs a session, it uses the instructions configured in the mapping to read, transform, and write data.
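For example, a minimal mapping (the object names here are illustrative) might be:

Source (EMPLOYEES) -> SQ_EMPLOYEES (Source Qualifier) -> EXP_CALC (Expression) -> TGT_EMPLOYEES (Target)

Each arrow represents links between transformation ports; the Expression transformation applies the row-level rules before the rows are written to the target.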
What is a mapplet?
A mapplet is a reusable object that contains a set of transformations and enables you to reuse that transformation logic in multiple mappings.
While importing a relational source definition from a database, what source metadata do you import?
Source name
Database location
Column names
Data types
Key constraints
In how many ways can you update a relational source definition, and what are they?
Two ways:
1. Edit the definition
2. Reimport the definition
Where should you place the flat file to import the flat file definition into the Designer?
Place it in a local folder.
To provide support for mainframe source data, which files are used as source definitions?
COBOL files.
Which transformation do you need when using COBOL sources as source definitions?
The Normalizer transformation is used to normalize the data, since COBOL sources often consist of denormalized data.

Flat file header row, footer row and detail rows to multiple tables | Quontra Solutions

Assume that we have a flat file with a header row, a footer row and detail rows. Let's see how to load the header row into one table, the footer row into a second table and the detail rows into a third table using transformations only.
First, pass the data from the source qualifier to an Expression transformation. In the Expression transformation, assign a unique number to each row (assume an exp_count port). Then pass the data from the Expression to an Aggregator. In the Aggregator transformation, do not check any group-by port, so the Aggregator returns the last row as its default output (assume an agg_count port).
Now pass the data from both the Expression and the Aggregator to a Joiner transformation. In the Joiner, select the ports from the Aggregator as the master and the ports from the Expression as the detail. Set the join condition on the count ports and select the join type as master outer join. Pass the Joiner output to a Router transformation and create two groups in the Router. For the first group, give the condition exp_count = 1, which yields the header row. For the second group, give the condition exp_count = agg_count, which yields the footer row. The default group yields the detail rows. A sketch of the port setup follows.
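A minimal sketch of the port setup (all port names are illustrative; a variable-port counter is one common way to number the rows):

Expression transformation:
  v_count (variable port) = v_count + 1
  exp_count (output port) = v_count

Aggregator transformation (no group-by port checked):
  agg_count (output port) = exp_count  (the last row passes through, i.e. the total row count)

Joiner transformation (master outer join):
  join condition: agg_count = exp_count

Router transformation groups:
  HEADER group filter: exp_count = 1
  FOOTER group filter: exp_count = agg_count
  DEFAULT group: the remaining detail rows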

Normalizer Transformation Error – Informatica

The Normalizer transformation is used to convert data in multiple columns into multiple rows. Basically, the Normalizer transformation converts denormalized data in a table into a normalized form, as in the example below.
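For example (with illustrative column names), a denormalized input row

  year, q1_sales, q2_sales, q3_sales, q4_sales
  2013,      100,      200,      300,      400

is pivoted by the Normalizer into four normalized output rows:

  year, quarter, sales
  2013,       1,   100
  2013,       2,   200
  2013,       3,   300
  2013,       4,   400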
Normalizer Transformation Error
Getting the following error for the Normalizer transformation in a mapping when pivoting columns into rows:
TT_11054 Normalizer Transformation: Initialization Error: [Cannot match OFOSid with IFOTid.]
How to fix the Normalizer Transformation Error?
Solution:
Follow the steps below to avoid this error.
  1. Make sure there are no unconnected input ports to the Normalizer transformation.
  2. If the Normalizer has an OCCURS in it, make sure the number of input ports matches the number of OCCURS.
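For example, if the COBOL copybook contains an illustrative field such as 05 QTR-SALES OCCURS 4 TIMES PIC 9(7)V99., the corresponding Normalizer column has an occurs value of 4, and exactly four input ports must be connected to it, one per occurrence.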

Creating a Non Reusable Object from Reusable Object | Quontra Solutions

Q) How do you create a non-reusable transformation, session or task from a reusable transformation, session or task?
I still remember my first project, in which I created many reusable transformations and developed a mapping. My project lead reviewed the code and told me that I had created unnecessary reusable transformations and should change them to non-reusable ones. I created the non-reusable transformations and re-implemented the entire logic, which took me almost a whole day. Many new Informatica developers still make the same mistake and re-implement the entire logic.
I later found an easy way to create a non-reusable transformation from a reusable one. Follow the steps below to create a non-reusable transformation, session or task from a reusable transformation, session or task in Informatica:
  1. Go to the Navigator on the left side.
  2. Select the reusable transformation, session or task that you want to convert to non-reusable with the mouse.
  3. Drag the object (transformation/session/task) to the workspace and, just before releasing it on the workspace, hold down the Ctrl key and then release the object.
Now you are done creating a non-reusable transformation, session or task.

Informatica Performance Improvement Tips | Quontra Solutions

The following standards/guidelines can improve overall performance:
  • Use a Source Qualifier join (rather than a Joiner transformation) when the source tables reside in the same schema.
  • Make use of the Source Qualifier “Filter” property if the source type is relational.
  • If subsequent sessions do lookups on the same table, use a persistent cache in the first session; the data remains in the cache and is available to the subsequent sessions.
  • Use flags as integers, as integer comparison is faster than string comparison.
  • Use the table with the smaller number of records as the master table in joins.
  • While reading from flat files, define the appropriate data types instead of reading everything as strings and converting later.
  • Connect only the ports that subsequent transformations require; check whether unconnected ports can be removed.
  • Suppress the generated ORDER BY clause by placing ‘--’ at the end of the query in Lookup transformations (see the sketch after this list).
  • Minimize the number of Update Strategy transformations.
  • Group by simple columns in transformations such as the Aggregator and Source Qualifier.
  • Use a Router transformation in place of multiple Filter transformations.
  • Turn off verbose logging when moving workflows to the production environment.
  • For large volumes of data, drop indexes before loading and recreate them after the load.
  • For large volumes of records, use bulk loading and increase the commit interval to a higher value.
  • Set ‘Commit on Target’ in the sessions.
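A sketch of the ORDER BY suppression trick from the list above (table and column names are illustrative). The Integration Service automatically appends its own ORDER BY over all lookup ports to the lookup query, so you write a shorter ORDER BY yourself and end the override with two dashes:

SELECT EMPLOYEE_ID, EMPLOYEE_NAME
FROM EMPLOYEES
ORDER BY EMPLOYEE_ID --

The trailing ‘--’ turns the ORDER BY clause that the Integration Service appends into a SQL comment, so only the shorter ORDER BY you wrote is executed.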

Case Converter Transformation in informatica | Quontra Solutions

The Case Converter transformation is a passive transformation used to format data into consistent character cases. It is used to maintain data quality.
The predefined case conversion types are uppercase, lowercase, toggle case, title case and sentence case.
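For example, an illustrative input string “informatica POWER center” converts as follows:

  Uppercase:     INFORMATICA POWER CENTER
  Lowercase:     informatica power center
  Toggle case:   INFORMATICA power CENTER
  Title case:    Informatica Power Center
  Sentence case: Informatica power center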
Reference tables can also be used to control the case conversion. Use the “Valid” column in the reference table to change the case of input strings. Use reference tables only when the case conversion type is title case or sentence case.
Case Strategy Properties:
You can create multiple case conversion strategies. Each strategy uses a single conversion type. Configure the following properties on the Strategies view of the Case Converter transformation:
Reference Tables: Used to apply the capitalization format specified by a reference table. Reference tables work only if the case conversion type is title case or sentence case. If a reference table match occurs at the start of a string, the next character in that string changes to uppercase. For example, if the input string is vieditor and the reference table has an entry for Vi, the output string is ViEditor.
Conversion Types: The conversion types are uppercase, lowercase, toggle case, title case and sentence case. The default conversion type is uppercase.
Leave uppercase words unchanged: Overrides the chosen capitalization for uppercase strings.
Delimiters: Specifies how capitalization functions work for title case conversion. For example, choose a colon as a delimiter to transform “james:bond” to “James:Bond”. The default delimiter is the space character.

SQL Transformation in Informatica | Quontra Solutions

The SQL transformation is a connected transformation used to process SQL queries in the midstream of a pipeline. We can insert, update, delete and retrieve rows from the database at run time using the SQL transformation.
The SQL transformation processes external SQL scripts or SQL queries created in the SQL editor. You can also pass database connection information to the SQL transformation as input data at run time.
The following SQL statements can be used in the SQL transformation:
  • Data Definition statements (CREATE, ALTER, DROP, TRUNCATE, RENAME)
  • Data Manipulation statements (INSERT, UPDATE, DELETE, MERGE)
  • Data Retrieval statement (SELECT)
  • Data Control Language statements (GRANT, REVOKE)
  • Transaction Control statements (COMMIT, ROLLBACK)

Configuring SQL Transformation

The following options can be used to configure an SQL transformation
  • Mode: SQL transformation runs either in script mode or query mode.
  • Active/Passive: By default, the SQL transformation is an active transformation. You can configure it as a passive transformation.
  • Database Type: The type of database that the SQL transformation connects to.
  • Connection type: You can pass database connection information or you can use a connection object.
We will see how to create an SQL transformation in script mode and in query mode, and how to pass a dynamic database connection, with examples.

Creating SQL Transformation in Query Mode

Query Mode: The SQL transformation executes a query that you define in the query editor. You can pass parameters to the query to define dynamic queries. The SQL transformation can output multiple rows when the query has a SELECT statement. In query mode, the SQL transformation acts as an active transformation.
You can create the following types of SQL queries:
Static SQL query: The SQL query statement does not change, but you can pass parameters to it. The Integration Service prepares the query once and runs the same query for every input row.
Dynamic SQL query: The SQL query statement itself can change along with the data. The Integration Service prepares the query for each input row and then runs it. Sketches of both types follow.
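A sketch of both query types (table, column and port names are illustrative). In a static query you bind input ports with question marks; in a dynamic query you substitute port values with tildes.

Static query with parameter binding, where the EMP_ID input port supplies a value for each row:

  SELECT EMP_NAME, SALARY
  FROM EMPLOYEES
  WHERE EMPLOYEE_ID = ?EMP_ID?

Dynamic query with string substitution, where the TABLE_PORT input port supplies the table name, so the statement itself changes from row to row:

  DELETE FROM ~TABLE_PORT~ WHERE LOAD_DATE < SYSDATE - 30

For a fully dynamic query, a single port such as QUERY_PORT can supply the entire SQL statement, and the query editor contains only ~QUERY_PORT~.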