Wednesday, 16 July 2014

Solution Architecture

Solution architecture (practised within or outside enterprise architecture) is a combination of role, process and documentation that is intended to address specific problems and requirements, usually through the design of specific information systems or applications.
The term solution architecture can be used to mean either or both:
  • Documentation describing the structure and behaviour of a solution to a problem, or
  • A process for describing a solution and the work to deliver it.
The documentation is typically divided into broad views, each known as an architecture domain.
Where the solution architect starts and stops work depends on the funding model for the process of solution identification and delivery. For example, an enterprise may employ a solution architect on a feasibility study, or to prepare a solution vision or solution outline for an Invitation to Tender. A systems integrator may employ a solution architect at "bid time", before any implementation project is costed and resourced. Both may employ a solution architect to govern an implementation project, or to play a leading role within it.
Typical outcomes of solution architecture.
Solution architects typically produce solution outlines and migration paths that show the evolution of a system from baseline state to target state.
A solution architect is often but not always responsible for design to ensure that the target applications, in a technical architecture, will meet non-functional requirements.
Solution architecture often but not always leads to software architecture work[1] and technical architecture work, and often contains elements of those.
A solution architecture may be described in a document at the level of a solution vision or a more detailed solution outline. It typically specifies a system (itself usually a subsystem in a wider enterprise system) that is intended to solve a specific problem and/or meet a given set of requirements. It may be an IT system to support a single business role or process. For example, an end-to-end eCommerce system that allows customers to place orders for goods and services; or an end-to-end Supply Replenishment system that enables an enterprise to order new stock from its suppliers.
A solution outline typically defines the business context, the business data to be created or used, the application components needed, and the technology platform components needed, along with whatever is needed to meet non-functional requirements (speed, throughput, availability, reliability, recoverability, integrity, security, scalability, serviceability, etc.).
The term solution architecture is widely used outside of an enterprise architecture context. It is also used in some enterprise architecture (EA) frameworks, with particular meanings. In TOGAF it can mean the physical implementation of a logical architecture, or a detailed software architecture. In US government guidelines, it is pitched at the bottom level of a stack below "enterprise" and "segment" architectures, as shown in the diagram below.
Figure: relationship of solution architecture to enterprise architecture in FEA, from the 2006 FEA Practice Guidance of the US OMB.
In other contexts, a wide range of stakeholders, even business owners, may be concerned to review a solution vision or solution outline and monitor progress towards implementation.
Generally speaking, an enterprise architect’s deliverables are more abstract than a solution architect’s deliverables. But that is not always the case. The main distinction between enterprise architect and solution architect lies in their different motivations.
The solutions architect is primarily employed to help and support programme and project managers in the design, planning and direction of specific implementation projects. The enterprise architect has more strategic and cross-organisational concerns, and strives to optimize solution delivery across the organization.
The work of a solutions architect may or may not be governed by an enterprise architecture function. The influence of the enterprise architect team on solution architects depends on an organisation’s policies and management structure. So, the extent to which a solution architect’s work realises an enterprise architect’s road maps will vary widely in different contexts.

ETL

In computing, extract, transform, and load (ETL) refers to a process in database usage, and especially in data warehousing, that extracts data from outside sources, transforms it to fit operational needs, and loads it into the end target (database or data warehouse).
ETL systems are commonly used to integrate data from multiple applications, typically developed and supported by different vendors or hosted on separate computer hardware. The disparate systems containing the original data are frequently managed and operated by different employees. For example, a cost accounting system may combine data from payroll, sales and purchasing.

Extract

The first part of an ETL process involves extracting the data from the source systems. In many cases this is the most challenging aspect of ETL, since extracting data correctly sets the stage for the success of subsequent processes.
Figure: ETL architecture pattern.
Most data warehousing projects consolidate data from different source systems. Each separate system may also use a different data organization and/or format. Common data source formats include relational databases and flat files, but may also include non-relational database structures such as Information Management System (IMS) or other data structures such as Virtual Storage Access Method (VSAM) or Indexed Sequential Access Method (ISAM), or even data fetched from outside sources by means such as web spidering or screen-scraping. Streaming the extracted data and loading it on-the-fly into the destination database is another way of performing ETL when no intermediate data storage is required. In general, the goal of the extraction phase is to convert the data into a single format appropriate for transformation processing.
An intrinsic part of the extraction involves the parsing of extracted data, resulting in a check of whether the data meet expected patterns or structures. If not, the data may be rejected entirely or in part.
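As a minimal sketch of this parsing step (not tied to any particular ETL tool), the Java snippet below checks whether each extracted row matches an assumed layout of three comma-separated fields with a numeric third field, and routes anything else to a reject list; the layout and field meanings are illustrative assumptions only.

import java.util.ArrayList;
import java.util.List;

// Hypothetical check of extracted flat-file rows: each row is expected to have
// three comma-separated fields, the third being a numeric amount.
public class ExtractValidator {

    static boolean matchesExpectedPattern(String row) {
        String[] fields = row.split(",", -1);
        if (fields.length != 3) {
            return false;                         // wrong structure: reject
        }
        try {
            Double.parseDouble(fields[2].trim()); // third field must be numeric
            return true;
        } catch (NumberFormatException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        List<String> accepted = new ArrayList<>();
        List<String> rejected = new ArrayList<>();
        for (String row : List.of("1001,ACME,250.00", "1002,BadRow")) {
            (matchesExpectedPattern(row) ? accepted : rejected).add(row);
        }
        System.out.println("accepted=" + accepted + " rejected=" + rejected);
    }
}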

Transform

The transform stage applies a series of rules or functions to the extracted data from the source to derive the data for loading into the end target. Some data do not require any transformation at all. In other cases, one or more of the following transformation types may be required to meet the business and technical needs of the server or data warehouse:
  • Selecting only certain columns to load (or selecting null columns not to load). For example, if the source data has three columns (also called attributes), roll_no, age, and salary, then the selection may take only roll_no and salary. Similarly, the selection mechanism may ignore all those records where salary is not present (salary = null).
  • Translating coded values (e.g., if the source system stores 1 for male and 2 for female, but the warehouse stores M for male and F for female)
  • Encoding free-form values (e.g., mapping "Male" to "M")
  • Deriving a new calculated value (e.g., sale_amount = qty * unit_price)
  • Sorting
  • Joining data from multiple sources (e.g., lookup, merge) and deduplicating the data
  • Aggregation (for example, rollup — summarizing multiple rows of data — total sales for each store, and for each region, etc.)
  • Generating surrogate-key values
  • Transposing or pivoting (turning multiple columns into multiple rows or vice versa)
  • Splitting a column into multiple columns (e.g., converting a comma-separated list, specified as a string in one column, into individual values in different columns)
  • Disaggregation of repeating columns into a separate detail table (e.g., moving a series of addresses in one record into single addresses in a set of records in a linked address table)
  • Lookup and validate the relevant data from tables or referential files for slowly changing dimensions.
  • Applying any form of simple or complex data validation. If validation fails, it may result in a full, partial or no rejection of the data, and thus none, some or all the data are handed over to the next step, depending on the rule design and exception handling. Many of the above transformations may result in exceptions, for example, when a code translation parses an unknown code in the extracted data.
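As a minimal Java sketch of a few of these transformation types (ignoring records with a null salary, translating coded gender values, and deriving sale_amount), the snippet below reuses the column names mentioned in the list above; the record shapes and code values are assumptions made purely for illustration.

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical source record using the columns named in the examples above.
record SourceRow(String rollNo, Integer salary, String genderCode, int qty, double unitPrice) {}

// Target shape after transformation (decoded gender, derived sale_amount).
record TargetRow(String rollNo, int salary, String gender, double saleAmount) {}

public class TransformExamples {
    // Translating coded values: 1 -> M, 2 -> F (the codes are assumptions).
    private static final Map<String, String> GENDER = Map.of("1", "M", "2", "F");

    static List<TargetRow> transform(List<SourceRow> rows) {
        return rows.stream()
                .filter(r -> r.salary() != null)                  // ignore records where salary is null
                .map(r -> new TargetRow(
                        r.rollNo(),
                        r.salary(),
                        GENDER.getOrDefault(r.genderCode(), "?"), // an unknown code would raise an exception in a real flow
                        r.qty() * r.unitPrice()))                 // derived value: sale_amount = qty * unit_price
                .collect(Collectors.toList());
    }
}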

Load

The load phase loads the data into the end target, usually the data warehouse (DW). Depending on the requirements of the organization, this process varies widely. Some data warehouses may overwrite existing information with cumulative information; updating extracted data is frequently done on a daily, weekly, or monthly basis. Other data warehouses (or even other parts of the same data warehouse) may add new data in an historical form at regular intervals—for example, hourly. To understand this, consider a data warehouse that is required to maintain sales records of the last year. This data warehouse overwrites any data older than a year with newer data. However, the entry of data for any one year window is made in a historical manner. The timing and scope to replace or append are strategic design choices dependent on the time available and the business needs. More complex systems can maintain a history and audit trail of all changes to the data loaded in the data warehouse.
As the load phase interacts with a database, the constraints defined in the database schema — as well as in triggers activated upon data load — apply (for example, uniqueness, referential integrity, mandatory fields), which also contribute to the overall data quality performance of the ETL process.
  • For example, a financial institution might have information on a customer in several departments and each department might have that customer's information listed in a different way. The membership department might list the customer by name, whereas the accounting department might list the customer by number. ETL can bundle all of these data and consolidate them into a uniform presentation, such as for storing in a database or data warehouse.
  • Another way that companies use ETL is to move information to another application permanently. For instance, the new application might use another database vendor and most likely a very different database schema. ETL can be used to transform the data into a format suitable for the new application to use.

Real-life ETL cycle

The typical real-life ETL cycle consists of the following execution steps:
  1. Cycle initiation
  2. Build reference data
  3. Extract (from sources)
  4. Validate
  5. Transform (clean, apply business rules, check for data integrity, create aggregates or disaggregates)
  6. Stage (load into staging tables, if used)
  7. Audit reports (for example, on compliance with business rules. Also, in case of failure, helps to diagnose/repair)
  8. Publish (to target tables)
  9. Archive
  10. Clean up
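A hypothetical Java skeleton of this cycle is sketched below; every method body is a placeholder standing in for whatever extraction, transformation and load logic a given project uses, and the structure mirrors the numbered steps rather than any specific ETL tool.

// Skeleton of the execution steps listed above; all method bodies are placeholders.
public class EtlCycle {

    public void run() {
        String runId = initiateCycle();                         // 1. cycle initiation
        buildReferenceData();                                   // 2. build reference data
        java.util.List<String> extracted = extract();           // 3. extract from sources
        java.util.List<String> valid = validate(extracted);     // 4. validate
        java.util.List<String> transformed = transform(valid);  // 5. clean, apply business rules, aggregate
        stage(transformed);                                     // 6. load into staging tables
        auditReport(runId);                                     // 7. audit reports
        publish();                                              // 8. publish to target tables
        archive(runId);                                         // 9. archive
        cleanUp(runId);                                         // 10. clean up
    }

    private String initiateCycle() { return java.util.UUID.randomUUID().toString(); }
    private void buildReferenceData() {}
    private java.util.List<String> extract() { return java.util.List.of(); }
    private java.util.List<String> validate(java.util.List<String> rows) { return rows; }
    private java.util.List<String> transform(java.util.List<String> rows) { return rows; }
    private void stage(java.util.List<String> rows) {}
    private void auditReport(String runId) {}
    private void publish() {}
    private void archive(String runId) {}
    private void cleanUp(String runId) {}
}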

Challenges

ETL processes can involve considerable complexity, and significant operational problems can occur with improperly designed ETL systems.
The range of data values or data quality in an operational system may exceed the expectations of designers at the time validation and transformation rules are specified. Data profiling of a source during data analysis can identify the data conditions that must be managed by transform rules specifications. This leads to an amendment of validation rules explicitly and implicitly implemented in the ETL process.
Data warehouses are typically assembled from a variety of data sources with different formats and purposes. As such, ETL is a key process to bring all the data together in a standard, homogeneous environment.
Design analysts should establish the scalability of an ETL system across the lifetime of its usage. This includes understanding the volumes of data that must be processed within service level agreements. The time available to extract from source systems may change, which may mean the same amount of data may have to be processed in less time. Some ETL systems have to scale to process terabytes of data to update data warehouses with tens of terabytes of data. Increasing volumes of data may require designs that can scale from daily batch to multiple-day micro batch to integration with message queues or real-time change-data capture for continuous transformation and update.

Performance

ETL vendors benchmark their record-systems at multiple TB (terabytes) per hour (or ~1 GB per second) using powerful servers with multiple CPUs, multiple hard drives, multiple gigabit-network connections, and lots of memory. The fastest ETL record is currently held by Syncsort,[1] Vertica and HP at 5.4TB in under an hour, which is more than twice as fast as the earlier record held by Microsoft and Unisys.
In real life, the slowest part of an ETL process usually occurs in the database load phase. Databases may perform slowly because they have to take care of concurrency, integrity maintenance, and indices. Thus, for better performance, it may make sense to employ:
  • Direct-path extract or bulk unload whenever possible (instead of querying the database), to reduce the load on the source system while getting a high-speed extract
  • Most of the transformation processing outside of the database
  • Bulk load operations whenever possible.
Still, even using bulk operations, database access is usually the bottleneck in the ETL process. Some common methods used to increase performance are:
  • Partition tables (and indices). Try to keep partitions similar in size (watch for null values that can skew the partitioning).
  • Do all validation in the ETL layer before the load. Disable integrity checking (disable constraint ...) in the target database tables during the load.
  • Disable triggers (disable trigger ...) in the target database tables during the load. Simulate their effect as a separate step.
  • Generate IDs in the ETL layer (not in the database).
  • Drop the indices (on a table or partition) before the load - and recreate them after the load (SQL: drop index ...; create index ...).
  • Use parallel bulk load when possible; this works well when the table is partitioned or there are no indices. Note: attempting parallel loads into the same table (partition) usually causes locks, if not on the data rows then on the indices.
  • If a requirement exists to do insertions, updates, or deletions, find out which rows should be processed in which way in the ETL layer, and then process these three operations in the database separately. You can often do a bulk load for inserts, but updates and deletes commonly go through an API (using SQL).
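
The JDBC sketch below illustrates two of the techniques above, dropping and recreating an index around the load and batching the inserts; the table and index names are invented for the example, the exact DDL syntax varies by database, and this is a sketch rather than a tuned bulk-load implementation.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.List;

public class BulkLoad {
    // SALES_STG and SALES_IDX are illustrative names; DDL syntax differs per database.
    static void load(Connection con, List<Object[]> rows) throws SQLException {
        try (Statement ddl = con.createStatement()) {
            ddl.execute("DROP INDEX SALES_IDX");                     // drop indices before the load
        }
        con.setAutoCommit(false);
        try (PreparedStatement ps =
                 con.prepareStatement("INSERT INTO SALES_STG (ID, AMOUNT) VALUES (?, ?)")) {
            for (Object[] row : rows) {                              // batched inserts approximate a bulk load
                ps.setObject(1, row[0]);
                ps.setObject(2, row[1]);
                ps.addBatch();
            }
            ps.executeBatch();
        }
        con.commit();
        try (Statement ddl = con.createStatement()) {
            ddl.execute("CREATE INDEX SALES_IDX ON SALES_STG (ID)"); // recreate them after the load
        }
    }
}
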
Whether to do certain operations in the database or outside may involve a trade-off. For example, removing duplicates using distinct may be slow in the database; thus, it makes sense to do it outside. On the other hand, if using distinct significantly decreases the number of rows to be extracted (say, by a factor of 100), then it makes sense to remove duplicates as early as possible, in the database, before unloading the data.
A common source of problems in ETL is a large number of dependencies among ETL jobs; for example, job "B" cannot start until job "A" has finished. One can usually achieve better performance by visualizing all processes on a graph and trying to reduce the graph, making maximum use of parallelism and keeping "chains" of consecutive processing as short as possible. Again, partitioning of big tables and of their indices can really help.
Another common issue occurs when the data are spread among several databases, and processing is done in those databases sequentially. Sometimes database replication may be involved as a method of copying data between databases - and this can significantly slow down the whole process. The common solution is to reduce the processing graph to only three layers:
  • Sources
  • Central ETL layer
  • Targets
This allows processing to take maximum advantage of parallelism. For example, if you need to load data into two databases, you can run the loads in parallel (instead of loading into the first and then replicating into the second).
Sometimes processing must take place sequentially. For example, dimensional (reference) data are needed before one can get and validate the rows for main "fact" tables.

Parallel processing

A recent development in ETL software is the implementation of parallel processing. This has enabled a number of methods to improve overall performance of ETL processes when dealing with large volumes of data.
ETL applications implement three main types of parallelism:
  • Data: By splitting a single sequential file into smaller data files to provide parallel access.
  • Pipeline: Allowing the simultaneous running of several components on the same data stream. For example: looking up a value on record 1 at the same time as adding two fields on record 2.
  • Component: The simultaneous running of multiple processes on different data streams in the same job, for example, sorting one input file while removing duplicates on another file.
All three types of parallelism usually operate combined in a single job.
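
As a minimal Java sketch of component parallelism (the third type above), the snippet below runs two independent steps at the same time in one job: one task sorts one input while another removes duplicates from a second input. The inputs are stand-in lists; a real ETL engine would of course parallelise file or table streams.

import java.util.List;
import java.util.Set;
import java.util.TreeSet;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ComponentParallelism {
    public static void main(String[] args) throws Exception {
        List<String> fileA = List.of("c", "a", "b");       // stand-in for one input file
        List<String> fileB = List.of("x", "x", "y");       // stand-in for another input file

        ExecutorService pool = Executors.newFixedThreadPool(2);
        Callable<List<String>> sortTask = () -> fileA.stream().sorted().toList();
        Callable<Set<String>> dedupTask = () -> new TreeSet<>(fileB);

        Future<List<String>> sorted = pool.submit(sortTask);   // both components run concurrently
        Future<Set<String>> deduped = pool.submit(dedupTask);

        System.out.println("sorted=" + sorted.get() + " deduped=" + deduped.get());
        pool.shutdown();
    }
}
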
An additional difficulty comes with making sure that the data being uploaded is relatively consistent. Because multiple source databases may have different update cycles (some may be updated every few minutes, while others may take days or weeks), an ETL system may be required to hold back certain data until all sources are synchronized. Likewise, where a warehouse may have to be reconciled to the contents in a source system or with the general ledger, establishing synchronization and reconciliation points becomes necessary.

Rerunnability, recoverability

Data warehousing procedures usually subdivide a big ETL process into smaller pieces running sequentially or in parallel. To keep track of data flows, it makes sense to tag each data row with a "row_id" and each piece of the process with a "run_id". In case of a failure, having these IDs helps to roll back and rerun the failed piece.
Best practice also calls for checkpoints, which are states when certain phases of the process are completed. Once at a checkpoint, it is a good idea to write everything to disk, clean out some temporary files, log the state, and so on.
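A small, hypothetical Java sketch of these two ideas follows: rows are tagged with a row_id and run_id, and a checkpoint marker is appended to a log file when a phase completes. The file name and record shape are assumptions for the sketch only.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.ArrayList;
import java.util.List;

public class RerunnableStep {

    // Every processed row carries the run it belongs to, so a failed run can be
    // rolled back and re-executed in isolation.
    record TaggedRow(long rowId, String runId, String payload) {}

    static List<TaggedRow> tag(String runId, List<String> rows) {
        List<TaggedRow> tagged = new ArrayList<>();
        long rowId = 0;
        for (String payload : rows) {
            tagged.add(new TaggedRow(++rowId, runId, payload));
        }
        return tagged;
    }

    // Minimal checkpoint: record that a phase of this run has completed.
    static void checkpoint(String runId, String phase) throws IOException {
        Files.writeString(Path.of("checkpoint_" + runId + ".log"),
                phase + " completed\n",
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }
}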

Virtual ETL

As of 2010, data virtualization had begun to advance ETL processing. The application of data virtualization to ETL allows the most common ETL tasks of data migration and application integration to be solved for multiple dispersed data sources. So-called virtual ETL operates with an abstracted representation of the objects or entities gathered from a variety of relational, semi-structured and unstructured data sources. ETL tools can leverage object-oriented modeling and work with entities' representations persistently stored in a centrally located hub-and-spoke architecture. Such a collection, containing representations of the entities or objects gathered from the data sources for ETL processing, is called a metadata repository, and it can reside in memory[2] or be made persistent. By using a persistent metadata repository, ETL tools can transition from one-time projects to persistent middleware, performing data harmonization and data profiling consistently and in near-real time.[citation needed]

Dealing with keys

Keys are some of the most important objects in all relational databases, as they tie everything together. A primary key is a column that identifies a given entity, whereas a foreign key is a column in another table that refers to a primary key. Keys can also be made up of several columns, in which case they are composite keys. In many cases the primary key is an auto-generated integer that has no meaning for the business entity being represented, but exists solely for the purposes of the relational database; this is commonly referred to as a surrogate key.
As there is usually more than one data source being loaded into the warehouse, the keys are an important concern to be addressed.
Customers might be represented in several data sources: in one, the SSN (Social Security Number) might be the primary key; in another, the phone number; and in a third, a surrogate key. All of the customer information needs to be consolidated into one dimension table.
A recommended way to deal with the concern is to add a warehouse surrogate key, which is used as a foreign key from the fact table.[3]
Usually updates occur to a dimension's source data, which obviously must be reflected in the data warehouse.
If the primary key of the source data is required for reporting, the dimension already contains that piece of information for each row. If the source data uses a surrogate key, the warehouse must keep track of it even though it is never used in queries or reports.
That is done by creating a lookup table that contains the warehouse surrogate key and the originating key.[4] This way the dimension is not polluted with surrogates from various source systems, while the ability to update is preserved.
The lookup table is used in different ways depending on the nature of the source data. There are five types to consider,[5] of which three are included here:
Type 1:
- The dimension row is simply updated to match the current state of the source system. The warehouse does not capture history. The lookup table is used to identify the dimension row to update or overwrite.
Type 2:
- A new dimension row is added with the new state of the source system. A new surrogate key is assigned. Source key is no longer unique in the lookup table.
Fully logged:
- A new dimension row is added with the new state of the source system, while the previous dimension row is updated to reflect that it is no longer active and to record the time of deactivation.
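The Java sketch below shows, purely as an in-memory assumption about how such a lookup might behave, the difference between reusing an existing surrogate (Type 1, overwrite in place) and issuing a new surrogate for each change (Type 2, history kept); in a real warehouse the lookup would be a persistent table keyed by source system and source key.

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

// In-memory stand-in for the lookup table: (source system, source key) -> warehouse surrogate.
public class SurrogateKeyLookup {

    private final Map<String, Long> lookup = new HashMap<>();
    private final AtomicLong nextSurrogate = new AtomicLong(1);

    // Type 1: reuse the existing surrogate so the dimension row is overwritten in place.
    long resolveType1(String sourceSystem, String sourceKey) {
        return lookup.computeIfAbsent(sourceSystem + "|" + sourceKey,
                k -> nextSurrogate.getAndIncrement());
    }

    // Type 2: assign a new surrogate for every change; the source key then maps to
    // the latest surrogate and is no longer unique across the dimension's history.
    long resolveType2Change(String sourceSystem, String sourceKey) {
        long surrogate = nextSurrogate.getAndIncrement();
        lookup.put(sourceSystem + "|" + sourceKey, surrogate);
        return surrogate;
    }
}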

Tools

Programmers can set up ETL processes using almost any programming language, but building such processes from scratch can become complex. Increasingly, companies are buying ETL tools to help in the creation of ETL processes.[6]
By using an established ETL framework, one may increase one's chances of ending up with better connectivity and scalability.[citation needed] A good ETL tool must be able to communicate with the many different relational databases and read the various file formats used throughout an organization. ETL tools have started to migrate into Enterprise Application Integration, or even Enterprise Service Bus, systems that now cover much more than just the extraction, transformation, and loading of data. Many ETL vendors now have data profiling, data quality, and metadata capabilities. A common use case for ETL tools is converting CSV files to formats readable by relational databases. A typical translation of millions of records is facilitated by ETL tools that enable users to input CSV-like data feeds/files and import them into a database with as little code as possible.
ETL tools are typically used by a broad range of professionals, from students in computer science looking to quickly import large data sets to database architects in charge of company account management, and they have become a convenient tool that can be relied on to get maximum performance. ETL tools in most cases contain a GUI that helps users transform data conveniently, as opposed to writing large programs to parse files and modify data types, which ETL tools facilitate as much as possible.

What is the difference between an ODS and a staging area?
ODS is the Operational Data Store, which holds the data from the time the business gets started. That is, it holds the history of data up to yesterday's data (depending on customer requirements). Some...

Yes, the ODS (Operational Data Store) contains real-time data (because any changes should be applied to real-time data). So the real-time data is first dumped into the ODS, also called the landing area; later the data is moved into the staging area, which is where all the transformations are done.


Comparing source and target data requires 2 steps:

1. Compare row counts:
Select count(*) from source
Select count(*) from target

2. If the source and target tables have the same attributes and datatypes:

Select * from source
MINUS
Select * from target

Otherwise, attribute-wise testing must be done for each attribute according to the design document.

 I would do some of the following:
Select COLUMN, count(*) from TABLE group by COLUMN order by COLUMN
Select min(COLUMN), max(COLUMN) from TABL...


You can use the count query as one way of verification:
Say the source has 1 lakh (100,000) records and the target has 1 lakh records, but some records in a column have a mismatched value (i.e., instead of displaying 1000 in a column, the target column displays 100); the source files come from DB2 and Oracle (different sources), and the target is an...

Select
Column_A
, Count(*)
from Table_A
Group by Column_A
Order by Column_A 


An active transformation can change the number of rows it outputs after the transformation, while a passive transformation does not change the number of rows and passes through the same number of rows that it was given as input.
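As a rough analogy in plain Java (the active/passive terms themselves come from ETL tools such as Informatica), the sketch below contrasts a filter step, which can emit fewer rows than it receives, with a per-row expression, which keeps the row count unchanged.

import java.util.List;
import java.util.stream.Collectors;

public class ActiveVsPassive {
    public static void main(String[] args) {
        List<Integer> input = List.of(10, 0, 25, 0, 40);

        // Active: a filter can change (here, reduce) the number of output rows.
        List<Integer> active = input.stream()
                .filter(v -> v > 0)
                .collect(Collectors.toList());

        // Passive: an expression transforms each row but keeps the row count.
        List<Integer> passive = input.stream()
                .map(v -> v * 100)
                .collect(Collectors.toList());

        System.out.println("in=" + input.size()
                + " active=" + active.size()       // 3 rows out
                + " passive=" + passive.size());   // 5 rows out
    }
}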



Java Spring

Spring Interview Questions

1) What is Spring Framework?
   Spring is a lightweight inversion-of-control and aspect-oriented container framework. The Spring Framework's contribution to the Java community is immense, and the Spring community is one of the largest and most innovative around. It has numerous projects in its portfolio and its own SpringSource dm Server for running Spring applications. SpringSource was acquired by VMware, a leading cloud computing company, to enable Java applications in the cloud using the Spring stack. If you are looking to read more about the Spring framework and its products, please read the official SpringSource site.

2) Explain Spring?
  • Lightweight : Spring is lightweight when it comes to size and transparency. The basic version of spring framework is around 1MB. And the processing overhead is also very negligible.
  • Inversion of control (IoC) : Loose coupling is achieved in Spring using the technique of Inversion of Control. Objects are given their dependencies instead of creating or looking up dependent objects themselves.
  • Aspect oriented (AOP) : Spring supports Aspect oriented programming and enables cohesive development by separating application business logic from system services.
  • Container : Spring contains and manages the life cycle and configuration of application objects.
  • Framework : Spring provides most of the infrastructure functionality, leaving the rest of the coding to the developer.


    3) What are the different modules in Spring framework?
    • The Core container module
    • Application context module
    • AOP module (Aspect Oriented Programming)
    • JDBC abstraction and DAO module
    • O/R mapping integration module (Object/Relational)
    • Web module
    • MVC framework module

    4) What is the structure of Spring framework?
    The framework is layered around the Core container: the DAO, ORM, AOP and JEE modules are built on top of the Core, with the Web and MVC modules alongside them.

    5) What is the Core container module?
      This module provides the fundamental functionality of the Spring framework. In this module, BeanFactory is the heart of any Spring-based application. The entire framework is built on top of this module, which makes up the Spring container.

    6) What is Application context module?
      The Application context module makes Spring a framework. This module extends the concept of BeanFactory, providing support for internationalization (I18N) messages, application lifecycle events, and validation. This module also supplies many enterprise services such as JNDI access, EJB integration, remoting, and scheduling. It also provides support for other frameworks.

    7) What is AOP module?
    The AOP module is used for developing aspects for our Spring-enabled application. Much of the support has been provided by the AOP Alliance in order to ensure the interoperability between Spring and other AOP frameworks. This module also introduces metadata programming to Spring. Using Spring’s metadata support, we will be able to add annotations to our source code that instruct Spring on where and how to apply aspects.

    8) What is JDBC abstraction and DAO module?
    Using this module we can keep the database code clean and simple, and prevent problems that result from a failure to close database resources. A new layer of meaningful exceptions on top of the error messages given by several database servers is brought in by this module. In addition, this module uses Spring's AOP module to provide transaction management services for objects in a Spring application.

    9) What is the object/relational mapping integration module?
    Spring also supports the use of an object/relational mapping (ORM) tool over straight JDBC by providing the ORM module. Spring provides support to tie into several popular ORM frameworks, including Hibernate, JDO, and iBATIS SQL Maps. Spring's transaction management supports each of these ORM frameworks as well as JDBC.

    10) What is web module?
    This module is built on the application context module, providing a context that is appropriate for web-based applications. This module also contains support for several web-oriented tasks such as transparently handling multipart requests for file uploads and programmatic binding of request parameters to your business objects. It also contains integration support with Jakarta Struts.

    11) What is Spring Mvc?
    Spring comes with a full-featured MVC framework for building web applications. Although Spring can easily be integrated with other MVC frameworks, such as Struts, Spring's MVC framework uses IoC to provide a clean separation of controller logic from business objects. It also allows you to declaratively bind request parameters to your business objects. It can also take advantage of any of Spring's other services, such as I18N messaging and validation.

    12) What is a BeanFactory?
    A BeanFactory is an implementation of the factory pattern that applies Inversion of Control to separate the application’s configuration and dependencies from the actual application code.

    13) What is AOP Alliance?
    AOP Alliance is an open-source project whose goal is to promote adoption of AOP and interoperability among different AOP implementations by defining a common set of interfaces and components. We can use the Spring AOP module, or we can integrate with AspectJ. We can only advise method join points using Spring AOP.
      Spring AOP works on the concept of proxies, using JDK dynamic proxies or CGLIB.
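    As a minimal illustration of the proxy idea using only the JDK (not Spring's own API), the sketch below intercepts calls to an interface method with java.lang.reflect.Proxy and runs "before" and "after" logic around the target invocation; the interface and class names are invented for the example.

    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.Proxy;

    public class ProxyDemo {

        interface GreetingService {
            String greet(String name);
        }

        static class GreetingServiceImpl implements GreetingService {
            public String greet(String name) { return "Hello " + name; }
        }

        public static void main(String[] args) {
            GreetingService target = new GreetingServiceImpl();

            // The invocation handler plays the role of "advice" around the method join point.
            InvocationHandler handler = (proxy, method, methodArgs) -> {
                System.out.println("before " + method.getName());
                Object result = method.invoke(target, methodArgs);
                System.out.println("after " + method.getName());
                return result;
            };

            GreetingService proxy = (GreetingService) Proxy.newProxyInstance(
                    GreetingService.class.getClassLoader(),
                    new Class<?>[] { GreetingService.class },
                    handler);

            System.out.println(proxy.greet("Spring"));
        }
    }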

    14) What is Spring configuration file?
    The Spring configuration file is an XML file. This file contains information about the classes and describes how these classes are configured and introduced to each other.

    15) What does a simple spring application contain?
    These applications are like any Java application. They are made up of several classes, each performing a specific purpose within the application. But these classes are configured and introduced to each other through an XML file. This XML file describes how to configure the classes, known as the Spring configuration file.

    16) What is XMLBeanFactory?
    BeanFactory has many implementations in Spring, but one of the most useful is org.springframework.beans.factory.xml.XmlBeanFactory, which loads its beans based on the definitions contained in an XML file. To create an XmlBeanFactory, pass a java.io.InputStream to the constructor. The InputStream provides the XML to the factory. For example, the following code snippet uses a java.io.FileInputStream to provide a bean definition XML file to XmlBeanFactory.


     BeanFactory factory = new XmlBeanFactory(
           new FileInputStream("beans.xml"));
    To retrieve the bean from a BeanFactory, call the getBean() method by passing the name of the bean you want to retrieve.

    MyBean myBean = (MyBean) factory.getBean("myBean");

    17) What are important ApplicationContext implementations in spring framework?
    • ClassPathXmlApplicationContext – This context loads a context definition from an XML file located in the class path, treating context definition files as class path resources.
    • FileSystemXmlApplicationContext – This context loads a context definition from an XML file in the filesystem.
    • XmlWebApplicationContext – This context loads the context definitions from an XML file contained within a web application.
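
    A minimal usage sketch of ClassPathXmlApplicationContext follows; the beans.xml file name, the bean name "myBean" and the MyBean class are assumptions carried over from the earlier BeanFactory example.

    import org.springframework.context.ApplicationContext;
    import org.springframework.context.support.ClassPathXmlApplicationContext;

    public class ContextDemo {

        // MyBean stands in for whatever class the "myBean" definition declares in beans.xml.
        public static class MyBean {
            public void doWork() { System.out.println("working"); }
        }

        public static void main(String[] args) {
            // beans.xml is loaded as a class path resource.
            ApplicationContext context = new ClassPathXmlApplicationContext("beans.xml");
            MyBean myBean = (MyBean) context.getBean("myBean");
            myBean.doWork();
        }
    }
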
    18) Explain Bean lifecycle in Spring framework?
    1. The Spring container finds the bean's definition in the XML file and instantiates the bean.
    2. Using dependency injection, Spring populates all of the properties as specified in the bean definition.
    3. If the bean implements the BeanNameAware interface, the factory calls setBeanName(), passing the bean's ID.
    4. If the bean implements the BeanFactoryAware interface, the factory calls setBeanFactory(), passing an instance of itself.
    5. If there are any BeanPostProcessors associated with the bean, their postProcessBeforeInitialization() methods will be called.
    6. If an init-method is specified for the bean, it will be called.
    7. Finally, if there are any BeanPostProcessors associated with the bean, their postProcessAfterInitialization() methods will be called.
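
    The illustrative bean below shows where two of these callbacks land: setBeanName() from BeanNameAware (step 3) and a custom init method that the configuration would reference via init-method (step 6); the class and method names are invented for the sketch.

    import org.springframework.beans.factory.BeanNameAware;

    public class LifecycleBean implements BeanNameAware {

        private String beanName;

        // Step 3: the container passes the bean's ID from the configuration.
        @Override
        public void setBeanName(String name) {
            this.beanName = name;
        }

        // Step 6: referenced from the bean definition as init-method="init".
        public void init() {
            System.out.println("initialising bean " + beanName);
        }
    }
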
    19) What is bean wiring?
    Combining together beans within the Spring container is known as bean wiring or wiring. When wiring beans, you should tell the container what beans are needed and how the container should use dependency injection to tie them together.

    20) How do you add a bean in a Spring application?
    <!DOCTYPE beans PUBLIC "-//SPRING//DTD BEAN//EN"
        "http://www.springframework.org/dtd/spring-beans.dtd">
    <beans>
        <bean name="/helloWorld" class="test.HelloWorldController"/>
    </beans>

    In the bean tag, the id (or name) attribute specifies the bean name and the class attribute specifies the fully qualified class name.

    21) What are singleton beans and how can you create prototype beans?
    Beans defined in the Spring framework are singleton beans by default. There is an attribute in the bean tag named 'singleton'; if it is set to true the bean is a singleton, and if it is set to false the bean becomes a prototype bean. By default it is set to true, so all beans in the Spring framework are singleton beans by default.
      <bean class="com.act.Foo" singleton="false"/>


    22) What are the important bean lifecycle methods?
    There are two important bean lifecycle methods. The first one is setup, which is called when the bean is loaded into the container. The second is the teardown method, which is called when the bean is unloaded from the container.

    23) How can you override a bean's default lifecycle methods?
    The bean tag has two more important attributes with which you can define your own custom initialization and destroy methods. Here is a small demonstration; two new methods, fooSetup and fooTeardown, are added to the Foo class.

      <bean id="foo" class="com.act.Foo"
         init-method="fooSetup" destroy-method="fooTeardown"/>
    24) What are Inner Beans?
    When wiring beans, if a bean element is embedded directly inside a property tag, that bean is said to be an inner bean. The drawback of an inner bean is that it cannot be reused anywhere else.

    25) What are the different types of bean injections?
    There are two types of bean injections.
    1. By setter
    2. By constructor
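
    A short Java sketch of the two styles follows; Foo and Bar are illustrative classes only, and the corresponding wiring would be declared in the Spring configuration file with a <constructor-arg> element for constructor injection or a <property> element for setter injection.

    public class Foo {

        private Bar bar;

        // Used by the container when wiring by setter.
        public Foo() {
        }

        // Constructor injection: the container passes the dependency while instantiating the bean.
        public Foo(Bar bar) {
            this.bar = bar;
        }

        // Setter injection: the container calls this setter after instantiation.
        public void setBar(Bar bar) {
            this.bar = bar;
        }
    }

    class Bar {
    }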