
MICROSOFT SQL SERVER 2008 INTEGRATION SERVICES PDF

Monday, April 15, 2019







You can have project parameters and package parameters. In general, if you are deploying a package using the package deployment model, you should use configurations instead of parameters.

Precedence constraints

Tasks are linked by precedence constraints. The precedence constraint preceding a particular task must be met before that task executes.

The run time supports executing tasks in parallel, if their precedence constraints so allow. Constraints may otherwise allow different paths of execution depending on the success or failure of other tasks. Together with the tasks, precedence constraints comprise the workflow of the package.

Tasks

A task is an atomic work unit that performs some action.

SQL Server 2008 now enables you to manage the policies on hundreds of SQL Servers in your environment as if you were managing a single instance. Administrators or DBAs support the production servers and often inherit the database from the developer. This book is intended for developers, DBAs, and casual users who hope to administer or may already be administering a SQL Server system and its business intelligence features, such as Integration Services.

This book is a professional book, meaning the authors assume that you know the basics about how to query a SQL Server and have some rudimentary concepts of SQL Server already. For example, this book does not show you how to create a database or walk you through the installation of SQL Server using the wizard.

Instead, the author of the installation chapter may provide insight into how to use some of the more advanced concepts of the installation.


The first ten chapters of the book are about administering the various areas of SQL Server, including the developer and business intelligence features. Chapters 2 and 3 dive into best practices on installing and upgrading to SQL Server 2008. These chapters also describe some of the hidden tools you may not even know you have. Our goal in writing this book was to focus on solving problems, building solutions, and providing design best practices.

In summary, the difference between this SSIS book and all the others out there is that other books simply focus on the product features, with little emphasis on solution design. If you go out and buy a new power saw, the manual is going to tell you how to angle the blade, set the right depth, and make a clean cut. This book shows you how to build the furniture, not just how to use the saw. To be sure, you must know how to use SSIS before you can build a solution. But going from knowledge to design requires guidance on the right approach, and on how to avoid the common pitfalls.

This book empowers you with the confidence, the knowledge, and the understanding to make the right choices in your ETL design: choices that enable easy administration, meet your data-processing requirements, and perform well for current and future scalability. Because this book focuses on problems and solutions, a base understanding of SSIS is required.

A couple of areas of the book walk you through the more advanced features of SSIS, but most of the book builds on top of a foundation of SSIS knowledge and experience. If you have taken an SSIS class, or have read another book and tried out SSIS, or you have built a few packages for various purposes, then that base knowledge will give you enough background for this book. But you should be up for the challenge of learning a new tool in the context of applying it!

The perfect reader of this book is someone who is in the early stages of a new ETL or data integration project (or a redesign project), and is eager to know how to approach the effort with best practices when designing SSIS packages. If you are supporting an existing project and must make some changes to aid in administration, deployment, or scalability, then you will also benefit from several of the chapters herein.

The authors of this book have expanded the coverage to address the current trends in ETL and SSIS, including creating a scale-out execution model, performing advanced data profiling and cleansing, and handling file management and file processing. This book also addresses some of the challenges in SSIS surrounding auditing, configurations, and execution management.

Two chapters focus on solving these administrative challenges. Every ETL or data integration solution involves data extraction of one kind or another. Regardless, you must implement a data extraction methodology that is efficient, reduces the impact on the source, and adequately handles changes and data tracking. Chapter 5 dives into many of the data extraction areas, and even provides you with a dynamic data extraction approach.
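As a small illustration of the kind of incremental extraction the chapter works toward, the following T-SQL sketch pulls only the rows changed since the last successful load, using a stored watermark. The table, column, and watermark names here are hypothetical, and this is not the book's dynamic extraction approach, just a hedged example of the general idea:

-- Hypothetical watermark-based incremental extraction (illustrative only).
-- dbo.ExtractionWatermark stores the last successfully extracted change date
-- per source table; an Execute SQL Task could feed it into an SSIS variable.
DECLARE @LastExtracted datetime;

SELECT @LastExtracted = LastExtractedDate
FROM dbo.ExtractionWatermark
WHERE SourceTableName = N'Sales.Orders';

-- Only rows changed since the previous run are pulled from the source
SELECT OrderID, CustomerID, OrderDate, OrderAmount, ModifiedDate
FROM Sales.Orders
WHERE ModifiedDate > @LastExtracted;

-- After the load succeeds (typically in a later task), advance the watermark
UPDATE dbo.ExtractionWatermark
SET LastExtractedDate = (SELECT MAX(ModifiedDate) FROM Sales.Orders)
WHERE SourceTableName = N'Sales.Orders';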

Another area that this book covers is data warehouse ETL. Chapter 12 provides performance troubleshooting steps and best practices on data flow design.

This chapter also contrasts the right use of SQL commands versus the data flow. Before you even begin diving into the details of an SSIS-based solution, you must start out on the right foot! In all, this book presents a comprehensive picture of SSIS solution challenges and design best practices. In fact, some chapters are structured with more than one Problem—Design—Solution grouping, each addressing a specific challenge. This book is generally organized in the way that you would approach a data integration or ETL project.

The chapter flow builds on the natural progression that a developer or administrator would go through when designing an SSIS solution. After beginning with an overview of architecture, the book then moves into putting together the underlying support structure of a solution — the storage, deployment, and management framework.

Next, the natural progression is to handle the source data, whether that is in files or extracted from a relational database management system (RDBMS), often requiring a data-cleansing process. Next, the chapters delve into dimension and fact table loading, as well as the cube-processing steps. The final chapters address advanced data handling through scripting, and provide the package availability and performance that many solutions require. The chapters are grouped as follows: SSIS administration and the deployment foundation; file management and data extraction; data cleansing; data warehouse ETL; and advanced ETL concepts. The Problem—Design—Solution format and the order of the chapters together provide a well thought-out, organized, and systematic approach to SSIS solution design.

The samples are available on the Microsoft Open Source community site, www. Each chapter has its own source code. Boxes like this one hold important, not-to-be-forgotten information that is directly relevant to the surrounding text. Tips, hints, tricks, and asides to the current discussion are offset and placed in italics like this. As for styles in the text: in code examples, we highlight new and important code with boldfaced text.

All of the source code used in this book is available for download at www. Once you download the code, just decompress it with your favorite compression tool. Alternatively, you can go to the main Wrox code download page at www.

Errata

We make every effort to ensure that there are no errors in the text or in the code.

However, no one is perfect, and mistakes do occur. If you find an error in one of our books (such as a spelling mistake or a faulty piece of code), we would be very grateful for your feedback.

By sending in errata, you may save another reader hours of frustration, and, at the same time, you will be helping us provide even higher-quality information.

To find the errata page for this book, go to www. Then, on the book details page, click the Book Errata link.


On this page, you can view all errata that have been submitted for this book and posted by Wrox editors.

The forums are a Web-based system for you to post messages relating to Wrox books and related technologies, and to interact with other readers and technology users. The forums offer a subscription feature to email you topics of interest of your choosing when new posts are made to the forums.

Wrox authors, editors, other industry experts, and your fellow readers are present on these forums. To join the forums, just follow these steps:

1. Go to p2p.
2. Read the terms of use and click Agree.
3. Complete the required information to join, as well as any optional information you wish to provide, and click Submit.
4. You will receive an email with information describing how to verify your account and complete the joining process.

You can read messages in the forums without joining P2P, but to post your own messages, you must join.

Once you join, you can post new messages and respond to messages other users post. You can read messages at any time on the Web. For more information about how to use the Wrox P2P, be sure to read the P2P FAQs for answers to questions about how the forum software works, as well as many common questions specific to P2P and Wrox books.

You will have responsibility for the data and processing layer of the solution, which involves processing data — a lot of data — from several sources, and then either integrating systems, or maybe consolidating data for reporting. The project manager approaches you and says that the Vice President of Technology has asked the team to give him an estimate of the infrastructure needed. Furthermore, the business owner wants a high-level overview of how the solution architecture will help the company achieve the business need most efficiently.

The project manager also wants your thoughts on the best way to approach the solution, how the development should be coordinated between team members, and how deployment should be handled. Where do you start? How should you approach the solution design with SSIS as the main technology? How should all the pieces work together?

And, in fact, this whole book is about SSIS solutions to real-world requirements and challenges. It addresses questions such as those just posed. Before you dive into the technical challenges of a project, you must first step back and ensure that you are laying the right foundation. Jumping right in is tempting! But resist the urge, because you want to and need to set the precedence and patterns for the solution upfront. As with all chapters in this book, this chapter is organized into three major sections: Problem, Design, and Solution. This section launches you into the rest of the book, and shows how you can follow the chapters to build or redesign your SSIS solution.

Problem

Data and ETL projects have many challenges. Some challenges relate to data, some to enterprise integration, some to project coordination, and some to general expectations. This section begins by looking at the bigger picture of data within an organization, but then quickly looks at ETL projects and SSIS packages and execution.

Macro Challenge:

The problem is that it can still cause a ripple effect when you tie it into your environment.

Or, you can have challenges caused by an unwieldy enterprise environment when you try to implement your solution. It grew into this twisted unorganized process because of poor planning, coordination, and execution. Departments hire their own technical people and try to go around IT.

Project pressures such as time and budget cause designers to cut corners. The core source data has been connected through so many precedence links that making changes takes more time and adds administrative and development overhead. Systems at the source and in the middle of dependencies become irreplaceable because of the amount of effort that switching to a new system would take.

Processes run at uncontrolled times, and may impact systems within the processes even during peak times, which affects work efficiency. They may break processes, or cause data integration or reporting applications to be inaccurate. When they break, customer perception and employee efficiency are affected.

Micro Challenge: Data-Processing Confusion

Another common problem with data processing is when the logic used to process data is overly complicated and confusing.

Just like the macro enterprise problem, this problem usually is the result of changes over time, where logic is modified and appended. It usually comes in one of two forms: runaway stored procedures, or overly complex, unmanageable packages. Supporting the procedures is also very difficult because the logic is difficult to follow, and, many times, the developers or DBAs who wrote the code are unavailable.

Overall, this type of process requires a lot of administration and wasted time spent on following and learning the process. These kinds of packages have challenges similar to those of runaway stored procedures, such as troubleshooting and the learning curve required for the process.

Figure shows the control flow of a package that has too many components to effectively manage. The SSIS designer is zoomed in at 50 percent to fit on the screen. The overly complex control flow shown in the figure is similar to an overly complex data flow, where too many components are used, thus making the development, troubleshooting, and support difficult to manage.

In summary, both of these types of processes (runaway procedures and unmanageable packages) are very difficult to support, and are not suited to team development, error handling, and scalability, all of which are addressed later in this book.

Problems with Execution and Troubleshooting

A couple of other issues that often come up in an ETL or data-integration solution are poor process coordination and difficulty doing root cause analysis.

If you have ever had to spend time working through this kind of output to figure out what went wrong, then you should consider implementing a better execution and auditing structure. This includes package execution in your development environment.
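To give a sense of what working through that output involves, here is one example of the sort of ad hoc troubleshooting query this leads to. It assumes the built-in SQL Server log provider writing to its default dbo.sysssislog table (the SQL Server 2008 name); the pattern, not the exact query, is the point:

-- Most recent failure events and their messages, assuming logging is
-- directed at the default dbo.sysssislog table by the SQL Server log provider.
SELECT TOP (50)
    l.executionid,                 -- one GUID per package execution
    l.source      AS package_or_task,
    l.starttime,
    l.message
FROM dbo.sysssislog AS l
WHERE l.[event] IN ('OnError', 'OnTaskFailed')
ORDER BY l.starttime DESC;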

If you have just turned on the out-of-the-box SSIS logging and are capturing results to a table, it still may not be enough. If you write custom queries every time against the SSIS logging table to figure out what happened, then you also need a better strategy. Related to that, where should you run your SSIS packages, taking into consideration sources, destinations, and other applications, while balancing hardware scalability and location within your network topology?

These questions are not trivial, and the answers depend on a lot of factors, including processing windows, source and destination availability, application impact and availability, network bandwidth, fault-tolerance requirements, and so on. And this challenge is not just about trying to get the greatest throughput on a single drive. You must consider staging and temporary environments, logging, and current and historical data. And you must balance it all with hardware availability and budget.

The processes usually move or integrate thousands or millions of records. That can be a lot of data moving between systems, and it generates a lot of disk activity. When inserting or updating a lot of data, the server must wait until the data is committed for the process to be complete.

Other Challenges

The list of common project challenges can go on and on, but here are a few more: Be realistic. And, because you are trying to solve problems, remember that you are going to be dealing with people.

Design

Now that you are scared, step back and take a deep breath. Designing an ETL process is doable, and, with the right approach in mind, you can be successful. This section discusses the overall design approach to an SSIS-based solution by examining several areas. You are probably reading this book because you assume that SSIS is the right tool for the job. However, be sure to consider what you are doing, and ensure that using SSIS is in line with what you are doing.

Think about all the different types of data-processing needs that you have across your organization. Some tools are created for specific situations (such as folder-synchronizing tools), whereas other tools are designed to perform a variety of functions for different situations. So, the traditional question often posed is this: which tool can best meet the business and logical requirements to perform the tasks needed?

Consider the host of tools found in the ever-evolving Microsoft toolset. Each of these tools plays a role in the data world. Although overlaps exist, each tool has a distinct focus and target purpose. The challenge everyone faces entails time and capacity. There is no way everyone can be an expert across the board. Therefore, developers and administrators alike should be diligent about performing research on tools and technologies that complement each other, based on different situations.

For example, many organizations use BizTalk for a host of purposes beyond the handling of business-to-business communication and process workflow automation. In many cases, thousands of dollars have been spent on an ETL tool that takes too long to master, implement, and support.

Beyond the standard functionality questions you should ask about a tool, be sure to also consider the following. There are certainly levels of efficiency that can be gained when your SSIS solution is planned and implemented thoughtfully. Maybe some time is saved (and that is even questionable), but, in the end, more time and money will be wasted.

A solution architecture should have several key data-processing objectives. The following apply to SSIS-based solutions, but also relate generically to any data-processing solution architecture: Do not build a separate data silo, especially if your effort is a data warehouse or data mart — that causes multiple versions and variations of the data.

Be sure to follow the previous bullet point. This does not require limiting a scale-out architecture, but simply that the support structures are centralized. This information will go a long way in supporting a system. In addition, you should have a way to track data back to the source. This tracking is critical for both data validation and troubleshooting data errors.

Plan for restarting at interim points after the issues are identified. Doing so also enables you to easily compartmentalize changes in the ETL solution. These objectives represent the larger picture of an overall solution architecture. Other aspects, of course, are important and situational to what you are building in SSIS. Two common types of data-processing efforts are discussed in the following sections: system integration and data warehouse ETL. These fit well into the SSIS toolset.

For example, you may want to create a business-to-business portal site, and you may need the site to interface with source data on the mainframe. In this case, you may get the data delivered in nightly extracts from the mainframe and load it into a SQL Server table. This type of process involves moving files, and then processing the data, which may involve de-duping (removing duplicates), combining files, cleaning bad data, and so on. Two systems may also need to talk to one another or pass business keys in order for records to be matched between environments.
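As a concrete example of the de-duping step mentioned above, a staging table loaded from nightly extract files can be de-duplicated with a windowing function before the data moves on. This is only a sketch; the staging table and business key names are hypothetical:

-- Keep the most recent row per business key in a hypothetical staging table,
-- deleting earlier duplicates that arrived across multiple extract files.
WITH Ranked AS
(
    SELECT
        ROW_NUMBER() OVER (
            PARTITION BY CustomerBusinessKey   -- the key shared between systems
            ORDER BY ExtractFileDate DESC      -- newest record wins
        ) AS rn
    FROM staging.CustomerExtract
)
DELETE FROM Ranked
WHERE rn > 1;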

Figure shows an example solution architecture that integrates data between different systems in an enterprise. This data process contains some aspects that are bidirectional, and other parts that perform extraction and loads. Data staging is used in this example to help integrate the data, and a data store is used to centralize many of the corporate data processes, which helps alleviate the long chains of system dependencies.

Of course, other variations of system integration solutions exist, such as consolidation of different divisional data, especially when companies go through mergers and acquisitions.

Data warehousing focuses on decision support, or enabling better decision making through organized accessibility of information.

As opposed to a transactional system, such as a point of sale (POS), Human Resources (HR), or CRM system, that is designed to allow rapid transactions to capture information, a data warehouse is tuned for reporting and analysis.


Processing ETL for data warehousing involves extracting data from source systems or files, performing transformation logic on the data (to correlate, cleanse, and consolidate it), and then loading a data warehouse environment for reporting and analysis.

Figure shows a common data-processing architecture for a data warehouse ETL system. Did you know that ETL typically takes up between 50 and 70 percent of a data warehousing project? That is quite a daunting statistic. What it means is that even though presenting the data is the end goal and the driving force for business, the largest portion of developing a data warehouse is spent not on the presentation and organization of the data, but rather on the behind-the-scenes processing to get the data ready.
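To make that behind-the-scenes processing a little more concrete, here is a hedged sketch of one small piece of warehouse ETL: a Type 1 dimension load written as a T-SQL MERGE. The table and column names are invented for illustration, and in a real SSIS solution this logic often lives in a data flow rather than a single statement:

-- Illustrative Type 1 dimension load from a staging table (hypothetical names).
MERGE dbo.DimCustomer AS tgt
USING staging.Customer AS src
    ON tgt.CustomerBusinessKey = src.CustomerID
WHEN MATCHED AND (tgt.CustomerName <> src.CustomerName OR tgt.City <> src.City) THEN
    UPDATE SET tgt.CustomerName    = src.CustomerName,
               tgt.City            = src.City,
               tgt.AuditUpdateDate = GETDATE()
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CustomerBusinessKey, CustomerName, City, AuditInsertDate)
    VALUES (src.CustomerID, src.CustomerName, src.City, GETDATE());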

Whether your overall objective is system integration or warehouse ETL, you should give consideration to using an agile development methodology. An agile methodology is an iterative approach to development. You add features of the solution through smaller development cycles, and refine requirements as the solution progresses.

Agile Benefits

Even if your solution does not involve a user interface (such as a system integration effort), an agile approach enables you to tackle aspects of the solution in smaller development cycles, and to troubleshoot data issues along the way. Following are some of the general benefits of this approach: Tasks can change as a better understanding of the requirements is defined.

In essence, project communication is clearer for all parties — developer, management, and ownership. These are highlighted and addressed soon in the process.

Agile Cautions and Planning

However, you must exercise some caution. Do not use an agile methodology to foster bad architecture practices. You must ensure that you have an overall solution architecture, and your agile tasks must fit in that plan and support the cause.

Therefore, whatever project methodology you use, be sure to push for an upfront plan and architecture. Following are a few things to consider in your development process: Be sure to set expectations with the storage group or vendor early on in the process.

Chapter 3 provides an in-depth discussion of this topic.

If you leave out this planning step, you will likely underestimate the overall solution scope.

Data Element Documentation

Not many developers or system architects are fans of documentation — or at least writing documentation.

However, it is a necessary task in any data-centric or ETL project. Again, this book is more about SSIS solutions than project management, but given the importance of tracking data, included here are some recommendations on data-tracking documentation that can help you in your project.

Data documentation is about tracking the source and destination data elements, data profiling, and mapping the data elements to requirements. You must be diligent about these tasks, and keep them up-to-date, because doing so can help you keep control of the data your project uses.

Documentation is also useful in future administration and lineage. The following data-tracking documents should be used above and beyond your usual documentation (requirements, conceptual design, physical design, ETL design, and so on).

Source Data Dictionary, Profile, and Usage

The source data dictionary is about more than just a definition of what the data elements represent. Planning sessions can then refer to the source dictionary to help validate requirements and data availability.

You should structure this in two sections: entity tracking and element tracking. For each source entity, track the following items:

Table or filename: Names the file or table, and any ongoing naming conventions (such as name variations if different systems are involved, or if files have timestamps included).
Source and definition: Describes the source system where the data originates, and the general data that the file or table contains.
Number of initial records and size: If the solution includes an initial data load, the number of records included in the initial data, and the size of the file or table.
Number of incremental records and size: For ongoing data loads, how many records are involved in the incremental source data, and the size of the incremental file or table.
Entity usage: How the source table or file is used in the solution.

For each source element, track the following items:

Source table or file: The table or file that the element is sourced from.
Source column or field: Name of the table column, or field from the file.
Definition: Describes the usage of the element in the source.
Data profile analysis: An analysis of the data profile — completeness, value variations, dependencies on other elements, key usage, or uniqueness.
Element usage: Lists the destination tables and columns that this source element is used in, which is important to keep up-to-date.

Destination Data Dictionary, Usage, and Mapping

Tracking the destination elements so that you can use them to understand what the elements are for, where they came from, and how they are used is also important.

The destination dictionary describes the elements, but also describes the mapping from the source. This is invaluable in the ETL process. Again, you should include both an entity mapping and an element mapping description. For each destination table, track the following items:

Table name: The destination table name, schema, and database that the table is used in.
Table description: Describes the use of the table in the overall entity-relationship diagram (ERD), and what general records and grain are included in it.
Keys and grain: Lists the primary key and any candidate keys in the table, and the data grain of the table.
Number of initial records: The count of the number of expected rows in the table.
Yearly record growth: Estimates the number of additional rows that will be added to the table each year.
Source entity mapping: Lists the source tables or files that are involved in the population of the table.

For each destination column, track the following items:

Destination table name: The table and schema that the column is in.
Destination column name: Name of the table column.
Column description: Describes the usage of the column within the source.
Data type description: Describes the expected data types and ranges used in the column.
Usage type: Describes the type of usage for the column, such as a primary key, candidate key, foreign key, auditing column, descriptor column, and so on.
Source mapping: Lists the source fields used to populate the column, and describes the detailed mapping and transformations needed from the source to the data elements in the column. This is crucial for ETL processing.

Just as a review, this discussion only addresses the tracking of data elements, and is supplementary to the overall solution documentation.

You may have other related data documentation, or you may choose to include additional items in your documentation, such as the partitioning strategy of the destination table, or other pertinent details about source data availability or data processing.
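If you prefer to keep this mapping documentation queryable rather than only in a spreadsheet, it can live in a small tracking table. The following is only a sketch with assumed names, mirroring the items listed above:

-- Hypothetical structure for capturing source-to-destination element mappings
-- (illustrative only; adapt the columns to your own documentation standards).
CREATE TABLE dbo.ElementMapping
(
    MappingID            int IDENTITY(1,1) PRIMARY KEY,
    SourceTableOrFile    nvarchar(256)  NOT NULL,   -- where the element originates
    SourceColumnOrField  nvarchar(128)  NOT NULL,
    SourceDefinition     nvarchar(1000) NULL,       -- business meaning in the source
    DataProfileNotes     nvarchar(2000) NULL,       -- completeness, ranges, uniqueness
    DestinationTable     nvarchar(256)  NOT NULL,
    DestinationColumn    nvarchar(128)  NOT NULL,
    TransformationRule   nvarchar(2000) NULL,       -- detailed mapping/transformation logic
    LastReviewedDate     date           NULL
);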

Package Design Patterns

The way you design your packages is important for team development, deployment, future changes, ongoing support, and maintenance. A better approach is available through the use of modular packages and master packages.

Modular Packages

Instead of putting a lot of your data processing in a single package, focus your packages so that the processing logic contained in them is manageable, and the precedence is not overly complicated. This is called modular package development, and it provides several benefits; for example, a single modular package is easier to unit test. What does a modular package look like?

Package designs vary, depending on the solution and requirements. But a good general rule is to keep the components visually manageable in the package designer without requiring a lot of scrolling to find aspects of the solution. Figure shows a package control flow that demonstrates a modular package. In all, ten tasks are in the control flow, which is a very manageable group.

Master Packages

The way to still keep your complicated order of data processing or precedence is to coordinate the execution of the modular packages through a master package.

A master package (or parent package) uses the Execute Package Task to run the modular child packages in the proper order. Logging and auditing can be included to help facilitate overall execution auditing and administrative support. Figure shows an example parent package.

In many solutions, you will need to execute a set of packages at different times and with different precedence. The master package allows this, and helps implement a rollback and checkpoint system. Chapter 2 provides more coverage of this topic when discussing the building of a package framework.

Server Hardware

Here are some general principles to follow concerning the server hardware.

Larger solutions are really dependent on so many factors, and you should also consider scale-out ETL (which is discussed in a later chapter). Thus, recommending a general rule — especially if you are building an SSIS server that will run all your enterprise ETL operations — is difficult. Again, there is so much context driving the hardware requirements that you must evaluate your situation and customize a recommendation based on what you see in your solution.

If your budget is restricted, and your ETL process is not mission-critical to your business, then your test environment can be a scaled-down version of your production servers or a virtual server. One option to save on cost is to use the same server for both development and testing, and then you may be able to use equivalent hardware.

Use different database instances and packages for your testing. One aspect of setting up test servers that is critical is that the number and naming of your volumes (drive letters) must match between all your environments. In other words, if you have a G: drive on your production server, your development and test servers should also have a G: drive. Doing so can alleviate a lot of deployment pain. The next section clarifies the execution location of your environment. The more throughput you can generate with your disk subsystem, the better. How do you estimate hardware needs?

Doing so is very difficult at the start of a solution, but if you can get a gut sense of the record volume and growth, then you can probably do it. A good DBA will be able to help estimate the table sizes by taking the estimated row width, multiplying that number by the expected rows, and then adding some overhead for indexing and growth. You must consider a lot of factors, such as the SQL Server data page width and free space. You can get a lot more throughput for the drives because you can stripe more drives.
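A rough worked example of that arithmetic, using SQL Server's approximate rows-per-page estimate (all numbers are made up for illustration, and real estimates must also account for indexes, fill factor, and variable-length columns):

-- Back-of-envelope table sizing for a hypothetical fact table.
DECLARE @RowWidthBytes  int          = 120;        -- estimated average row width
DECLARE @ExpectedRows   bigint       = 50000000;   -- expected rows after a few years
DECLARE @Overhead       decimal(4,2) = 1.30;       -- allowance for indexing and growth

-- SQL Server data pages hold roughly 8,096 bytes of row data each
DECLARE @RowsPerPage    int    = 8096 / (@RowWidthBytes + 2);
DECLARE @Pages          bigint = (@ExpectedRows + @RowsPerPage - 1) / @RowsPerPage;

SELECT
    @RowsPerPage                                AS RowsPerPage,
    @Pages                                      AS DataPages,
    (@Pages * 8) / 1024.0 / 1024.0              AS RawDataSizeGB,      -- pages are 8 KB
    (@Pages * 8) / 1024.0 / 1024.0 * @Overhead  AS SizeWithOverheadGB;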

Set this expectation upfront! SANs come at a higher price, but have the benefit of adding better redundancy, caching, controller throughput, more drives in a stripe, fault tolerance (clusters in different cities), advanced disk mirroring (where a mirror can be split and mounted on other servers), dual read in a mirror (where both drives in a mirror can be read at the same time), and so on.


DAS (direct-attached storage) has the benefit of cost (a fraction of the cost of a SAN), and can achieve similar, and in some cases faster, throughput, but without the caching, and with easier control of the setup and configuration. A lot of varying and seemingly contradictory recommendations are out there, but each is based on a set of assumptions for different types of data-centric solutions. Be careful to understand those assumptions in your decision. Your objective is to leverage the servers and network bandwidth that can handle the impact load from package execution, but without impacting resources that need primary performance.

When it comes to where a package should be executed, there is no absolute answer. However, some general principles can direct one architecture design over another.

Package Storage Location Versus Execution Location

When it comes to running a package, a difference exists between where a package is run and where that package is stored. You can store a package as a file and put it in a file system folder, or you can load a package into the msdb system database in SQL Server. Either way, when the package is executed, the storage location is merely where the metadata of that package lives.

The package is loaded from that source location through an execution method, and run on the machine where the execution is kicked off.

In other words, if you are running a package through the command line or through a job, the package will run on the machine where the command line is called, or the job runs. Figure shows the storage location server on the left and the execution server on the right.

The package is executed on the server on the right, even though the package is stored on the server on the left. To be sure, the workflow coordination will still be handled on your SSIS execution machine, but the actual SQL code would be run on a different machine. This is different from the Data Flow Task, which runs on the machine where the package is executed.

Package Execution and the Data Flow

For your packages that have data flows (which is probably most of your packages), you should understand what happens to the data based on where you execute that package with the embedded data flow.

Additionally, understanding where the data flow execution impact will be dictates where you decide to run your packages. The data flow impact on the package execution server involves the resources needed to manage the data buffers, the data conversion requirements as data is imported from sources, the memory involved in the lookup cache, the temporary memory and processor utilization required for the Sort and Aggregate transformations, and so on.

Essentially, any transformation logic contained in the data flows is handled on the server where the package is executed. The following examples are common configurations for where data is sourced, the destination location, and where packages are executed.

Obviously, data flows can be varied and complex with multiple sources and destinations, so this simplification provides the framework with single-source locations and single-destination locations.

Packages Executed on the Source or Destination Servers

The most common example is when a package that contains a data flow is executed on either the source or destination server, assuming they are separate. Figure shows the data path and impact on the environment when the package is executed on the machine where the source data is located. This approach has both benefits and drawbacks. It is very useful if you have users querying and using the destination during the day, and your SSIS processing requirements can be handled through nightly processes.

In the figure, an SSIS package is executed on a second server, and, in this diagram, both the source and destination are on the same machine. However, this would reduce the resource impact on the data source and destination server. Using a standalone SSIS server makes more sense if your sources and destinations are not on the same physical machines. The figure highlights this architecture. This architecture also provides a viable SSIS application server approach, where the machine can handle all the SSIS processing packages no matter where the data is coming from and going to.
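One practical implication of this discussion is that a package runs wherever it is invoked, so scheduling it through SQL Server Agent on the designated SSIS server is a common way to pin down the execution location. The following sketch is illustrative only (job name, package path, and options are hypothetical); the @command value holds dtexec-style arguments that the SSIS job subsystem passes to the runtime:

-- Create a SQL Server Agent job that runs a package on the server hosting this
-- Agent instance (the execution location), regardless of where the package is stored.
USE msdb;
GO

EXEC dbo.sp_add_job
    @job_name = N'Nightly - Load Data Mart';      -- illustrative name

EXEC dbo.sp_add_jobstep
    @job_name  = N'Nightly - Load Data Mart',
    @step_name = N'Run master package',
    @subsystem = N'SSIS',
    @command   = N'/FILE "\\fileserver\ssis\MasterPackage.dtsx" /CHECKPOINTING OFF /REPORTING E';

EXEC dbo.sp_add_jobserver
    @job_name = N'Nightly - Load Data Mart';      -- target the local server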

Design Review

As you can see, you have a lot to juggle at once when you are planning and building an ETL or data-integration solution.

In many cases (such as the infrastructure), all you need to do is set the ball in motion with your IT hardware group. The project is in motion, expectations are set with the stakeholders, and you have laid the foundation for a successful project. The next step is about designing and implementing your SSIS solution. Just when you think that you have a handle on things, you now have to dive into the details of data and processes!

This section launches you into your SSIS design. The next couple chapters provide you with the underlying SSIS support structure for your solution — the storage, deployment, and management framework.

Next, as you delve into the data, you will be dealing with source data, whether in files or extracted from a relational database management system (RDBMS), and often requiring a data-cleansing process. Chapters 4—6 cover files, data extraction, and cleansing.

The final chapters address advanced package scalability and availability, advanced scripting for those really complex scenarios, and performance tuning and design best practices.

Setting the Stage: Management and Deployment

One of the very first things you must design is an SSIS package template that integrates with a management and auditing environment. You must do this upfront, because retrofitting your logging and auditing while your packages are being designed is very difficult.

Chapter 2 examines building an SSIS management framework for this purpose. A management framework is about knowing the what, when, and why of when a package executes.

You may not need to include all variables in the audit trail of changing values.

This section looks at the code for creating the objects and adding data, beginning with descriptions of the tables and what data each column contains.
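The book's actual framework objects are covered in Chapter 2. Purely as an illustration of the shape such objects can take, here is a hedged sketch (table, column, and procedure names are hypothetical): an execution log table plus start and end procedures that a master or child package would call from Execute SQL Tasks.

-- Illustrative package-execution audit objects (not the book's framework).
CREATE TABLE dbo.PackageExecutionLog
(
    ExecutionLogID  int IDENTITY(1,1) PRIMARY KEY,
    PackageName     nvarchar(200) NOT NULL,
    ParentLogID     int NULL,                                  -- links a child run to its master
    StartTime       datetime NOT NULL DEFAULT (GETDATE()),
    EndTime         datetime NULL,
    ExecutionStatus varchar(20) NOT NULL DEFAULT ('Running')   -- Running / Succeeded / Failed
);
GO

CREATE PROCEDURE dbo.LogPackageStart
    @PackageName nvarchar(200),
    @ParentLogID int = NULL
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.PackageExecutionLog (PackageName, ParentLogID)
    VALUES (@PackageName, @ParentLogID);
    SELECT SCOPE_IDENTITY() AS ExecutionLogID;   -- captured into an SSIS variable
END
GO

CREATE PROCEDURE dbo.LogPackageEnd
    @ExecutionLogID  int,
    @ExecutionStatus varchar(20)
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE dbo.PackageExecutionLog
    SET EndTime = GETDATE(), ExecutionStatus = @ExecutionStatus
    WHERE ExecutionLogID = @ExecutionLogID;
END
GO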
