In the two previous blog posts (What Does the “P” in PLM Really Mean? and What Does the “L” in PLM Really Mean?) I discussed the object being managed within the product lifecycle management (PLM) methodology. Now it is time to move on to the last word—“management.” Management is such a general term nowadays that simply looking at it won’t give you much idea of what it is about in the PLM context. If your organization is looking for a PLM solution, investigating the functionality that various PLM solutions provide will help you better understand what a PLM system should be handling. However, I’d suggest establishing some high-level ideas about what a PLM system should be able to manage before you are overwhelmed by the functionality flood.
Improving the Productivity Related to Product Definition
Product definition information determines what your offerings to your customers are, and how you will accomplish those offerings. As such, productivity in generating, distributing, and consuming product definition information is critical to today’s businesses. PLM tools such as computer-aided design (CAD), computer-aided manufacturing (CAM), and computer-aided engineering (CAE) are the most direct contributors to the productivity of generating product definition information. However, these are not enough. It is not rare to see a manufacturer send part drawings to a supplier through e-mail and the latter have a hard time loading the files, or to see a design engineer spend an hour finding an existing drawing that he could simply re-create in half an hour. These two situations demonstrate that the distribution and consumption processes for product definition information should be improved—especially now that generation has become so high-speed. Hence, one of the top priorities of a PLM system should be making sure not only that products can be designed and developed in an efficient manner but also that product information can be retrieved whenever and wherever needed.
Maintaining the Integrity of Product Definition Information
If you have ever worked in an organization that relies on shared folders to store electronic documents, you may have experienced a situation where you were working on an old version of a document without noticing that it had been updated by one of your colleagues. Inconsistency issues can become unmanageable when an organization is working on a product with thousands of parts, hundreds of people, and many suppliers involved. Thus, maintaining a high level of integrity of product definition information is another priority of a PLM system. I am in favor of the slogan used by a product data management (PDM, generally considered to be the predecessor of PLM) vendor I worked for: “we make sure that your product data are consistent, up-to-date, and secure.” Ten years later, I still believe that this should be the bottom line of a PLM system—in terms of the integrity of product definition information.
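To make that bottom line a bit more tangible, here is a minimal, purely illustrative Python sketch of the kind of check-out/check-in control a PDM or PLM vault applies to a controlled document. The class and field names are hypothetical and not tied to any particular product.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ControlledDocument:
    """A vaulted document with a single current revision and an edit lock."""
    name: str
    revision: int = 1
    checked_out_by: Optional[str] = None
    history: list = field(default_factory=list)

    def check_out(self, user):
        # Only one user may edit at a time, which prevents silent overwrites.
        if self.checked_out_by is not None:
            raise RuntimeError(f"{self.name} is locked by {self.checked_out_by}")
        self.checked_out_by = user

    def check_in(self, user, change_note):
        # A check-in creates a new revision, so everyone else sees the update.
        if self.checked_out_by != user:
            raise RuntimeError("only the user who checked the document out may check it in")
        self.history.append((self.revision, change_note))
        self.revision += 1
        self.checked_out_by = None

# A colleague cannot quietly overwrite a drawing you are working on.
drawing = ControlledDocument("bracket-A12.dwg")
drawing.check_out("alice")
drawing.check_in("alice", "added chamfer per ECO-042")
print(drawing.revision)  # 2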
Facilitating Collaboration Throughout the Entire Product Lifecycle
In my earlier post about the “L” (lifecycle) in PLM, I discussed that the beauty of the PLM approach is the holistic view of the entire lifecycle. Today’s market requirements demand a shortened product lifecycle, but also a more complicated distribution of work in order to bring a product to the market and serve customers successfully. This means that high-quality collaboration amongst different parties becomes a winning factor. Certainly, the productivity and integrity factors I just mentioned are components of high-quality collaboration. Besides these, visibility and interoperability are also critical in facilitating collaboration. Simply speaking, visibility allows different parties to retrieve product information with the same interpretation as the creators’, and interoperability allows users not only to see the information but also to operate on it for collaboration purposes. Considering the complicated IT landscape of many enterprises (e.g., multiple CAx [CAD/CAM/CAE] tools and management systems in use, global operations, and various IT systems on the partner side), achieving high visibility and interoperability is quite challenging.
Providing an Environment for Product Sustainability
Sustainability is now a big word in companies’ strategic planning. A simple rule: if a company wants to stay in business forever, the products (and/or services) it provides should be accepted by the market forever. The PLM approach can support product sustainability in two ways. On the one hand, the development and delivery of a product should be history-conscious, which means all activities within a product lifecycle should be traceable in order to achieve continuous product improvement. On the other hand, the development and delivery of a product should be future-oriented, which means that the impact a product imposes on the environment and the long-term profitability of the company should be taken into consideration as early as possible. For more information about PLM and sustainability, please read the blog post What Can PLM Do for Green?
The above four components are what I believe to be the top considerations, and they conclude this last installment of the “What Does…” series. However, I’m sure that you will discover more on your own. In fact, every organization has its own specificities. While you are planning, implementing, or improving your PLM system, you should develop a more precise understanding of what the “P,” “L,” and “M” really mean—specifically to your organization.
Sunday, April 11, 2010
What Does the “L” in PLM Really Mean?
In an earlier post, What Does the “P” in PLM Really Mean?, I discussed what the word “product” means in product lifecycle management (PLM). In this post, I am going to move on to the next letter, “L” for lifecycle.
According to Merriam-Webster, one definition of lifecycle is “a series of stages through which something (as an individual, culture, or manufactured product) passes during its lifetime.” In a typical manufacturing environment, these stages include conception, design and development, manufacture, and service. Ideally, a PLM system should manage the entire lifecycle, covering all of these stages. Originally, however, the concept of PLM was designed to address product definition authoring and, later on, product definition data management issues for the design department. Not every stage receives equal attention under the PLM umbrella, and the application maturity of each stage is not yet at the same level.
Conception is the earliest stage of a product lifecycle. Within this stage, ideas are the raw input and development projects or tasks are the output. New ideas for product development come from different sources such as research work, newly available technologies, brainstorming sessions, customer requirements, and more. Some of the ideas might be incorporated into existing products as new features; some might not be feasible at the moment; a large number might simply be eliminated; the rest (grouped or alone) might become new concepts, and some of them might finally reach the development level after evaluation. Briefly, the conception stage is a process of idea attrition—only the good ones get to the next step. In this area, management applications are not quite mature and the adoption rate is relatively low. Part of the reason might be that conception is strongly associated with creativity, and people are not yet convinced that this can be handled well by machines.
Product design and development is the main stage where abundant product definition information is generated. When a concept becomes a development project, people need tools to define not only what a product should be (product design), but also how it should be manufactured (engineering design). Computer-aided design (CAD), computer-aided manufacturing (CAM), and computer-aided engineering (CAE) are all well-recognized PLM tools that support the definition, as well as some execution processes. The adoption of PLM tools increases engineers’ individual productivity tremendously—but they also need a platform to collaborate internally (with peers and other departments inside the organization) and externally (with development partners, suppliers, and customers). The application of PLM for the design and development stage is the most mature. Starting a PLM initiative from this stage is a sound approach for most organizations because it is the stage that produces the majority of product definition information.
Manufacture is a joint task performed by enterprise resource planning (ERP), PLM, and other systems such as manufacturing execution systems (MES). ERP takes the lead from the planning and control angles, and MES manages and monitors the production processes on the shop floor. The reasons for having a PLM system in place at this stage are:
1. PLM provides the information on what to produce and how to produce it.
2. A tight connection between PLM and ERP also helps companies develop better products that can be produced in a better way.
Service includes the marketing, sales, distribution, repair and maintenance, retirement, and disposal processes related to a product. The quality of these services relies on the accuracy, integrity, and timeliness of the product information that is provided. In general, the more complicated a product is, the more important it is to have product information available for these service activities. Another reason for having a PLM system is increasing environmental compliance requirements. For example, when a product enters the last stage of its lifecycle, the manufacturer has to make sure that the disposal procedure can be handled properly so that the disposition has minimal impact on the environment—especially when it is an asset type of product that lasts years or even decades. Instead of hoping that the user will keep the manual shipped with the product, the disposal instructions have to be stored and managed securely within the manufacturer’s PLM system.
Above, I discussed the product lifecycle stage by stage. However, the PLM methodology won’t reach its full potential unless you take a holistic view of all the stages. Although some stages mainly generate product definition information and others mainly consume it, it is more appropriate to think of every stage as both a consumer and a provider of product definition information. The reason for having a PLM system is to facilitate this information sharing. Thus, in theory, a comprehensive PLM system must cover all these stages. In practice, not all PLM solutions support the entire product lifecycle, and the priorities placed on managing different lifecycle stages differ. Nevertheless, managing the entire product lifecycle should at least be a long-term vision.
The CyberAngel: Laptop Recovery and File Encryption All-in-One
Background
According to the Computer Security Institute's 2003 Computer Crime and Security Survey, theft of private or proprietary information created the greatest financial losses for the survey respondents. If you are a medical institution, government agency, or financial institution, information theft can result in violation of patient privacy regulations, loss of customer credit card numbers, unauthorized financial transactions, or disclosure of national security secrets.
While all computers are vulnerable to information theft, laptops are particularly vulnerable due to their portability and ease of theft. Most servers are locked in racks in data centers; laptops, however, are typically left out on desks where access is easy. If an office visitor walked out of the office with a laptop under his or her arm, an unknowing receptionist would likely assume that it was the visitor's own laptop and not question it. If your laptop was stolen, you'd want it back. The CyberAngel, made by CyberAngel Security Solutions (CSS), is a product that claims to locate stolen laptops and return them to you. CSS reports an 88 percent recovery rate for returning stolen and lost laptops to those who have licensed its software. Relevant Technologies took the CyberAngel into our labs to see if version 3.0 qualified for our acceptability rating.
Installation and Use
The CyberAngel was easy to install, and the entire installation took less than ten minutes, including the time it took to reboot the test system. With version 3.0, the CyberAngel includes a new stealthy, secure drive: a logical drive protected by strong encryption where you can put all your confidential and classified information. During the installation process, you are prompted to select an encryption algorithm to protect your secure drive. The choices available are:
* Rijndael 128 bit
* Rijndael 256 bit
* Blowfish 128 bit
* Blowfish 448 bit
* Twofish 128 bit
* Twofish 256 bit
* DES 128
* DES 56
A nice thing about the installation program is that it provides background information on each of the encryption algorithms to help you decide which one to select. Government agencies will like the fact that the NIST AES standard is supported.
Figure 1. Selecting Your Encryption Algorithm During Installation
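For readers who want to see what 256-bit Rijndael (AES) encryption looks like in practice, below is a minimal Python sketch using the open-source cryptography package. To be clear, this is not the CyberAngel's own implementation, and the file names are purely illustrative; it simply shows the kind of symmetric encryption a secure drive applies to data at rest.

# Illustrative only: file encryption with AES-256 (Rijndael), the NIST AES
# standard mentioned above. Not the CyberAngel's code path; a generic sketch
# using the third-party "cryptography" package.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # in practice, derived from the user's password
nonce = os.urandom(12)                     # a fresh nonce for each file

plaintext = open("PatientRecords.xls", "rb").read()       # hypothetical file
ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)

with open("PatientRecords.xls.enc", "wb") as f:
    f.write(nonce + ciphertext)

# Decryption reverses the steps; without the key, the call raises an
# exception rather than revealing readable data.
blob = open("PatientRecords.xls.enc", "rb").read()
recovered = AESGCM(key).decrypt(blob[:12], blob[12:], None)
assert recovered == plaintext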
After the CyberAngel finished installing, we began testing the secure protected drive by placing some would-be confidential information (a spreadsheet called PatientRecords.xls) on it, to see if an unauthorized user could access it. To pose as an unauthorized user, we rebooted the system and failed to provide the correct logon password after reboot. The secure drive was not visible in any way, and when we poked around on the laptop to try to find it, we couldn't find any signs of it, or of the spreadsheet dubbed PatientRecords.xls. We then rebooted the system and entered the correct password, and voila, our secure drive and spreadsheet were back. Between the time we entered the wrong password, rebooted, and entered the right password, an alert had already been e-mailed to us notifying us that someone had attempted to use the test laptop without proper authorization. We were sent the 24 x 7, 800 number to call at the CyberAngel Security Monitoring Center if we suspected that the laptop had been stolen.
When the alert e-mail was mailed to us, it included a "Created" timestamp, but not a "Sent" timestamp. We're not sure why the CyberAngel monitoring server did not register a "Sent" timestamp with the messaging server; however, the body of the e-mail did include a correct timestamp of the unauthorized access. This is a trivial problem, but we'd like to see it fixed in the next version.
When using the secure drive, you need to actually "move" your files into the drive to make them secure. Leaving a copy of the file on your insecure drive will defeat the purpose of using the secure drive. For documents that you'd like to keep secret, you'll also have to be sure that temporary and recovery files are kept in the secure drive. For Microsoft Word or Excel, this is easy enough to do by going into the Tools > Options menu and modifying the default path for the AutoRecover and Documents directories.
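This kind of housekeeping check can also be scripted. The following Python sketch is not part of the CyberAngel; it simply scans an ordinary, unencrypted location for stray copies or Office temporary files of a document that is supposed to live only on the secure drive. The paths and file patterns are assumptions for the example.

import fnmatch
import os

# Patterns for the sensitive document and the temporary/recovery files that
# Word or Excel may leave behind (assumed names for this illustration).
SENSITIVE_PATTERNS = ["PatientRecords*.xls", "~$PatientRecords*.xls", "*.asd"]

def find_stray_copies(root):
    """Walk a directory tree and report files matching the sensitive patterns."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if any(fnmatch.fnmatch(name, pattern) for pattern in SENSITIVE_PATTERNS):
                hits.append(os.path.join(dirpath, name))
    return hits

for path in find_stray_copies("C:\\Users"):
    print("Possible unprotected copy:", path)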
Table 1. Corporate Information
Vendor: CyberAngel Security Solutions, Inc.
Headquarters: 475 Metroplex Drive, Suite 104, Nashville, TN 37211
Product: The CyberAngel
Customer Scope: Financial, Government Agencies, Medical Establishments
Industry Focus: Security for laptops and confidential information
Key Features: Laptop recovery software, secure encrypted drive, 24 x 7 unauthorized access alert service, configuration manager
Web site: http://www.thecyberangel.com
Contact Information: 800-501-4344
The user documentation also provides instructions on how to modify your Outlook preferences so that you can move all of your e-mail to the secure drive. Even if you don't anticipate your laptop getting stolen, it's nice to know that your e-mail is secure, encrypted, and not accessible unless you know the password to unlock the secure drive. Encrypted e-mail storage was a pleasant surprise, since it was not a feature we were expecting to see.
You can secure applications, such as a VPN client, by moving them into the secure drive. Once applications are moved into the secure drive, an unauthorized user who fails to authenticate properly will not even see that the application exists on that computer. Applications can also be installed directly on the secure drive.
Figure 2. The CyberAngel Configuration Manager
Though it's not possible for you to configure the alerts to be sent to a second e-mail address yourself, we were advised by CSS that this can be set up by calling the CyberAngel Security Monitoring Center. Users may want to set up the alerts to be sent to a cell phone as well as to a traditional e-mail account; additional notification paths can be added or changed by calling the CyberAngel Security Monitoring Center. If the laptop contains classified information, the alert could be sent to a U.S. federal agency's Computer Security Incident Response Center (CSIRC). We tested the port locking feature by entering a wrong password into the password authentication box and then attempting to HotSync some data to a Palm Pilot. The password violation blocked all the COM ports, preventing the HotSync from taking place. The port locking feature also prevented us from initiating outgoing communications. However, in stealth mode, the CyberAngel still initiated a call back to the recovery server to report the laptop's geographic location, verifying that the COM ports are locked to unauthorized users but not to the CyberAngel recovery software.
Recommendations
The CyberAngel has evolved into much more than laptop recovery software, and it works as advertised. You can secure documents, applications, and even your e-mail. You can prevent unauthorized remote access to servers or accounts, and restrict information transfer to PDAs or handhelds. Medical establishments that need to protect patient information as required by the Health Insurance Portability and Accountability Act (HIPAA) will find the CyberAngel an easy HIPAA compliance solution to deploy on laptops. U.S. federal agencies can prevent embarrassing losses of laptops by deploying the CyberAngel, and can also develop new security policies around this product by requiring that confidential data be stored on the secure drive. Agencies working on complying with the Federal Information Security Management Act (FISMA) will find the CyberAngel potentially useful. Financial institutions can also use the CyberAngel to help comply with the privacy regulations related to the Gramm-Leach-Bliley Act (GLBA).
It would be great if, in the next version, the CyberAngel came with documentation targeted specifically at HIPAA, FISMA, and GLBA end users, with specific examples of what information to put on the secure drive. There is a lot of potential to use the CyberAngel to comply with these information security laws; however, without focused documentation on HIPAA, FISMA, and GLBA, some users may not see the potential at first glance.
One license costs $59.95, and volume discounts apply for packages of multiple licenses. CyberAngel Security Solutions, Inc. also applies a 10 percent discount for U.S. government agencies and a 20 percent discount for educational institutions and non-profit organizations.
Program Testing Methodology Part One: Preparing for Testing
Introduction
Before any system can be completely implemented on a production computer, the analysts and programmers working on the system must be able to state unequivocally that the programs work exactly as they were designed to work and that any errors or mistakes found in the data to be processed by the system will be handled properly. Since testing is quite unpredictable in terms of results and, in some cases, the availability of the hardware required for testing, it is difficult to establish a detailed day-to-day testing schedule in advance. It should be possible, however, to estimate with some accuracy the time which will be required to test any given program. The most likely place for deadlines to "slip" during implementation is program and systems testing, so when testing schedules are established, adequate time should be allocated for testing.
To effectively test a program, the systems analyst should establish procedures which are to be followed. Basic rules for program testing and debugging that should be followed are summarized below.
1. Individual programs should be compiled and all diagnostics removed prior to using test data.
2. The programmer should create test data that first tests all the main routines.
3. Additional test data should be created to assure that every routine and every instruction has been executed correctly at least once.
4. Program testing should include testing with as many types of invalid data as are likely to occur when the system is in production.
5. After each program has been individually tested and debugged, related programs in the system should be tested as a unified group. This is called "link" or "string" testing (a toy illustration follows this list).
6. A final "systems test" should be performed using data prepared by the systems analyst and, in some cases, data which has been previously processed through the "old" system.
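To make rule 5 concrete, here is a small, hedged Python sketch of a "string" test in which the output of one program feeds the next and the pair is verified end to end. The program names and data are invented purely for illustration.

# Toy "string" test: extract() stands in for the first program in the string
# and summarize() for the second; after each is unit tested, the chain is
# tested as a unified group.
def extract(raw_lines):
    """Program 1: parse raw transaction lines into (code, amount) records."""
    return [(line[0], int(line[1:])) for line in raw_lines]

def summarize(records):
    """Program 2: total the amounts for each transaction code."""
    totals = {}
    for code, amount in records:
        totals[code] = totals.get(code, 0) + amount
    return totals

raw = ["A100", "B250", "A50"]
assert extract(raw) == [("A", 100), ("B", 250), ("A", 50)]   # unit test of program 1
assert summarize(extract(raw)) == {"A": 150, "B": 250}       # string test of the group
print("string test passed")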
This is Part One of a two-part note.
Part One will discuss the roles of programmers and analysts during testing; how to test individual programs; and what type of test data should be created to ensure a successful system implementation.
Part Two will discuss the modes of testing and management and user approval.
Testing Individual Programs
The testing of each individual program, called unit testing, should be handled by the programmer who wrote the program. The amount of testing required to certify that a program is ready for production, and how much reliability can be attributed to a program, can be controversial. However, from a programmer's standpoint, a program should never enter systems testing or be put into production if the programmer has any doubt that the program will work.
Programs should first be compiled without using test data in order to eliminate all diagnostics in the program due to programming errors.
After a "clean" compilation is obtained, that is, one without any compilation diagnostics, the programmer should then desk-check his or her program. Desk-checking refers to the process of "playing computer" with the source code listing, that is, following every step in the program and analyzing what will take place as each routine is processed. Desk-checking is probably the most useful tool in debugging a program, yet it is the most neglected and abused. Many programmers, immediately upon obtaining a compilation with no diagnostics, resubmit the program for a test run with test data. This is not a good testing technique, as time should be taken to review the source listing.
Desk-checking has an added benefit of re-familiarizing the programmer with the program. In a complex program, there may be a period of ten or more weeks between the time the program is started and the time it is compiled. During this time the programmer could forget some of the routines or other portions of the program which were written. When desk-checking, however, the programmer must go completely through these routines again. Thus, the routines will be refreshed in the programmer's mind. This can be of great aid if the program fails because the programmer is more likely to be able to isolate the problem faster and with more accuracy than if he or she had not reviewed the program in detail.
Creating Test Data
After the program has been desk-checked, it must be tested on the computer using test data. To properly test the program, there must be good test data available. In all applications, the programmer should test the program with data designed specifically to exercise the routines within the program. Test data should be designed to test the main routines first. When it is found that the main routines produce the desired output, additional test data should be created to test all the other routines. This data should contain both valid and invalid values so that both the normal processing routines and the error routines of the program are tested. In addition, the test data should be designed so that the limits within the program are tested. For example, the data should contain both the minimum and maximum values which can be contained within a field, and should allow maximum and minimum values to occur in any intermediate fields which are to contain totals. There should be variations in the formats which the program can process so that all possibilities are covered. All of the codes which can be used in records should be contained in the test data so that the various routines which are called based upon different codes can be tested.
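As a small illustration of the boundary-value and code testing just described, here is a hedged Python sketch; the field limits, transaction codes, and the validate_record function are hypothetical, invented only to show the shape of such test data.

# Hypothetical record validator and the boundary-value test data that
# exercises it: minimum, maximum, just outside the limits, and an invalid code.
VALID_CODES = {"A", "B", "C"}

def validate_record(quantity, code):
    """Accept quantities from 1 to 9999 and only the known transaction codes."""
    return 1 <= quantity <= 9999 and code in VALID_CODES

test_cases = [
    # (quantity, code, expected result)
    (1,     "A", True),    # minimum valid value
    (9999,  "B", True),    # maximum valid value
    (0,     "A", False),   # just below the minimum
    (10000, "C", False),   # just above the maximum
    (50,    "Z", False),   # invalid code exercises the error routine
]

for quantity, code, expected in test_cases:
    result = validate_record(quantity, code)
    assert result == expected, f"({quantity}, {code}) returned {result}"
print("All boundary and code tests passed")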
Another important area which must be tested is the files which are to be used by the program. If an indexed file is used, the file must be loaded and, in addition, must be tested using data that adds records to the file and deletes and changes records within it. When a direct file is used, the algorithm used to determine record addresses must be tested, and the routine which handles synonyms must be heavily tested. Any time data is stored in a file on media such as disk, CD, or cartridge, whether the file is relational, sequential, indexed sequential, or direct, the data should be "dumped" using a utility program so that the file contents can be examined in detail. A programmer cannot assume that the data is correct simply because the file was built, or because the data in the file was used successfully as input to another program. The data must be closely examined.
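The "dump and examine" advice can be followed with a very small utility program. The Python sketch below is a generic example, not a specific vendor tool; the record length and file name are assumptions for the illustration.

# Minimal file "dump" utility: print each fixed-length record both as raw
# bytes (hex) and as decoded text so the contents can be inspected directly.
RECORD_LENGTH = 80  # assumed record size for this example

def dump_file(path):
    with open(path, "rb") as f:
        record_number = 0
        while True:
            record = f.read(RECORD_LENGTH)
            if not record:
                break
            record_number += 1
            print(f"record {record_number:5d} hex : {record.hex()}")
            print(f"record {record_number:5d} text: {record.decode('ascii', errors='replace')}")

dump_file("master_file.dat")  # hypothetical file name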
Most of the responsibility for program testing rests with the programmer who wrote the program. The programmer should design the test data, conduct the tests, and check the output from the tests. The analyst, however, can play an important role in program testing by first attempting to ensure that good testing techniques are followed and then reviewing and making suggestions to the programmer concerning data to be tested. The analyst can look at on-line screens, reports, and file dumps to ensure, early in the program testing, that the results correspond to what is expected. If there are variances, they can be corrected. The analyst should look at the test data which is being used to determine if he or she sees any areas which should be tested and which have been overlooked by the programmer. The analyst should not dictate to the programmer what data should be used to test the program. The analyst serves in an advisory capacity. The only time this may not be true is if the programmer is having difficulty with the program.
Summary
To effectively test a program, the systems analyst should establish procedures for testing and debugging. The programmer who wrote the program should then be responsible for conducting unit tests, desk-checking, and creating the test data. With clearly defined roles, this critical aspect of implementation can be handled successfully.
This is Part One of a two-part note.
Part Two will discuss the modes of testing and management and user approval.
Point of Sale: To Stand Alone or Not?
Introduction
When evaluating a point of sale (POS) solution, there are generally two approaches: best-of-breed solutions, and integrated solutions. Both have strengths and weaknesses, depending on the organization's information technology (IT) infrastructure. Retailers that have an existing back-office system should evaluate whether it is better to replace their legacy system or to choose a best-of-breed solution.
For retailers that have neither a back-office system nor a legacy POS system, the question is, should they purchase a stand-alone POS system or not? In deciding between a POS system that is stand-alone and one that is not, the organization must first understand what a POS system is. A POS system, also known as a point-of-purchase system, is composed of two main parts: software and hardware.
It will be helpful to first provide an overview of the core and non-core areas of a POS software system, as well as a brief definition of the POS hardware component. This will help to determine whether a stand-alone POS solution should or should not be purchased.
Core Areas of POS Systems Software
Due to the diversity of the retail industry, different POS system features are required for different types of retailers. In assessing these features, the following six have emerged as best-practice core components, or must-have features, regardless of the intended application of the POS system.
1. Transaction management: The transaction management component includes all the information required to complete a transaction. This component should capture key transaction data, such as sales, sales cancellations, voids, refunds, purchases of gift certificates, layaways, service transactions, creation of special orders, and the like. The transaction management component should validate item information, automatically calculate the total purchase amount, and process the payments. This enables sales associates to give their full attention to properly serving the customer, since processing a sale then only requires them to scan the barcode and ask for the method of payment (a minimal sketch of this flow appears after this list).
2. Price management: The price management component allows a store manager or store employee to modify the retail price of an item. POS systems should allow modification of a retail price for different reasons, such as discounts on damaged items, discounts after negotiations, or competitive price matching. The price management module should track these retail price changes by recording a reason code, the total discount, the employee number, and so on. This module should also be able to generate a report for auditing purposes.
3. Register management: The register management component includes processes for cash opening procedures, cash closing procedures, and cash balancing procedures. Moreover, this module consists of the management of register opening funds, paid-in transactions, paid-out transactions, tenders, currencies, and taxes. Register management should track the cash flow within the business day, and should flag any unusual events. This enables a store manager to monitor and reduce employee theft.
4. Inventory management: The inventory management component includes item localization tools, physical inventory procedures, and inventory adjustments. This ensures that the store's inventory is up to date. It also helps employees to locate items at the store or corporate level. In other words, by knowing where the inventory is located and by having accurate information about the quantity on hand, this component allows employees to close sales and to increase customer service and satisfaction.
5. Customer relationship management: The customer relationship management (CRM) component has the functionality to manage customer interactions, customer sales histories, customer contact information, customer preferences, customer characteristics, customer loyalty programs, and so forth. For a retailer, customer purchases are the most important avenue of revenue. To make things more challenging, today's customers are more educated, more skeptical, and more demanding than before. With the advent of the Internet, price transparency has become a major threat to retailers. Thus, offering a personalized service to customers is crucial. Having a good CRM module which tracks customer behavior and preferences will ensure healthy relationships. For more information about CRM, see Comparing On Demand Customer Relationship Management Service Alternatives.
6. Reports and inquiries: Store employees use this component daily to extract information on inventory, sales summaries, or commissions (if applicable). Reports and inquiries enable organizations to analyze the performance of the store by day, by week, by month, or even by year. They also show the performance of items on numerous levels (such as color, dimension, size, characteristics, or attributes). Reports and inquiries also allow store managers to identify anomalies and to take corrective action if necessary, and they are widely used to obtain loss prevention information.
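The sketch below, in Python, is a purely illustrative toy version of the transaction management and price management components described above. It is not drawn from any particular POS product; the catalog, tax rate, reason code, and employee number are assumptions for the example.

# Toy transaction flow: item lookup, a tracked price change with a reason
# code, and automatic tax and total calculation. All data below is invented.
CATALOG = {"0012345": ("Blue sweater", 49.99), "0067890": ("Leather belt", 19.99)}
TAX_RATE = 0.08
PRICE_CHANGE_LOG = []  # audit trail, as the price management component requires

def apply_discount(sku, price, percent, reason_code, employee):
    """Lower the retail price and record who changed it and why."""
    new_price = round(price * (1 - percent / 100), 2)
    PRICE_CHANGE_LOG.append((sku, price, new_price, reason_code, employee))
    return new_price

def ring_sale(scanned_items):
    """scanned_items: list of (sku, discount_percent) tuples; returns the total."""
    subtotal = 0.0
    for sku, discount in scanned_items:
        name, price = CATALOG[sku]                    # validate item information
        if discount:
            price = apply_discount(sku, price, discount, reason_code="DMG", employee="1042")
        print(f"{name:15s} {price:8.2f}")
        subtotal += price
    total = round(subtotal * (1 + TAX_RATE), 2)       # automatic total calculation
    print(f"{'TOTAL':15s} {total:8.2f}")
    return total

ring_sale([("0012345", 0), ("0067890", 10)])          # 10% markdown on a damaged belt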
Non-core Areas of POS Systems
Now that we have determined the components of a POS system that are essential regardless of the type and size of the retailer, let's continue by exploring the available features that are not essential to every system.
1. Purchase orders: The purchase order feature enables buyers to communicate a purchase to vendors, and to receive the goods ordered. A merchandise management system (MMS) or stand-alone POS system, however, requires the ability to order and receive purchase orders (POs). POS systems which are integrated with a retail merchandising system only need the capability to process a receipt. The purchase order module from an MMS offers more functionality, such as different types of POs, automatic creation of POs, or the ability to add vendor discounts at an item level. On the other hand, the PO component from a POS system will allow simple ordering and receiving functionalities.
2. Price changes: The price change feature is used to manage the retail (selling) price of goods. This feature can offer tools for lowering or raising the retail price. A POS price change component allows permanent or temporary markdowns and markups. The price change module included in a retail merchandising system, on the other hand, offers multipricing capabilities, markdown and markup cancellations, or price changes at the location, department class, and vendor levels. Due to increased awareness among customers, prices on products must be equitable; they cannot be higher than the competitor's, but they also cannot be lower than cost. Moreover, to lessen losses, markdowns allow retailers to liquidate discontinued or out-of-fashion products.
3. Financials: The financials component is not considered a core element of POS systems. However, all vendors must at least have the means to communicate with a third-party financial system. This component includes general ledger, fixed assets, cost accounting, cash management, budgeting, accounts payable, reporting, and other bookkeeping requirements. For more information about financials, see Customer Choices for Achieving Growth.
In addition to the non-core components mentioned above, other features such as replenishment and e-commerce capabilities can be offered in certain POS systems, but are usually found in a merchandising solution. Note once again that when replenishment is offered in a POS system, the capabilities are not as extensive as when the same module is found in a retail system. Moreover, other components such as planning and forecasting, allocation and distribution, open-to-buy, and stock optimization can be included within a retail system. These are all features that ease merchandise process analysis, increase return on investment (ROI), and increase employee productivity. For more information about merchandising systems, see Retail Systems: A Primer.
POS Hardware
As mentioned earlier, a POS system is composed of software and hardware components. There are two types of POS systems available on the market: electronic cash registers (ECRs) and computer-based POS systems. An ECR will only have the capability to accumulate total sales transaction amounts, whereas a computer-based POS system allows more extensive features, due to its software. The devices in a computer-based POS hardware system typically include a monitor, a cash drawer, a keyboard, a mouse, a receipt printer, and sometimes a barcode scanner. Compared to the cash register, a computer-based POS system allows retailers to run more extensive sales analyses, track "hot items," or track customer preferences, all with only a few clicks.
In addition to the typical computer-based POS system, other hardware components are available, such as magnetic stripe readers, conveyor belts, personal shopping assistant (PSA) devices, pole displays, and in-counter scanners or weight scales. These optional devices reduce the time spent serving a customer. For example, a pole display informs customers of the total amount, encouraging them to have payment ready quickly. Moreover, recent technologies also include devices that use biometric identification; in the near future, customers will be able to pay for purchases with literally one touch. All these hardware devices are tools used to increase customer satisfaction and to ensure customer loyalty.
Salary.com Wins Talent Management Shootout at HR Technology Conference
This year, I had the honor of attending the 12th Annual Human Resources (HR) Technology Conference held at McCormick Place in Chicago, Illinois (US). While many of the events at the three-day conference piqued my interest, none did so more than the 2nd Annual Talent Management Shootout. This shootout reminded me of TEC’s very own shootouts and showdowns, which we conduct several times throughout the year. While our shootouts are a little less “extravagant” (in the sense that we don’t have the players live on stage), we still find them to be highly effective in allowing our readers to make better-informed decisions about the software they choose.
The shootout—one of the Conference’s signature events—took place in the Grand Ballroom and was witnessed by approximately 600 attendees, who had the opportunity to vote on the vendors’ performances. Interestingly, employees of the participating vendors who entered the ballroom were corralled into a separate seating area at the front of the stage, where they were prohibited from voting.
Participants in the 2nd Annual Talent Management Shootout
At this year’s Talent Management Shootout, two enterprise resource planning (ERP) software vendors—Lawson and SAP—went up against two talent management suite vendors—Plateau and Salary.com.
Each vendor was given three scripted scenarios of problems—challenges that many HR managers face today. The demo scripts were authored by HR Technology Conference co-chair Bill Kutik, and co-authored by Leighanne Levensaler, director of talent management research for Bersin & Associates.
This particular event was organized and hosted by Bill Kutik. Prior to the event, an e-mail was sent to each of approximately 30 vendors, giving them the opportunity to participate if they chose to do so. While eight vendors were ultimately eligible to be candidates, the final four shootout contestants were chosen at random.
Some of the areas that were covered in the demonstrations were:
• employee profile
• competency management
• employee development
• goal management
• career planning
• performance management
• compensation planning
• succession planning
Vendor Overview
Lawson
Shootout scenarios led by Larry Dunivan, senior vice president (SVP) Global HCM Products, Lawson
Lawson Human Capital Management (HCM) helps HR contribute to organizational excellence with applications that support business operations. By automating administrative processes, the solution helps HR increase efficiency, allowing focus to be placed on more critical initiatives. Lawson offers a stand-alone HCM suite or an integrated ERP system, so HR organizations can align people and processes, all with a low total cost of ownership (TCO).
Plateau
Shootout scenarios led by Paul Sparta, chief executive officer (CEO), Plateau
Plateau Talent Management helps HR leaders develop, manage, reward, and optimize organizational talent. Its Talent Management Suite includes learning, performance, compensation, and career and succession planning modules which can be deployed independently or together as an integrated talent management solution.
Salary.com
Shootout scenarios led by Kent Plunkett, CEO, Salary.com
Salary.com is a provider of on-demand talent management, compensation, and payroll solutions. Its software applications, proprietary data, and consulting services help HR and compensation professionals automate, streamline, and optimize critical core HR processes such as payroll, benefits, and HR administration, as well as talent management processes such as learning and development, compensation planning, performance management, competency management, and succession planning.
SAP
Shootout scenarios led by David Ludlow, vice president (VP) Suite Solution Management, SAP
The SAP ERP Human Capital Management solution is an integrated human resources management solution that automates all core processes, such as employee administration, including talent management, workforce process management and deployment, and legal reporting.
And the Winner Is…
Niche player Salary.com was the clear winner in all three scenarios—winning out over tier-one vendors Lawson and SAP, as well as fellow niche player Plateau.
Final Note
Industry experts agree that Salary.com will be the vendor to watch out for over the next few years. When asked which vendors had the most complete and integrated offering, Naomi Lee Bloom, managing partner at Bloom & Wallace, mentioned that along with Softscape and Success Factors, Salary.com was a good stand-alone solution for managing talent. From my point of view as a TEC research analyst, based on the demonstration of the product during the shootout, Salary.com’s HR solution was very user-friendly and highly configurable to the organization’s needs, and made it very easy for HR administrators to align employee objectives with the company’s goals.
SAP's New Level of e-Commerce: mySAP.com
On the buy side: The mySAP.com buying solution, using SAP's Business-to-Business Procurement component, supports multiparty transactions directly or via the mySAP.com Marketplace. SAP Business-to-Business Procurement also supports multiple back-office systems - SAP and non-SAP - as well as catalog content management services.
The buying solution will include e-business products that allow real-time integration with legacy systems and non-SAP enterprise resource planning (ERP) applications. SAP has also partnered with Requisite Technology to provide a catalog-finding engine and related content management services. This, coupled with an open catalog interface (OCI), will provide customers with access to third-party catalogs. The buying solution is also linked via the mySAP.com Marketplace, an online trading community with a business directory of more than 2,500 companies.
On the sell side: The mySAP.com selling solution is designed to support multiple sales channels, including selling to consumers, business partners, and resellers. Sellers are linked to customers using sell-side solutions based on mySAP.com, and to other vendors leveraging the XML-based SAP Business Connector. This connection enables buyers and sellers to transmit orders, invoices, and other documents through their personalized mySAP.com Workplace portals.
The design of mySAP.com is based on the Internet Business Framework. That means the mySAP.com buying and selling solutions are web-enabled, allowing buyers, sellers, customers and business partners to collaborate in real time.
Market Impact
SAP's announcement represents a crafted approach to the Internet market. By aligning with service providers, establishing an implementation plan, and developing a rich feature set, SAP stands to distance itself from vendors like PeopleSoft or Baan, both of whom have yet to publish an information-rich strategy document.
Recent announcements detail SAP's initiatives with service providers (see TEC News Analysis article: "The First Step in mySAP.com," January 7, 2000), data and hardware solutions (see TEC News Analysis article: "Oracle gets SAPed by IBM," December 8, 1999), and partnerships to enhance the development of web-based solutions.
Thus, the small to midsize ERP market is exposed to a competitive, web-based solution threaded together by SAP. SAP is clearly on the move to capture market share in the burgeoning business-to-business industry. We expect further customer partnership and technology announcements within the next 4 to 6 months.
Additionally, companies such as Ariba, Concur, and Commerce One continue to shape the market with unique partnerships and solution strategies. Between the pursuit by ERP companies and by the digital marketplace vendors, the goal of highly efficient, web-integrated solutions still lies ahead. Both sides have considerable resources to offer and much to gain. Ariba and Concur offer web-based HR and procurement functionality but lack the resources, installed base, and robust back-end integration of the major ERP products, while the major ERP players have a large installed base but nascent, as yet untested web products. As a result, we expect significant advances in end-to-end, web-integrated ERP solutions within the next 9 to 12 months.
Is SCT And Logistics.com Partnership A Déjà vu?
The partnership combines SCT's iProcess.sct collaborative planning, network optimization, supply chain planning and execution (SCP&E) capabilities with Logistics.com's OptiManage transportation management and execution solution. The companies believe the integrated offering will enable significant manufacturing and logistics cost reductions and improved customer service for process manufacturers and distributors. Logistics.com's OptiManage will be used in the integrated solution to handle load consolidation, optimal carrier selection, routing optimization, tendering and real-time tracking and tracing of road-based shipments in North America. SCT's iProcess.sct solution will provide all other required supply chain planning, execution and Relationship Network Management.
As the announcement came not long after a very similar alliance (with almost identical PR rhetoric) with another prominent transportation procurement provider, G-Log (see SCT and G-Log Form Alliance For Collaborative Logistics in the Process Industries), many have wondered whether partnerships with two vendors specializing in logistics were indeed necessary.
Market Impact
At first sight, this is a win-win situation for two companies that have been ebullient lately in their respective complementary strongholds (see SCT Corporation Means (e)Business For Process Manufacturing and Logistics.com Might Prove An Internet Success Story After All). The growth of industry specific, vertical solutions continues with concurrent internal development, acquisitions and partnerships, and the notion of an "end-to-end" solution continues to evolve. When it comes to transportation procurement and execution, most ERP vendors, even those with strong native transportation planning capabilities (e.g., J.D. Edwards), have had to turn to the partnership option.
With this partnership, the process industries should have a broadened definition of what is achievable, as any software vendor that strives to offer its clients an end-to-end supply chain management (SCM) solution should also provide logistics capabilities, especially considering the payback potential of these solutions. The partnership in question should enhance the value proposition of SCT's product suite, while providing Logistics.com with an opportunity to gain additional traction in the process industries.
Within the process manufacturing and distribution segment, the alliance seems to be a good fit, as both companies have sound customer lists, particularly within the food and retail segments. For example, Logistics.com touts names like Kraft, Colgate, Georgia Pacific, PPG, and Schreiber Cheese, while SCT has Cargill, SmithKline Beecham, Godiva Chocolate, Akzo Nobel Organon, and Smithfields.
SCT continues to execute well in the operational level-centric applications, unlike most of the other process ERP wannabes who are still selling generic 'white collar' applications (e.g., HR, financial accounting, procurement) into the process industries.
Logistics.com, on the other hand, needs to tie its execution modules into the plant-level applications in order to give a customer a full solution. In process industries, the control of material extends further into the supply chain (owing for example to recalls, product shelf life and/or aging repercussions, etc.). Therefore, tying these two products together should be beneficial, as Logistics.com cannot provide the execution without a backbone system that does its part of the business, while SCT cannot deliver the full job without the transportation part.
As for any conflict between SCT's G-Log and Logistics.com alliances, there is less to it than meets the eye. While G-Log and Logistics.com can be seen as competitors in a very broad sense, they specialize in different things. G-Log's mastery is in global and multimodal transportation (truck, rail, air, and ocean, and any combination of these), while Logistics.com excels at North America-only, truck-only transportation. Logistics.com also has a strong offering on the transportation e-procurement side (with its OptiBid product), while G-Log does not really target that market. Furthermore, the partnerships also seem to be complementary market-wise, considering G-Log's experience in the chemicals industry, with customers like Dupont and ShipChem, whereas Logistics.com has stronger penetration in the food market segment.
User Recommendations
The key point for process manufacturing and distribution companies is that this partnership addresses the major bases of operational efficiency, cost, and customer service. The combination allows these companies to address the extended supply chain with the added bonus of process-industry-specific solutions. Process industry companies should consider the combined SCT-Logistics.com solution if they are looking for a supply chain planning solution that includes a transportation planning component. Also, installed-base clients of either Logistics.com or SCT should consider the possible cross-selling of the other company's products, bearing in mind the availability of standard interfaces between the products. The key tenets of success are tight integration and a single point of contact, which points to having an immaculate channel with expertise in both product lines.
Existing SCT iProcess.sct customers should evaluate the Logistics.com applications as a way both to add value to their existing iProcess.sct applications and to resolve their logistics requirements. Existing process industry Logistics.com customers looking for added functionality in the related areas of SCM, e-business, CRM, or ERP should evaluate SCT regardless of their incumbent vendor relationships. Process companies considering new solutions in the supply chain, e-business, or ERP areas should place SCT on their short list. These companies should consider the added functionality from this partnership as an addition to their requirements list.
BUY.COM Called "911" For Help
Market Impact
Recently AT&T announced development plans for a network architecture to support Application Service Providers (See our article: "AT&T's Ecosphere"). To develop the "Ecosphere" AT&T has partnered with industry giants such as Sun Microsystems, Cisco Systems, and Hewlett-Packard. The move is indicative of an emerging trend: Companies partnering with "like" technology to create market synergy.
The big players are not the only ones coming together. On January 27, 2000 Breakaway Technology announced its purchase of Eggrock Partners for $250 million in stock. The deal unites a technology systems integrator with an Application Service Provider model. Service911.com and BUY.COM's relationship is another example of how companies are using ASP models to support business needs.
In August of 1999, IDC reported that spending in the ASP space could reach $2 billion (USD) by 2003. In October 1999, Dataquest predicted the entire ASP market would reach $22.7 billion (USD) by 2003. While groups attempt to define the size and evolution of the market, one message is clear: the application service provider market is growing. As a result, we expect to see increased alliances and partnerships as companies discover cost-effective means to support their businesses.
User Recommendations
ASPs:
The Service911.com announcement represents another positive endorsement for the ASP model. Similar solution providers could use the release as an additional "arrow in the quiver." By leveraging the Service911.com/BUY.COM deal as a success story, ASPs might show potential clients how remotely hosted applications can benefit their businesses.
Companies Considering ASP solutions:
If you are considering outsourcing services or applications via the Internet, shop around. It seems there are ASP solutions for everything. From e-mail to human resources (HR) to complete enterprise resource planning (ERP) solutions, there are ASPs willing to help.
Avoid the hype by asking critical questions. Evaluate how the ASP will benefit your organization. Investigate the depth of application and hosting knowledge they report. Obtain the number of "in-house" representatives available to you. Be sure to clarify how and when these "expert" resources are available to you and your customers. Also, keep in mind current bandwidth issues in relation to your future traffic estimates.
If the Application Service Provider model suits your needs, use the information it provides to compare and identify unique offerings. The result may be a strong ASP partnership that adds synergy to your business.
Tuesday, April 6, 2010
Other Planning and Database Issues RPF2
D. Synchronization of Usage
GIS datasets employed in government or by utilities will have many users. One portion of the dataset may be in demand simultaneously by several users as well as by staff charged with updating and adding new information. Making sure that all users have access to current data whenever they need it can be a difficult challenge for GIS design. Uncontrolled usage may be confusing to all users, but the greatest danger is that users may actually find themselves interfering with the project workflow or even undoing one another's work.
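One common way to keep simultaneous users from undoing one another's work is a check-out/check-in discipline on the features being edited. The Python sketch below is only a naive illustration with hypothetical feature IDs and user names; production GIS databases typically rely on versioning or long transactions rather than a simple lock table.

```python
# Illustrative sketch only: a naive check-out/check-in lock with hypothetical feature IDs.
locks = {}  # feature_id -> user currently holding the edit lock

def check_out(feature_id, user):
    holder = locks.get(feature_id)
    if holder and holder != user:
        raise RuntimeError(f"{feature_id} is being edited by {holder}; read-only access only")
    locks[feature_id] = user
    return f"{user} may edit {feature_id}"

def check_in(feature_id, user):
    if locks.get(feature_id) != user:
        raise RuntimeError(f"{user} does not hold the lock on {feature_id}")
    del locks[feature_id]
    return f"{feature_id} released"

print(check_out("parcel-1042", "editor_a"))
try:
    check_out("parcel-1042", "editor_b")  # blocked until editor_a checks the parcel back in
except RuntimeError as error:
    print(error)
print(check_in("parcel-1042", "editor_a"))
```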
E. Update Responsibility
Some GIS datasets will never be "complete." Cities and utility territories keep growing and changing and the database must be constantly updated to reflect these changes. But these changes occur on varying schedules and at varying speeds. Procedures must be developed to record, check, and enter these changes in the GIS database. Furthermore, it may be important to maintain a record of the original data. In large GIS projects, updating the database may be the responsibility of a full-time staff.
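The record-check-enter procedure can be pictured as an append-only change log that preserves the original data alongside each update. The following Python sketch is illustrative only; the feature IDs, fields, and reviewer names are hypothetical.

```python
# Illustrative sketch only: feature IDs, fields, and reviewer names are hypothetical.
from datetime import date

current = {"valve-88": {"status": "open", "last_inspected": "2008-06-02"}}  # latest view
change_log = []  # every change is appended here, so the original data is never lost

def update_feature(feature_id, field, new_value, checked_by):
    """Record, check, and enter a change while preserving the prior value."""
    old_value = current[feature_id].get(field)
    change_log.append({
        "feature": feature_id,
        "field": field,
        "old": old_value,
        "new": new_value,
        "date": date.today().isoformat(),
        "checked_by": checked_by,
    })
    current[feature_id][field] = new_value

update_feature("valve-88", "status", "closed", checked_by="inspector_12")
print(current["valve-88"])  # up-to-date attributes
print(change_log[-1])       # what changed, when, and who verified it
```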
F. Minimization of Redundancy
In large GIS projects, every byte counts. If a database is maintained for 30 to 50 years, every blank field and every duplicated byte of information will incur storage costs for the full length of the project. Not only will wasted storage space waste money, it will also slow performance. This is why, in large, long-term GIS projects, great attention is devoted to packing data as economically as possible and reducing duplication of information.
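A small example makes the point. In the Python sketch below (with hypothetical pipe attributes), repeating the same descriptive strings on every feature wastes space, while a lookup table stores each description once and lets features carry only a compact code.

```python
# Illustrative sketch only: the pipe attributes are hypothetical.
# Redundant layout: every record repeats the same descriptive strings.
redundant = [
    {"pipe_id": 1, "material": "ductile iron", "pressure_class": "350 psi"},
    {"pipe_id": 2, "material": "ductile iron", "pressure_class": "350 psi"},
    {"pipe_id": 3, "material": "PVC",          "pressure_class": "200 psi"},
]

# Normalized layout: features carry a short code; descriptions are stored once in a lookup table.
material_lookup = {1: ("ductile iron", "350 psi"), 2: ("PVC", "200 psi")}
normalized = [
    {"pipe_id": 1, "material_code": 1},
    {"pipe_id": 2, "material_code": 1},
    {"pipe_id": 3, "material_code": 2},
]

def describe(record):
    """Reassemble the full description from the compact, non-redundant storage."""
    material, pressure = material_lookup[record["material_code"]]
    return {"pipe_id": record["pipe_id"], "material": material, "pressure_class": pressure}

print(describe(normalized[0]))  # the same information, stored once instead of per feature
```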
G. Data Independence and Upgrade Paths
A GIS database will almost always outlive the hardware and software that are used to create it. Computer hardware has a usable life of 2 to 5 years, and software is sometimes upgraded several times a year. If a GIS database is totally dependent on a single hardware platform or a single software system, it too will have to be upgraded just as often. Therefore, it is best to create a database that is as independent as possible of hardware and software. Through careful planning and design, data can be transferred as ASCII files or in some metadata or exchange format from system to system. There is nothing worse than having data held in a proprietary vendor-supported format and then finding that the vendor has changed or abandoned that format.
For this reason, GIS designers should think ahead to possible upgrade paths for their database. It is notoriously difficult to predict what will happen next in the world of computers and information technology. To minimize possible problems, thought should be given to making the GIS database as independent as possible of the underlying software and hardware.
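As a simple illustration of such an exchange-format export, the Python sketch below writes features to a plain ASCII (CSV) file that any future system can read; the field names and coordinates are hypothetical.

```python
# Illustrative sketch only: the field names and coordinates are hypothetical.
import csv

features = [
    {"id": "hydrant-17", "x": -93.2650, "y": 44.9778, "install_year": 1998},
    {"id": "hydrant-18", "x": -93.2612, "y": 44.9791, "install_year": 2004},
]

# Write the features to a plain ASCII (CSV) file, independent of any GIS package.
with open("hydrants_export.csv", "w", newline="") as outfile:
    writer = csv.DictWriter(outfile, fieldnames=["id", "x", "y", "install_year"])
    writer.writeheader()
    writer.writerows(features)

# Later, any system that can read a text file can recover the data.
with open("hydrants_export.csv", newline="") as infile:
    for row in csv.DictReader(infile):
        print(row["id"], row["x"], row["y"], row["install_year"])
```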
Other Planning and Database Issues on RPF1
The project planning cycle outlines a process, but the issues that must be addressed at each stage of this process will vary considerably from organization to organization. Some topics are of critical importance to large municipal, state, and private AM/FM applications, but less so for research applications of limited scope. Among the issues that must be addressed in large GIS projects are:
A. Security
The security of data is always a concern in large GIS projects. But there is more to security than protecting data from malicious tampering or theft. Security also means that data is protected from system crashes, major catastrophes, and inappropriate uses. As a result, security must be considered at many levels and must anticipate many potential problems. GIS data maintained by government agencies often presents difficult challenges for security. While some sorts of data must be made publicly accessible under open records laws, other types are protected from scrutiny. If both types are maintained within a single system, managing appropriate access can be difficult. Distribution of data across open networks is always a matter of concern.
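At its simplest, managing mixed public and protected data comes down to a per-layer access policy. The Python sketch below is purely illustrative; the layer names and roles are hypothetical and do not reflect any particular agency's open-records or security rules.

```python
# Illustrative sketch only: layer names and roles are hypothetical.
LAYER_POLICY = {
    "parcels":        {"public"},         # open-records data, anyone may read it
    "street_grid":    {"public"},
    "water_mains":    {"utility_staff"},  # infrastructure detail, restricted
    "police_cameras": {"public_safety"},  # protected from public scrutiny
}

def can_read(layer, role):
    allowed = LAYER_POLICY.get(layer, set())
    return "public" in allowed or role in allowed

print(can_read("parcels", role="citizen"))            # True
print(can_read("water_mains", role="citizen"))         # False
print(can_read("water_mains", role="utility_staff"))   # True
```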
B. Documentation
Most major GIS datasets will outlive the people who create them. Unless all the steps involved in coding and creating a dataset are documented, this information will be lost as staff retire or move to new positions. Documentation must begin at the very start of a GIS project and continue through its life. It is best, perhaps, to actually assign permanent staff to documentation to make sure that the necessary information is saved and revised in a timely fashion.
C. Data Integrity and Accuracy
When mistakes are discovered in a GIS database, there must be a well-defined procedure for their correction (and for documenting these corrections). Furthermore, although many users may have to use the information stored in a GIS database, not all of these users should be permitted to make changes. Maintaining the integrity of the different layers of data in a comprehensive GIS database can be a challenging task. A city's water utility may need to look at GIS data about rights-of-way for power and cable utilities, but it should not be allowed to change this data. Responsibility for changing and correcting data in the different layers must be clearly demarcated among different agencies and offices.
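The demarcation of update responsibility can be expressed as an ownership table: many parties may read a layer, but only the responsible office may change it. The following Python sketch is illustrative only, with hypothetical agencies and layers.

```python
# Illustrative sketch only: the agencies and layers are hypothetical.
EDIT_OWNER = {
    "water_mains": "water_utility",
    "power_rows":  "power_utility",   # power right-of-way layer
    "cable_rows":  "cable_utility",   # cable right-of-way layer
}

def apply_edit(layer, agency, change):
    """Many agencies may view a layer, but only the responsible office may change it."""
    if EDIT_OWNER.get(layer) != agency:
        raise PermissionError(f"{agency} may view {layer} but not modify it")
    return f"{agency} applied change to {layer}: {change}"

print(apply_edit("water_mains", "water_utility", "replace valve V-204"))
try:
    apply_edit("power_rows", "water_utility", "move easement boundary")
except PermissionError as error:
    print(error)
```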
Applying the Insights of Project Lifecycle to Research Projects
The concepts of lifecycle planning can be applied to projects of lesser scale and scope, particularly to those pursued in undergraduate and graduate research. This does not mean that every project will move through every step outlined above. Some steps, such as benchmarking and system selection, may be irrelevant in a setting where the researcher must make do with whatever equipment and software is on hand. But lifecycle planning should not be viewed as a series of boxes on a checklist; it is a process of careful planning and problem solving. It is this process of careful planning that should be emulated regardless of the scope or scale of a project.
This point is not always understood. Some researchers reject the methodology of project planning because it seems overly formal and stringent given their modest research goals. Instead, they improvise a GIS solution. But improvised solutions are always a risk. Attention to the process of careful planning can mitigate such risks. Perhaps the essence of this process can be summarized in three points.
1. Think ahead to how the GIS will be used, but keep in mind what sources are available.
Designing an effective GIS involves setting clear goals. The temptation is to rush ahead and begin digitizing and converting data without establishing how the system will be used. Even for small GIS projects, it is wise to engage in a modest functional requirements study. This allows the user to gain an idea of exactly what data sources are required, how they will be processed, and what final products are expected. Without clear-cut goals, there is too great a danger that a project will omit key features or include some that are irrelevant to the final use.
2. Exert special care in designing and creating the database.
Again, it is easy to rush ahead with the creation of a database, and then find later that it has to be reorganized or altered extensively. It is far more economical to get things right the first time. This means that the researcher should chart out exactly how the database is to be organized and to what levels of accuracy and precision. Attention to (and testing of) symbolization and generalization will also pay off handsomely.
3. Always develop a prototype or sample database to test the key features of the system.
No matter the size of a project, the researcher should aim to create a prototype before moving toward full implementation of a GIS. This allows the researcher to move through all of the steps of creating and using the system to see that all procedures and algorithms work as expected. The prototype can cover a small area or may be confined to one or two of the most critical layers. In either case, testing a prototype is one step that should not be overlooked.
Planning Schedules and the Scope of Prototype and Pilot Projects
There is nothing wrong with being cautious during the process of project planning. Rushing through the procedure exposes an organization to potentially costly mistakes. Large AM/FM (automated mapping/facilities management) projects typically take many years to reach the prototype or pilot stages.
Once a prototype or pilot has been approved, even more time will elapse before full implementation is achieved. Some municipal GIS projects have been underway for over a decade and still have far to go before complete implementation and compilation of a full dataset.
Prototype and pilot projects are kept small. Remember, prototypes and pilots are intended to demonstrate functions and interfaces. What works best is a carefully selected test area that presents examples of common workflows. Its areal size is of little consequence in most applications.
System Selection as a Compromise, Step by Step
A. Some applications, such as emergency vehicle dispatch (911 systems), demand speed above all else. Lives are at stake, and the system must be able to match telephone numbers to addresses and dispatch vehicles instantly. At the same time, an emergency dispatch system serves only this single function, and its database contains little more than a street grid, address ranges, and links to telephone numbers. (A minimal sketch of the address-range lookup appears after this list.)
B. Some applications, such as those undertaken by water, gas, and power utilities, involve storing vast quantities of information about huge service territories. Some utilities serve hundreds or thousands of square miles of territory. Detailed information must be maintained about all facilities within these territories. Managing these quantities of information is a key to selecting the right GIS system. At the same time, speed of response may be less of a concern since a given piece of information may only have to be accessed once a month or even once a year. Furthermore, functional richness may be useful, but many tasks (such as maintenance and planning) will require a limited range of analytical capabilities.
C. Some applications, such as those related to urban planning and environmental management, may benefit most from great functional richness. Planning and management tasks may be many and varied, meaning that users must have access to a wide range of spatial and statistical functions. These may not be used often but, when used, may be essential to the success of a project.
D. Some GIS may be used frequently by users with little training or in situations where there will be high staff turnover. This is a critical consideration for GIS that are used as part of management or executive information systems. Upper-level managers who can benefit greatly from the information provided by a GIS may have limited time (or inclination) for training. It is important in these situations to consider the time it takes to bring new users up to speed with a new system.
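Scenario A turns on one very fast operation: matching a caller's address to the street segment whose address range contains it. The sketch below illustrates that range lookup in a minimal form; the segment records and field names are hypothetical, and a production dispatch system would add spatial indexing, odd/even parity handling, and fallbacks for unmatched addresses.

```python
# Hypothetical street segments: (street name, low address, high address, segment id)
segments = [
    ("MAIN ST", 100, 198, "seg-014"),
    ("MAIN ST", 200, 298, "seg-015"),
    ("OAK AVE", 1, 99, "seg-101"),
]

def locate(street: str, number: int):
    """Return the id of the street segment whose address range contains the caller."""
    for name, low, high, seg_id in segments:
        if name == street and low <= number <= high:
            return seg_id
    return None  # not matched; a real system would fall back to operator entry

print(locate("MAIN ST", 127))  # -> seg-014
```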
Of course, these are only a few of the factors and scenarios that arise in GIS system selection. Compromises may have to be achieved with other system features.
Too often, users imagine that they can find the "perfect" or "best" GIS. The best GIS is always the one that gets a job done at the right price and on schedule.
System Selection as a Compromise

In selecting a software and hardware combination, users are often faced with a number of compromises. For a given price, a system cannot be expected to do everything. A thoughtful choice is required in order to select the system that will best meet the principal aims of a given project. The list below shows four of the many characteristics of a system that users might attempt to balance (an illustrative scoring sketch follows the list). The compromises involve:
Speed: The speed with which a system can respond to queries and achieve solutions.
Functional richness: The analytical capabilities of the system and its flexibility in addressing a wide range of spatial and statistical problems.
Database size: The ability to handle large quantities of spatial and statistical data.
Training: The amount of time required to bring users up to speed on a system and to use the database on a regular basis.
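One way to make such a compromise explicit is to weight the four characteristics by how heavily the project depends on them and to score each candidate system against those weights. Everything in the sketch below -- the weights, the system names, and the 1-5 ratings -- is hypothetical; the point is only that the trade-off can be written down and compared rather than argued in the abstract.

```python
# Project weights: how much this particular project depends on each characteristic
# (hypothetical values; in practice they come out of the functional requirements study)
weights = {"speed": 0.4, "functional_richness": 0.1, "database_size": 0.3, "training": 0.2}

# Hypothetical 1-5 ratings for two candidate systems, e.g. from vendor demos or benchmarks
candidates = {
    "System A": {"speed": 5, "functional_richness": 2, "database_size": 3, "training": 4},
    "System B": {"speed": 3, "functional_richness": 5, "database_size": 4, "training": 2},
}

def weighted_score(ratings: dict) -> float:
    """Combine one system's ratings using the project's weights."""
    return sum(weights[c] * ratings[c] for c in weights)

for name, ratings in sorted(candidates.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(ratings):.2f}")
```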
System Selection and Benchmarking
Every system has plusses and minuses, and marketing literature generally plays up the plusses and plays down the minuses.
Benchmarking is a process which minimizes the risks associated with system selection by testing each system's exact capabilities. A test dataset is run on each system under consideration to determine how well it handles the functional requirements of the project.
The same series of tests should be run on each system and should be designed to test specific capabilities along with general user-friendliness and ease of use (a minimal harness sketch follows the checklist below).
Benchmarking is the time to determine the flexibility of each system. For example:
+ Can changes be made to the database structure after the initial setup and, if so, how difficult are such changes?
+ Can user-defined functions be added to the system?
+ Can custom applications be created?
+ Is there a programmer's interface for the development of such applications?
+ Does the system have adequate security features built in?
+ What are the networking options?
+ Are response times significantly different during periods of high and low loading?
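The core of a benchmark is that the same scripted tests are run, on the same test dataset, against every candidate system, and the timings and results are recorded side by side. The harness below is a minimal, hypothetical sketch: the lambda functions merely stand in for whatever scripted workflows (overlay queries, buffer operations, plots) the project actually needs to exercise on each vendor's system.

```python
import time

def benchmark(candidates: dict, tests: dict) -> dict:
    """Run every named test against every candidate system and record elapsed times.

    candidates maps a system name to a callable that executes one test on that system;
    tests maps a test name to the arguments describing that test. Both are placeholders
    for the real scripted workflows agreed on for the benchmark.
    """
    results = {}
    for system, run_test in candidates.items():
        results[system] = {}
        for test_name, args in tests.items():
            start = time.perf_counter()
            run_test(args)  # same test, same dataset, on each system
            results[system][test_name] = time.perf_counter() - start
    return results

# Hypothetical stand-ins so the harness can be run as-is
candidates = {
    "System A": lambda args: sum(range(args["n"])),
    "System B": lambda args: sorted(range(args["n"])),
}
tests = {"overlay_query": {"n": 100_000}, "buffer_query": {"n": 50_000}}

for system, timings in benchmark(candidates, tests).items():
    print(system, {name: f"{secs:.4f}s" for name, secs in timings.items()})
```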
Risk Analysis
* Possible risks:
  o Hardware or software may not live up to expectations.
  o The cost of implementing the GIS may be higher than that of the current system.
* Set a goal and estimate the cost of the next step.
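The outline above reduces to a stage-gate decision at the end of each step: estimate the cost of the next step, compare it with the funds actually available, and stop rather than drift if they do not cover it. A minimal sketch of that check follows; the contingency factor and the dollar figures are hypothetical.

```python
def proceed(estimated_cost: float, available_funds: float, contingency: float = 0.2) -> bool:
    """Go/no-go check for the next lifecycle step.

    The contingency factor pads the estimate to absorb the risks listed above,
    e.g. hardware or software falling short of expectations.
    """
    return estimated_cost * (1 + contingency) <= available_funds

# Hypothetical figures for the next step (say, the pilot project)
if proceed(estimated_cost=80_000, available_funds=100_000):
    print("Proceed to the next step.")
else:
    print("Pause the project until funds are available.")
```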
System Development and Detail Design
After a specific system has been chosen, each of the following is defined during system development:
* database specifications
* graphics specifications
* report specifications
* interfaces
* calculations
* specialized applications
Key Aspects of the Project Lifecycle
Three aspects of this planning process merit special attention.
1. Setting goals and estimating costs.
Each stage of the project lifecycle process involves setting clear goals for the next step and estimating the cost of reaching those goals. If the necessary funds or time are unavailable, it is better to stop the process than to continue and see the project fail. The process can begin again when funds are available.
2. The functional requirements study.
The functional requirements study (FRS) is arguably the most important single step in the planning process. Here, careful study is devoted to what information is required for a project, how it is to be used, and what final products will be produced by the project. For a large organization, this amounts to a "map" of how information flows into, around, and out of each office and agency. The FRS also specifies how often particular types of information are needed and by whom. Furthermore, the FRS can look into the future to anticipate types of data processing tasks that expand upon or enhance the organization's work.
By assessing information flows so carefully, the FRS allows an organization to set goals for all of the subsequent steps in the lifecycle planning process. It also forces the organization to consider information flows across all the domains of its work, and thus how different systems will be integrated. Without this encompassing view, a project implemented in one unit may be of no use to another, and projects risk being stranded between incompatible systems.
3. The creation of a prototype.
By the time a project has moved into the development stage, the greatest temptation is to jump forward to full implementation. This is a very risky path, for it leaves out the prototyping stage. Prototypes are a critical step because they allow the system to be tested and calibrated to see whether it meets expectations and goals. Making adjustments at the prototype stage is far easier than later, after full implementation. The prototype also allows users to gain a feel for a new system and to estimate how much time (in training and conversion) will be required to move to the pilot and full implementation stages. Finally, a successful prototype can help enlist support and funding for the remaining steps in the lifecycle planning process.
As is noted in the module on Managing Error, the prototype provides a good opportunity for undertaking sensitivity analysis--testing to see how variations in the quality of inputs affect the outputs of the system. These tests are essential for specifying the accuracy, precision, and overall quality of the data that will be created during the conversion process. If these analyses are not performed, there is a chance that much time and effort will be wasted later.
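A sensitivity test of this kind is conceptually simple: perturb the inputs by the amount of error expected from conversion, rerun the analysis, and measure how much the outputs move. The sketch below is purely illustrative -- the coordinates are hypothetical and the "analysis" is just a mean centre -- but the same pattern applies to whatever outputs the prototype actually produces.

```python
import random

# Hypothetical surveyed points (x, y) in metres
points = [(1200.0, 340.0), (1225.5, 310.2), (1198.7, 355.9), (1240.1, 330.0)]

def mean_centre(pts):
    """A stand-in for whatever output the GIS analysis actually produces."""
    xs, ys = zip(*pts)
    return sum(xs) / len(xs), sum(ys) / len(ys)

def perturb(pts, error):
    """Add random positional error (uniform, +/- error metres) to each coordinate."""
    return [(x + random.uniform(-error, error), y + random.uniform(-error, error))
            for x, y in pts]

baseline = mean_centre(points)
for error in (0.5, 2.0, 10.0):  # candidate accuracy specifications, in metres
    shifted = mean_centre(perturb(points, error))
    drift = ((shifted[0] - baseline[0]) ** 2 + (shifted[1] - baseline[1]) ** 2) ** 0.5
    print(f"positional error +/-{error} m -> output moved {drift:.2f} m")
```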
The Value of a Problem-solving Approach
Lifecycle planning is really a process of practical problem solving applied to all aspects of a GIS development project. Particular care is exerted in defining the nature of a problem or new requirement, estimating the costs and feasibility of proceeding, and developing a solution. This process should not be abridged; each step is important to the outcome. If this problem-solving approach is applied to the design and creation of an entire GIS project, a few additional subtasks must be addressed.
The Importance of Project Planning
GIS projects are expensive in terms of both time and money. Municipal GIS and facilities management projects developed by utilities may take a decade or more to bring on-line, at a cost of tens or hundreds of millions of dollars. Careful planning at the outset, as well as during the project, can help to avoid costly mistakes. It also provides assurance that a GIS will accomplish its goals on schedule and within budget.
There is a temptation, when a new technology like GIS becomes available, to improvise a solution--that is, to get started without considering where the project will lead. The greatest danger is that decisions made in haste or on the spur of the moment will have to be reversed later or will prove too costly to implement, meaning a GIS project may have to be abandoned. To avoid disappointing experiences like these, GIS professionals have developed a well-defined planning methodology often referred to as the project lifecycle. Lifecycle planning involves setting goals, defining targets, establishing schedules, and estimating budgets for an entire GIS project.
The original impetus for developing effective lifecycle planning was cost containment. For many decades, the rationale for implementing new information technologies was that, in the long run, such projects would reduce the cost of business operations. In practice, such savings have often failed to materialize.
This does not mean that information technologies have been a failure. Rather, these systems allow users to accomplish a greater range of varied and complex tasks, but at a higher cost. Users are not so much doing their previous work at faster speeds as taking on new tasks offered by the new technologies. Support staff once satisfied with producing in-house documents may now be tempted to issue them with desktop publishing software or on-line on the World Wide Web. Cartographers once satisfied with producing discrete utility maps for individual construction projects may be tempted to create an encompassing map and GIS database containing maintenance records for an entire city.
It is generally recognized that, for the foreseeable future, most information technology projects will have to be justified on the basis of a "do more, pay more" philosophy. This makes effective lifecycle planning all the more important. In the past, projected existing costs could be used as a baseline against which improvements could be measured. If the cost curve for new information technologies is always above the baseline, then greater care must be exerted in setting goals, establishing targets, and estimating budgets. There is far too great a danger that, in the absence of such checks and balances, a project may grow out of control.