What is a backup? A backup is a copy of a database file that can be used if the original is lost, damaged, or otherwise irretrievable.
How is a backup used in a shared environment? A backup has many uses in a shared environment and can be critical to the daily operation and use of the database(s) being hosted. A backup is preserved to ensure the integrity and protection of your data.
How is a backup created? The FileMaker Server Admin Console lets you define backup schedules for a desired database and specify where the backup files are saved. When a backup schedule runs, FileMaker Server pauses the databases and copies them to the defined backup folder.
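Scheduled backups are normally configured in the Admin Console, but FileMaker Server also ships a command-line tool, fmsadmin, whose backup command performs an on-demand backup. The sketch below only builds the argument list for such a call; the -d destination flag, the example path, and the credentials are assumptions to verify against `fmsadmin help backup` for your server version.

```python
# Sketch: assembling an on-demand fmsadmin backup invocation.
# The destination path and credentials below are hypothetical placeholders.

def build_backup_command(databases, dest_folder, username, password):
    """Return the fmsadmin argument list for backing up the given databases."""
    cmd = ["fmsadmin", "backup"]
    cmd += databases                      # one or more hosted database names
    cmd += ["-d", dest_folder]            # override the default backup folder
    cmd += ["-u", username, "-p", password]
    return cmd

# Example (hypothetical database name, path, and credentials):
cmd = build_backup_command(
    ["Contacts.fmp12"],
    "filemac:/Macintosh HD/Library/FileMaker Server/Data/Backups/",
    "admin", "secret")
print(" ".join(cmd))
```

Building the argument list separately from executing it (with, say, subprocess.run) makes the call easy to log and review before it touches live databases.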
Importing and exporting data. There may be a scenario where you want to export the most recent data from the currently hosted database and import it into a backup copy that you are about to revert to. To do this, export the records from the original database by navigating to File > Export Records… and specifying the fields to export. Keep in mind that if the database has multiple tables, a separate export will need to be performed for each table. After the export file has been created, open a local copy of the backup file with FileMaker Pro. Once the backup copy is open, import the records from the original database by navigating to File > Import Records > File… and selecting the export file that was created. Be sure that you are viewing the layout/table that you would like to import those records into. After the records have been imported into the backup, you can revert to this backup and host it with FileMaker Server.
Reverting to a backup. In the event that you need to revert to a backup, you first want to close the original database that you will be replacing with a backup. To do this, click the “Databases” option in the left pane of the Admin Console to view the list of hosted databases. Find and select the database you would like to replace and then from the drop-down menu at the top, select “Close” and then click “Perform Action.” Once you see the status of the database change to “Closed”, click the drop-down menu at the top again and choose “Remove” and click “Perform Action.” This process will not delete the database from the hard drive, but it will place the database file in the following path: /FileMaker Server/Data/Databases/Removed_by_FMS/.
After the problematic copy has been removed from the Admin Console, you can then upload the backup copy from the defined backup folder by navigating to Server > Upload Database. During the upload process, you will have the option to have the file open automatically after upload is complete; it is best to leave this option enabled. After this process has been completed, users will be able to access the backup copy just as they had with the original copy.
Importance of the backup in the modern world. With the advancement and efficiency of modern technology, there is no reason not to make backups. Creating and scheduling backups is easy, and it remains important because unexpected hardware and software failures can still occur.
Data security and integrity. Backups preserve the important data inside a database in case something unexpected occurs and the original file is no longer usable. Since the backup is an exact copy of the original, you can be sure that the integrity of your data is retained with each backup, so long as the data inside is not damaged.
Safety net. To put it simply, backups make for a great safety net. Just in case some accidental or malicious action occurs, there is a backup available to revert to. The backup ensures that no matter what happens to the original, you will have a safely stored copy available elsewhere for use if necessary.
Using the Admin Console. Using the Admin Console, you can schedule backups, define folder paths and control everything related to FileMaker Server’s activities.
Defining a backup folder path. In the left pane of the Admin Console, under the ‘Configuration’ section, you will see a ‘Database Server’ option. Select it, and the right pane will display the Database Server configuration panel. Click the “Default Folders” tab and you will see a “Backup Folder” section below the “Database Folders” section. By default, backups are saved to the “Backups” folder in the FileMaker Server installation path. If you want to change where backups are saved, type a valid path into the “Path:” field, following the syntax example shown just below that field.
Creating a schedule. After the backup folder has been defined, you can schedule backups to be performed. In the left pane of the Admin Console, under the ‘Administration’ section, you will see a ‘Schedules’ option. Select it, and the right pane will display the Schedules configuration panel. From the drop-down menu at the top, select “Create a Schedule…” and click the “Perform Action” button. This will bring up a guide to help you choose what type of task you would like to schedule. Simply follow the on-screen instructions to complete the setup of a backup schedule.
Testing the backup schedule. The easiest way to test the backup schedule is to configure a backup to run within a few minutes of the current system time. You can then watch the process in action and verify FileMaker Server’s activity in the “Log Viewer” section of the Admin Console, found in the left pane.
Why not use third-party backup software? It is not safe practice to rely on third-party software to back up the databases while they are live and in use. Copying open, hosted files can be severely detrimental to the stability of database files and may cause corruption or other irreversible damage.
Saving to an off-site location. An even more secure option for preserving data is saving backups to an off-site location. This is a more advanced method that an administrator can take to ensure the integrity of their databases.
Benefits of storing backups off-site. Security is enhanced as access to these backups can be restricted. Another advantage of saving backups in a remote location is that even in a worst case scenario where the database server crashes, the database files will still be safe and preserved in their original format.
Choosing an off-site location. The decision of selecting a host provider is dependent on the needs of the environment and also, of course, budget. There are many different providers offering this service and it might be best to consult other FileMaker users in the community for feedback and recommendations. The FileMaker forum (http://www.filemaker.com/forum/) is a great place to ask for advice, from FileMaker developers and users, which backup storage host may be best for you.
Saving to physical media. Some users prefer saving backups to tangible media as it feels more secure to have a copy that can physically be held as opposed to hosted preservation.
Benefits of storing backups on physical media. Backup files that are saved to physical media (CD-R, DVD-R, USB drive) allow the user even more control of where that backup is saved. Since the media can be taken anywhere, there is no restriction as to where it can be stored. Similar to saving the backups off-site, an administrator can be sure that the data’s integrity will not be interfered with as the backup will be completely isolated from variables such as hardware failure or tampering.
In addition to these benefits, backing up to tangible media is much more cost-effective than relying on an off-site host.
Choosing a media format to save to. The type of media that you can back up to depends primarily on two things: database size and available hardware. With the large capacities now available on USB drives, it is almost inefficient to back up to a CD-R or DVD-R, but everyone has their preference. USB drives are available with up to 1 TB of storage, whereas a CD-R has a maximum capacity of 700 MB and a DVD-R has a maximum capacity of 4.7 GB. A dual-layer DVD-R allows for up to 8.5 GB of storage, but most computers do not include optical drives capable of writing to dual-layer DVD-R media.
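The capacities above translate directly into how many backup copies each medium can hold. A quick sketch of the arithmetic, using a hypothetical 2 GB database as the example size:

```python
# How many whole copies of a backup fit on each medium, given the
# capacities listed above. The 2 GB database size is hypothetical.

MEDIA_CAPACITY_GB = {
    "CD-R": 0.7,             # 700 MB
    "DVD-R": 4.7,
    "DVD-R dual layer": 8.5,
    "USB drive (1 TB)": 1000.0,
}

def backups_per_medium(db_size_gb):
    """Whole backup copies that fit on each medium; 0 means it won't fit."""
    return {name: int(capacity // db_size_gb)
            for name, capacity in MEDIA_CAPACITY_GB.items()}

print(backups_per_medium(2.0))
```

For a 2 GB database this yields 0 copies on a CD-R (it simply does not fit), 2 on a DVD-R, 4 on a dual-layer DVD-R, and 500 on a 1 TB USB drive, which is why the text calls optical media almost inefficient for larger databases.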
Using a USB drive, you have one physical element that can be used and reused. It never has to be replaced and it is a one-time cost, unless the hardware component fails. This might be the only downside to using a USB drive, in that if the drive fails, all data stored on it could be lost.
Using CD-Rs and DVD-Rs, you may be burning and re-burning multiple copies, but you will not have to worry about the CD-R or DVD-R failing, barring any physical destruction that may happen to it. Since a CD-R or DVD-R is used each time a backup needs to be saved, blank media will have to be re-purchased once the initial supply has diminished.
A server cluster is a group of two or more servers that are configured so that if one server fails, another server can take over application processing. The servers in a cluster are called nodes. Typically, these servers store data on a common disk or disk array.
Clustering software monitors the active nodes in a server cluster. When a node fails, the clustering software manages the transition of the failed server’s workload to the secondary node.
When a clustered Siebel Server fails, all the applications and services on the server stop. Application users must reconnect and log in to the server that takes over. For example, if the Siebel Server that failed was hosting Siebel Communications Server, the communications toolbar is disabled, and users must reconnect and log in to the new server.
Cluster vendors can validate their third-party server cluster products to provide server clustering for deployments of Siebel Business Applications. For validation assistance, contact your Oracle sales representative for Oracle Advanced Customer Services to request assistance from Oracle’s Application Expert Services. For recommendations and help on the use of cluster products with Siebel Business Applications, customers should contact the cluster vendor of their choice.
An active-passive server cluster contains a minimum of two servers. One server actively runs applications and services. The other is idle. If the active server fails, its workload is switched to the idle server, which then takes over application processing.
Because the standby server is idle, active-passive server clusters require additional hardware without providing additional active capacity. The benefit of active-passive clusters is that, after a failover, the same level of hardware resources is available for each application, thereby eliminating any performance impact on users. This benefit is particularly important for performance-critical areas such as the database. The most common use of active-passive clusters is for database servers.
An active-active server cluster contains a minimum of two servers. Both actively run applications and services. Each may host different applications or may host instances of the same application. If one server fails, its processing load is transferred to the other.
Active-active configuration is the most common server clustering strategy for servers other than the database server.
NOTE: Configuring the Siebel Database (database server) and a Siebel Server to failover to each other is supported, but not recommended.
Some Siebel Server components, such as Siebel Connection Broker (SCBroker), Siebel Gateway Name Server, Synchronization Manager (Siebel Remote), and Siebel Handheld synchronization listen on a configurable static port. When these components run in an active-active cluster, you must plan your port usage so there is no port conflict after failover.
For example, suppose an active-active server cluster contains two platforms, each running a Siebel Server. If one platform fails, the other will host two Siebel Servers. Siebel Servers include several services, such as Siebel Connection Broker, that use a dedicated port. If this port number is the same on both platforms, there will be a port conflict after failover.
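The port planning described above can be checked mechanically before deployment: list each node's statically configured component ports and verify that nothing collides if both nodes' servers ever run on one platform after a failover. The component names and port numbers below are illustrative, not Siebel defaults.

```python
# Sketch: detect static-port clashes that would surface only after an
# active-active failover. Ports and component names are invented examples.

def failover_conflicts(node_a_ports, node_b_ports):
    """Return (component_a, component_b, port) triples that would clash
    if the servers from both nodes run on one platform after a failover."""
    conflicts = []
    for comp_a, port_a in node_a_ports.items():
        for comp_b, port_b in node_b_ports.items():
            if port_a == port_b:
                conflicts.append((comp_a, comp_b, port_a))
    return conflicts

node_a = {"SCBroker": 2321, "SyncMgr": 40400}
node_b = {"SCBroker": 2321, "SyncMgr": 40401}   # SCBroker port reused!
print(failover_conflicts(node_a, node_b))
```

Running this during capacity planning catches exactly the SCBroker-style conflict the example describes: both nodes work fine in isolation, and the clash appears only when one platform must host both Siebel Servers.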
Active-active clusters use all the server platforms continuously. Consequently, they take better advantage of computing resources than active-passive clusters. When doing capacity planning, make sure that clustered servers have sufficient capacity to handle a failover. Because failovers are usually infrequent and normally last only a short time, some performance degradation is often acceptable.
Email archiving (also spelled e-mail archiving) is a systematic approach to saving and protecting the data contained in e-mail messages so it can be accessed quickly at a later date. In the past, companies often relied on end-users to maintain their own individual e-mail archives. The IT department would back up e-mail, but not in a manner that made messages searchable. If a specific e-mail needed to be traced, it often took weeks to find it. With today’s compliance legislation and legal discovery rules, it has become necessary for many IT departments to manage the entire company’s e-mail archiving in bulk so specific messages can be located in minutes, not weeks.
Policy-based e-mail archiving software applications allow IT managers to manage large e-mail archives, as well as to free up space on production servers and speed up backup times. These applications typically include indexing and search capabilities, access logs to provide a “virtual paper trail” in the event an e-mail is subpoenaed, and a lifecycle management component, which acts as a kind of traffic cop for all e-mail coming into the company. The lifecycle management component uses rules set up by the administrator: it classifies which e-mail messages need to be archived, migrates the messages to the most economical and efficient storage media, and automatically deletes messages when they are no longer needed.
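A minimal sketch of such a rule-driven lifecycle component is shown below. The rule fields, retention periods, and storage-tier names are invented for illustration; real archiving products expose far richer policy languages than a first-match lookup on the sender's domain.

```python
# Illustrative administrator-defined lifecycle rules: each rule says
# whether matching mail is archived, how long it is retained, and which
# storage tier it migrates to. All names and values here are hypothetical.

from dataclasses import dataclass

@dataclass
class Rule:
    sender_domain: str     # rule matches mail sent from this domain
    archive: bool          # should matching mail be archived?
    retention_days: int    # delete automatically after this many days
    storage_tier: str      # where the archived copy is migrated

RULES = [
    Rule("legal.example.com", archive=True, retention_days=3650, storage_tier="worm"),
    Rule("example.com", archive=True, retention_days=365, storage_tier="nearline"),
]
DEFAULT = Rule("*", archive=False, retention_days=30, storage_tier="none")

def classify(sender):
    """Pick the first rule whose domain matches the sender's address."""
    domain = sender.rsplit("@", 1)[-1]
    for rule in RULES:
        if domain == rule.sender_domain:
            return rule
    return DEFAULT

print(classify("counsel@legal.example.com").storage_tier)
```

First-match ordering matters here, just as it does in real policy engines: the more specific legal.example.com rule must sit before the broader example.com rule, or legal mail would fall into the shorter-retention tier.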
Many IT departments think disaster recovery (DR) and business continuity (BC) are the same thing. As a result, they tend to take a largely technology focus on the subjects. And that’s a problem, according to Michael Croy, director of business continuity at Forsythe Technology Inc., a Chicago-based IT consultancy and infrastructure firm specializing in BC and risk management.
“Many people are still confused by the terms DR and BC,” says Croy. “It is critically important that the DR plan is based on a solid BC plan that has taken into account the reality of the business requirements for recovery. If the DR plan cannot meet the requirements of the business units, it is of no value.”
Croy says business continuity plans touch all functions of a business — from personnel to facilities to IT. In terms of a hierarchical view, business continuity is at the top. Below it is the disaster recovery plan. And under that come technologies, such as enterprise backup, recovery and restoration. But true disaster recovery extends much more broadly than backup processes by using mirrored sites and replicated data to respond to an event. Similarly, business continuity goes well beyond disaster recovery by encompassing every aspect of company operations that could be impacted by a situation. Human resources, power supply maintenance or backup, transportation, food, health and safety issues all fall within business continuity.
The IT department with its disaster recovery plan is one element of a larger business continuity scenario.
John Glenn, a certified business continuity planner based in Clearwater, Florida, agrees that IT administrators need to take a wider view.
“Most people, especially MIS/IT folks, think BC is just a new name for DR,” says Glenn. “The difference is that DR for IT focuses solely on IT, and what IT perceives as the business unit’s requirements. BC, on the other hand, should focus on the business units and, by extension, all the resources required by the business unit.”
Industry observers say it’s clear that disaster recovery is one element of business continuity. While IT is junior to BC as a whole, the IT organization plays a central role in business continuity.
“It’s a big mistake to think the IT department is the only department needed to develop, test and recover the business,” says Gartner analyst Roberta Witty. “It is advisable to form a business continuity program with a dedicated team of people with a senior management sponsor.”
IT, though, would provide one representative to the core BC committee.
According to Witty, the committee would comprise anywhere from two to five members, depending on the size of the organization. This group would take a wide view of potential disasters.
“For example, consider employee health and welfare during an event. In a regional outage, you can’t expect personnel to show up for business recovery if they are having serious problems at home related to the event. You must support them and help employees be better prepared at home for disastrous events.” The American Red Cross, she says, can be brought in for this kind of training and awareness building.
Michael Gruth, head of system and network support at Deutsche Börse AG, the German exchange for stocks and derivatives, says the IT staff tends to find it easier to relate to the hardware, software and networking components of DR. He has assembled an AlphaServer/OpenVMS cluster spanning two sites five kilometers apart. In the process, he discovered there is a lot more to DR than additional Alphas and switches.
“Do not forget things like having an office at your mirror site for remote management,” says Gruth. “Also, don’t forget the human factor. While it may sound harsh to think about having additional employees to recommence business in the event of a tragedy, this is the reality we live in since 9/11.”
To help IT come to terms with a broader scope than disaster recovery, some IT organizations are dropping the term in favor of business continuity.
“We have gotten away from the term ‘DR’ as it assumes the facility is not available,” says Jeff Russell, CIO of The Members Group, an Iowa-based company that provides card processing and mortgage services to credit unions. “BC, on the other hand, deals with how we continue despite business interruption.”
Disaster recovery projects can easily run aground or fail to be funded if they are done in isolation. Glenn says it is essential to begin every initiative from the business continuity perspective in order to give technology its correct business context.
To make his point about business continuity not being a matter of technology, Glenn enters the debate about what is the best platform for disaster recovery, or what technological elements are most critical. Should you use OpenVMS or UNIX, mirroring or disk-to-disk backup, SAN or NAS, or all of them? Glenn cuts through the complexity and vendor hype with a simple answer.
“My number one DR or BC technology is pencil and paper,” he says. “Seriously, it’s not about platforms or technologies.”