5.3 System Security Flashcards

1
Q

OS hardening

A
  • We start with the discussion of OS hardening. Most operating systems have far more services and ports enabled by default than are needed. OS hardening is the process of getting rid of as much of that as you can. You need to evaluate each system individually because each system has different requirements. You need to leave different things enabled on a web server than on a mail server, for example. Also, be aware that operating systems have dependencies. If you remove service X because you don’t think you need it, you may break service Y that you do need. When you know the minimum level of services required for that system to do what it needs to do, you disable everything else. After you disable something, you then delete it from the system completely. By deleting the service, you don’t have to worry about keeping it patched or that some piece of malware might re-enable it.
  • Some systems have various management interfaces and/or applications. Two examples include the Remote Desktop Protocol and the Remote Registry Editing services in Windows. If these and similar services are not going to be used to administer the system, they should be disabled and removed from the system. Each one that is required should be secured as well as possible. In particular, ensure that only authorized system administrators can use them
  • The system should have accounts only for authorized users and administrators. In particular, any “guest” accounts should be removed if at all possible. Some versions of the Windows operating system do not allow you to remove the guest account. In this case, set a massively complex 127-character password (the longest Windows allows) and then disable the account. This way, if malware or an attacker succeeds in reenabling the guest account, they still have to defeat that password to use it.
  • For the operating system to be considered secure, you have to secure the applications running on that OS. The exact same principles apply here as with the OS. If you don’t need it, uninstall it. If you do need it, ensure it runs with the minimum possible permissions. Ensure only personnel who should be using that application can use it. If there is an available security feature, enable and configure it, even if you are not sure how it can help you
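The 127-character-password advice above can be sketched in code. This is a minimal illustration using Python's standard `secrets` module; the function name and character set are my own choices for the sketch, not part of any Windows tooling.

```python
import secrets
import string

def generate_throwaway_password(length: int = 127) -> str:
    """Generate a maximally complex password for an account that will
    never be used interactively (e.g., an undeletable guest account)."""
    # Mix upper/lower case letters, digits, and punctuation for complexity.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

password = generate_throwaway_password()
print(len(password))  # 127
```

Using `secrets` (rather than `random`) matters here: it draws from the OS cryptographic random source, so the password cannot be predicted even if an attacker knows when it was generated.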
2
Q

Patches

A
  • You cannot hear this too many times: Keeping computers (both the OS and applications) up-to-date with patches is one of the most important things you can do today. Doing so is important for both security and usability/stability.
  • On Windows, use the built-in Windows Update. Microsoft has changed its approach to this dramatically in the last few years. It is far more difficult to put off updating the Microsoft operating system. In fact, for home users, you really can’t put off updates.
    o Feature updates are larger and released twice per year. These may include new applications or other significant changes to the user’s graphical interface. Feature updates require much more testing. In the past, feature updates were called “service packs”.
    o Quality updates are smaller and released monthly. These are not supposed to include any new applications or significant changes that the user would notice (though they sometimes do). Quality updates still require testing before deployment, but not as much as feature updates. In the past, quality updates were called “hotfixes” or “patches”.
  • Automatic updates are a double-edged sword. On the one hand, it is good that patches are applied quickly. On the other hand, it can also be a bad thing: sometimes patches can cause systems to become unstable and crash, so testing updates is necessary. For the home environment, automatic updates are probably fine. For production environments, you need to test patches before applying them, especially on servers. Linux and Mac operating systems also have automatic update capability. In fact, so long as you have Mac’s “Gatekeeper” feature enabled, you can only install software digitally signed by Apple, and Apple will not digitally sign software unless it has an automatic update feature enabled by default.
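The test-before-production advice above can be sketched as a staged rollout order. Everything here (the role names, the priority scheme) is a hypothetical illustration, not any vendor's patching API:

```python
def rollout_order(machines):
    """Order machines so a patch hits test boxes first and production
    servers last. The 'role' values are illustrative."""
    priority = {"test": 0, "workstation": 1, "server": 2}
    return sorted(machines, key=lambda m: priority[m["role"]])

fleet = [
    {"name": "db01", "role": "server"},
    {"name": "lab-vm", "role": "test"},
    {"name": "alice-pc", "role": "workstation"},
]
order = [m["name"] for m in rollout_order(fleet)]
print(order)  # ['lab-vm', 'alice-pc', 'db01']
```

The point of the sketch: a patch only reaches `db01` after it has survived the test ring, which is exactly the discipline the card recommends for production environments.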
3
Q

Virtualization

A
  • To understand virtualization in computers, one must first answer the question, “What is a virtual computer?” Looking at the definition of the word “virtual” is a good place to start in answering that question. We might define the word virtual as: appears to exist, but does not physically exist. We would therefore define a virtual computer as: a computer that appears to exist but is not a physical computer; instead, the computer is made by software to appear to exist. Some understand it better by noticing the synonym “simulated.” So, one could say that a virtual computer is a simulated computer
  • So, to say it another way, in that screenshot, you see a total of four computers running. What this looks like from a slightly more technical perspective is depicted in the diagram on the left-hand side of the slide. There you see the computer at the bottom, with its hardware (CPU, HDD, Monitor, etc.). That is running the host operating system (Windows, Mac, Linux, etc.). The operating system is running the virtualization software, which includes the all-important hypervisor. The virtualization software creates fake (or virtualized) hardware for each virtual computer. If you look at each of those virtual computers’ configuration, each will show its own dedicated hardware. For each virtual machine, the settings will show a NIC, a video card, a hard disk drive, and so on. Again, all of that hardware is fake and being virtualized by the virtualization software, but the guest operating system does not know that; it thinks it is running on its own dedicated hardware
  • Good hypervisor = good virtualization – bad hypervisor = bad virtualization
  • A bit of trickery has to be going on in the background to make virtualization work. Previously, we stated that the guest OS wholeheartedly believes it is running on dedicated hardware. As far as that OS is concerned, there is a physical hard drive, video card, NIC card, CPU, and all the other components that a physical computer has.
    o In reality, those components are shared with that guest OS, any other running guest operating systems, and the host OS. The magic that makes this work is called the hypervisor. When the guest OS needs to write something into RAM, it uses the same system call that it would typically use to do so. When it sends the RAM write request to its “hardware,” the hypervisor intercepts the write request and redirects it to the real physical RAM to be written. The same holds true when the guest OS needs to display something on the screen or any other function: The hypervisor intercepts the request and redirects it to the real, physical hardware. On the actual physical hard drive of the computer is a large file, usually with a (dot)VMDK file extension. When the guest OS needs to read from or write to its hard drive, it does so in the standard way. The hypervisor intercepts that access to the hard drive and redirects it, reading or writing with the (dot)VMDK file
  • The ROI makes this popular:
    o Less power, less cooling, less floor space
    o In data centers, 1,000 physical servers need enough electricity to power them, plus enough cooling to handle the heat they generate, and those costs add up
    o Big $$$ savings in data centers
  • You can run anywhere from one to thousands of VMs on a single host – you just need the hardware resources to handle it
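The hypervisor's intercept-and-redirect trick described above can be modeled with a toy sketch. This is a gross simplification for illustration only; real hypervisors operate at the hardware-virtualization level, and the class and file names here are invented:

```python
import io

class ToyHypervisor:
    """Toy model: the guest believes it writes to its own dedicated disk,
    but every request is intercepted and redirected into one backing file
    (standing in for the .vmdk file on the host's real hard drive)."""
    def __init__(self, backing_file):
        self.backing = backing_file  # the ".vmdk"

    def guest_disk_write(self, offset: int, data: bytes):
        # Intercept the guest's write request, redirect to the backing file.
        self.backing.seek(offset)
        self.backing.write(data)

    def guest_disk_read(self, offset: int, size: int) -> bytes:
        # Same interception on reads.
        self.backing.seek(offset)
        return self.backing.read(size)

# In-memory stand-in for the host's physical disk file.
vmdk = io.BytesIO(b"\x00" * 1024)
hv = ToyHypervisor(vmdk)
hv.guest_disk_write(100, b"hello guest")
print(hv.guest_disk_read(100, 11))  # b'hello guest'
```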
4
Q

Cloud computing

A
  • With an understanding of virtual computing, we can move into a discussion of cloud computing. The first question to ask here is, “What is a cloud?” According to Google’s online dictionary, cloud computing is, “the practice of using a network of remote servers hosted on the internet to store, manage, and process data, rather than a local server or a personal computer.”
  • You can do virtualization without the cloud, but you can’t realistically do cloud without virtualization, since virtualization is what makes the cloud so much cheaper
  • Almost without exception, those remote servers are virtual computers in the data center of a cloud provider (or possibly the company’s own data center). On this slide and the next several that follow, you will see pictures of equipment racks in data centers. Physically, this is what “the cloud” looks like. Large cloud providers such as Amazon, Microsoft, Google, Rackspace, and so forth have data centers containing thousands of these equipment racks. Each of those equipment racks could be running 100+ physical computers. Each of those physical machines could be running over 1,000 virtual computers. If you do the math, each of those thousands of equipment racks is running over 100,000 virtual computers. That is the real cloud
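The "do the math" step in that last bullet, spelled out using the paragraph's own rough figures:

```python
servers_per_rack = 100     # "100+ physical computers" per equipment rack
vms_per_server = 1_000     # "over 1,000 virtual computers" per physical machine

vms_per_rack = servers_per_rack * vms_per_server
print(vms_per_rack)  # 100000 -- "over 100,000 virtual computers" per rack
```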
5
Q

Software-as-a-service

A
  • Software as a Service (SaaS) is probably the most common form of cloud computing. You have probably used it on many occasions, even if you did not realize you were doing so. For example, if you have a Gmail, Hotmail, Yahoo! Mail, or any other online mail service, you have used SaaS. Here, the cloud provider (or provider of service) makes its software available to you, most commonly through a browser interface. You interact with the software in a thin-client model. In other words, little or nothing is stored on your local hard drive. For example, if you log in to your Gmail account and see there are 1,000 email messages in your inbox, you can click those messages to read them, reply to them, and so on. But those email messages are not stored on your local computer. In the case of Gmail, they are stored on servers in Google data centers. Only when you download an email attachment is anything stored locally
  • The bad news: In SaaS situations, the customer has little control over any part of the experience. If Gmail (continuing with our earlier example) decides to make a change to its interface, you have to use that change. In other words, you use the service today and it looks one way; when you use it tomorrow, it might look completely different. The good news: Your IT staff does not have to devote any resources to maintaining an email system (or whatever service is delivered via SaaS).
6
Q

Platform as a service

A
  • The Platform as a Service (PaaS) solution is typically a service purchased by a corporate IT department. Instead of having their own computers in their own data center, they purchase computers from a cloud provider. In reality, they are almost always actually purchasing one or more virtual servers. They tell the cloud provider what operating system they would like (Windows or Linux, typically). The cloud provider sets up the virtual server and grants the customer IT staff access to it.
  • The IT staff then manages the system remotely just as it would if it were located in its own data center. It installs software, configures services, and so on. The cloud provider is responsible for keeping the system running and usually for doing things such as keeping OS patches in place. The corporate IT staff is responsible for maintaining whatever applications it uses for the system to run. Here, the corporate IT staff has more control than with SaaS solutions, but not as much control as it would have over its own server. For example, it may not dictate how quickly or even if certain OS patches will be applied.
7
Q

Infrastructure as a service

A
  • Infrastructure as a Service (IaaS) is sometimes referred to as a “server in the cloud.” The cloud provider sets up the hardware (typically, in reality, a virtual machine). The cloud provider in most cases also installs the base operating system of the customer’s choice (Windows or Linux). From there, the cloud provider is responsible for providing the power and cooling to keep the system running and the internet connectivity so that it can be accessed. Nothing more.
  • The corporate IT staff that has purchased (or, more accurately, rented) these servers performs all maintenance tasks. This is truly the same thing as an IT staff maintaining its own servers in its own remote data center, which is common. The only difference is that it doesn’t happen to own that data center and it doesn’t actually own the servers. The IT staff has the greatest degree of control but also the greatest degree of responsibility for maintenance. That is the biggest difference between Platform as a Service (PaaS) and Infrastructure as a Service (IaaS): How much control does the IT staff want to retain for itself? More control usually means greater flexibility, but it also means greater responsibility for keeping things up-to-date and running properly
8
Q

Cloud Security Alliance

A
  • For a very long time, there was little or no guidance regarding cloud security. That has now changed. The Cloud Security Alliance is a multi-national organization with over 100,000 members and nearly 30 working groups. They have now published their Cloud Controls Matrix (CCM), which is available at the link: https://sec301.com/CCM When you download the document, you will find extremely detailed guidance for creating more secure cloud operations. This is valuable to cloud providers as well as customers. The CCM is broken into 17 primary domains. Each domain is broken down into individual controls – over 200 controls in all. As we said, it is very detailed.
9
Q

Shadow IT

A
  • This is a topic that has truly exploded onto the cyber security scene in the last few years. Any time an employee is utilizing any type of IT resource that is not authorized by the organization’s IT department, you refer to it as shadow IT.
  • Unauthorized Internet connections: Check with the accounting department. If the organization has two authorized Internet connections, but accounting pays seven Internet Service Providers per month, you have shadow IT in your midst. This often happens in organizations with stringent change management and/or stringent firewall rules. A manager asks for access to something and is denied. The manager then utilizes their own budget to purchase access. Note, this gets much harder to find when the Internet access is charged to the manager’s own credit card, then submitted for reimbursement. In either case, accounting records are your best shot at discovering this.
  • Hardware: Once you find the unauthorized Internet access above, you might want to find out what hardware is connected. Did the manager go purchase their own computer servers and connect them to the Internet? Keep in mind, if the company runs on a 10.x.x.x address space, but the unauthorized equipment is on a 192.168.x.x address space, it becomes much more difficult to discover
  • Software: It was not all that long ago that obtaining software meant going to the store to purchase a physical box with disks inside. Today, both commercial and free software are an easy and quick download away. Unless you have user-permissions locked down to prevent software from being installed, you will have unauthorized software. It may be on the unauthorized hardware above, or it may be on hardware the IT staff knows about. In either case, it can be a problem.
  • Cloud Services: Finally, we get to the biggest shadow IT culprit in recent years, cloud services such as Dropbox, Google Drive, and a long list of others. Remember our discussions from earlier about losing legal protection for proprietary data if you don’t protect it? If an employee uploads proprietary information to a cloud drive that does not encrypt both in transit and at rest, you have almost certainly lost legal protection for that data. If your organization does not utilize a cloud storage provider, Dropbox, for example, you can control this by blocking access to Dropbox at the firewall (or using a sinkhole, etc.). But what happens if the organization has an authorized Dropbox account? Then you cannot block Dropbox at the firewall; it must resolve correctly via DNS, etc. If an employee decides to use their personal Dropbox account, that is much, much harder to control. Also, note that the employee can upload data to most cloud storage locations via a browser, so preventing software installation does not help either
  • Do note there are those who believe shadow IT is wonderful from a productivity perspective, and it can be. Yes, we made a big point in our discussion of the Principle of Least Privilege saying that, “Everybody can do everything they need to be able to do” and that, “security cannot get in the way of the mission”. There are those who argue that you violate those principles by shutting shadow IT down. This is not the case. The key word here is “unauthorized”. If you cannot provide enough justification for something to be approved, then it is unauthorized. Moving ahead blindly from there is anti-mission. A, “damn the torpedoes, full speed ahead” approach is just as wrong as security getting in the way of the mission.
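One way to hunt for the unauthorized-address-space problem mentioned above is a simple inventory check with Python's standard `ipaddress` module. The networks and host addresses below are illustrative, mirroring the 10.x.x.x versus 192.168.x.x example from the card:

```python
import ipaddress

def find_unauthorized(hosts, authorized_networks):
    """Return the hosts that fall outside every authorized network."""
    nets = [ipaddress.ip_network(n) for n in authorized_networks]
    return [
        h for h in hosts
        if not any(ipaddress.ip_address(h) in net for net in nets)
    ]

# The company officially runs on 10.0.0.0/8; rogue gear sits on 192.168.x.x.
seen_on_the_wire = ["10.1.2.3", "10.9.8.7", "192.168.50.10"]
rogue = find_unauthorized(seen_on_the_wire, ["10.0.0.0/8"])
print(rogue)  # ['192.168.50.10']
```

Of course, this only flags what you can see; as the card notes, gear behind its own unauthorized Internet connection may never show up on your wire at all, which is why accounting records remain the better detective control.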
10
Q

Validated backups

A
  • A backup is a copy of information that is stored on media separate from the original. If the media on which the original is provided fails, the backup can be used to restore the information. Note that a copy is not necessarily the same as a backup. If the copy is on the same media as the original, it is a copy (but not a backup). If the copy is on different media, it is a backup. Original information can be lost in a variety of ways. Users can accidentally delete files, data on media can become corrupted, and disk drives can become inoperable. Prudent practice takes this into consideration and provides for recovery in the event of failure.
  • Although it can be easy to say, “back up your data,” it’s important to implement a mechanism that is easy for users to execute. Telling users to back up their information from their workstation (or laptop) to the server and not providing a mostly transparent mechanism for doing so is an invitation to disaster.
  • Validated backups should be done automatically – including an automatic verification of data integrity.
  • There must also be periodic test recovery – if recovery has not been validated, it is not a backup
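Automatic integrity verification can be as simple as comparing cryptographic digests of the original and the backup. A minimal sketch (real backup software does this per file or per block; the data here is invented):

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex digest of the data's SHA-256 hash."""
    return hashlib.sha256(data).hexdigest()

def verify_backup(original: bytes, backup: bytes) -> bool:
    """A backup you have not verified is just hope: compare digests."""
    return sha256_of(original) == sha256_of(backup)

original = b"payroll records"
good_backup = b"payroll records"
corrupt_backup = b"payroll record\x00"  # one byte of corruption
print(verify_backup(original, good_backup))     # True
print(verify_backup(original, corrupt_backup))  # False
```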
  • There are three basic types of backups: Full, incremental, and differential. Each involves different tradeoffs in terms of speed and recoverability
    o A full backup is just what it sounds like: A complete dump of all the data on a device or system. Full backups are the easiest to understand conceptually. If you want to back up your C: drive, dump it all out to a network tape drive or a few DVDs
    o We should distinguish the difference between a full file system backup and a disk image backup. When you back up a system using a file system backup, the backup software traverses all the directories and files in the file system and systematically copies them to the backup location. What you get in the end is a copy of the file structure of the original device and all the information contained in those files. In contrast, a disk image backup examines each sector of the physical disk and copies them intact to the backup media. What you get in the end of a disk image backup is a sector-by-sector copy of the original device. This may include all unused sectors on the disk, as well as any slack space data that the drive may contain
    o Most production data center environments use the file system backup process, as they typically need to restore files and directories to recover lost data, and the device where the restoration takes place may have different physical properties than the device from which the backups were taken. If that is the case, the number of sectors and their location on the target will be different than that of the original device, making a sector-based restore difficult, if not impossible. Disk image backups are typically used when it is important to understand the precise physical state of the device, including unused sectors, such as in the case of a forensics investigation. It can also be a quick method of duplicating one device to another if the two devices share the same physical characteristics.
    o The advantage of full backups is that creating and tracking them is straightforward, as each set of backup media has a complete system image and gets labeled with the date the backup was taken. Taking a full backup might take quite a while, depending on the size of the system being backed up. When you need to restore your system, you determine the date you want to restore from, grab the tapes or disks from that day’s full backup, and restore the data, in full, onto the system. The disadvantage that full backups have is that they can potentially use up huge amounts of backup media
    o Enter the incremental backup. Incremental backups take advantage of the relatively small number of files that actually change on a typical system during normal usage: each incremental backup copies only the files that have changed since the previous backup, whether that previous backup was full or incremental. The typical incremental backup method starts by taking a full backup of the system on a Sunday evening (for example). Then, for each of the following days (Monday through Saturday), an incremental backup is taken. On the following Sunday, another full backup is taken, and the cycle starts over again.
    o The biggest advantage to the incremental system is that it saves a great deal of backup storage space. The full backups take a lot of room, but the incremental backups take only a small amount of space to keep up with the changes. The disadvantage of the incremental system is that it can take a long time to restore the data you need. If you take your full backup on Sunday evenings but you want to restore the system as it appeared on Sunday morning, you need to restore a full week’s worth of tapes before you are through. You start by restoring the previous Sunday evening’s backup, and then successively restore the backups from Monday evening, Tuesday evening, and so on through Saturday evening. That’s seven rounds of restores! But that’s the trade-off you make for using up as little backup storage space as possible
    o A good compromise that combines the speed of full backups and the minimal storage space usage of incremental backups is the differential backup. With differential backups, you back up only files that have changed since the last full backup. This uses more backup storage space than the incremental method, but when it comes time to restore a system, you make up for it in speed
    o As with most security issues, which method you use is a trade-off between cost, speed, and convenience. If you need complete images of a system each time you back up, and you don’t mind the extra system downtime that will take, use full backups. However, you pay for it in storage media costs. If saving money on storage media and minimum system downtime are the most important, and you don’t mind spending some extra time restoring data, go with the incremental system. If speed of restoration is a priority, and you are willing to sacrifice some downtime and spend a little extra on storage media, the differential system is a good compromise.
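The trade-offs above can be illustrated with a sketch of which files each backup type selects, based on modification times. The timestamps and file names are invented for illustration:

```python
def files_to_back_up(files, last_full, last_any, mode):
    """files: {name: mtime}. Select what each backup type copies.
    - full: everything
    - incremental: only files changed since the last backup of ANY type
    - differential: only files changed since the last FULL backup
    """
    if mode == "full":
        return set(files)
    cutoff = last_any if mode == "incremental" else last_full
    return {name for name, mtime in files.items() if mtime > cutoff}

# Sunday full at t=0, Monday incremental at t=1; it is now Tuesday.
files = {"a.doc": 0.5, "b.xls": 1.5}  # a.doc changed Sunday night, b.xls Monday night
print(sorted(files_to_back_up(files, 0, 1, "incremental")))   # ['b.xls']
print(sorted(files_to_back_up(files, 0, 1, "differential")))  # ['a.doc', 'b.xls']
```

Notice the restore-time consequence: after a crash you need the full backup plus every incremental in order, but only the full backup plus the single latest differential.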
11
Q

Backing up systems, applications, and data

A
  • Both Windows and Mac include a feature that performs a full system dump. In other words, it is not just a backup of your data – it is a backup of your data, your operating system, your applications, system settings, and so on. It is a full replication of your computer and everything it contains, written to a directly connected external hard drive. On Windows 10, this is called “Backup and Restore”. On Mac, it is called “Time Machine”. In the case of Mac, for example, it performs full system images to the external drive hourly for 24 hours. It then keeps one of those as a daily backup for a month, and one of those as a weekly backup for as long as the drive space allows. Once you run out of space, it deletes the oldest backup on the drive. In IT, we call this first-in-first-out or FIFO
  • Why this is important: A while back, the course author was updating his Mac computer to a new version. For some reason, the update failed, and his computer would no longer reboot. He used the built in Mac Time Machine utility to restore his system to a point two-hours in the past. The utility pulled the system image from two hours prior and rewrote everything on his hard drive. Essentially, he rolled the computer back two hours in time. It was like the attempted update never happened. He tried the update again and it worked perfectly.
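The first-in-first-out rotation described above can be sketched as follows. The capacity and labels are illustrative, and this is not how Time Machine is actually implemented; it just shows the FIFO policy:

```python
from collections import deque

class FifoBackups:
    """Keep at most `capacity` system images; when full, delete the
    oldest image first (first-in-first-out), as Time Machine does when
    the external drive runs out of space."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.images = deque()

    def add(self, label: str):
        if len(self.images) == self.capacity:
            self.images.popleft()   # drop the oldest image
        self.images.append(label)

store = FifoBackups(capacity=3)
for hour in ["09:00", "10:00", "11:00", "12:00"]:
    store.add(hour)
print(list(store.images))  # ['10:00', '11:00', '12:00']
```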
12
Q

Backup locations

A
  • Let’s assume you have a backup strategy, and perhaps you’ve even started doing a few backups. Now, where do you put all the tapes you are generating? Here, you have several options, again each with its own benefits and trade-offs. The initial inclination will be to store all the tapes at the location where the systems are so that storing and retrieving the tapes can be done quickly, facilitating quick recovery in a disaster. Unfortunately, if you take backups long enough, you may run out of available physical storage capacity for all your backup media. In addition, if you have a disaster at your site (such as a fire or flood), your backup tapes may be lost along with the rest of the site. As backups are a primary disaster recovery mechanism, their loss in such a disaster would be ironic, to say the least. To counter the threat of loss due to a disaster, many organizations choose to store backup tapes off-site, either at another location owned by the organization or at a commercial storage facility. Storage at another organization-owned location has its advantages, including cost and availability of space. Commercial sites offer such features as climate control, advanced fire detection and suppression, and enhanced security. Of course, all these benefits come at a price. There are a couple of considerations when you start to evaluate off-site storage locations. The first is the distance between the primary site and the backup location. A natural thought would be to select a backup site close to the primary site, as that facilitates quick retrieval of the tapes in the case of a disaster. However, if the disaster is regional in nature (for example, a hurricane or regional power outage), a backup site in the same city or state may be facing the same disaster as the primary site
  • Conversely, storing the backup tapes in another state or region lowers the risk of a common disaster but raises the cost of transporting tapes to and from the backup site. Then you must also consider the transportation method for getting the backups from the storage site to the backup site. For example, if you store your backup tapes at a commercial storage facility several states away and rely on an overnight courier to get them to the backup site, what do you do in a 9/11-like event where air traffic has been stopped? This may seem like an extreme example, but it highlights some of the logistical issues off-site storage may bring. The use of high-speed data networks for over-the-wire backup alleviates this concern somewhat, but at the cost of a slower backup and restore process due to line speed limitations. One good compromise is a hybrid approach in which backups are stored at different locations depending on their age. The most recent backups (for example, tapes from the past two weeks or month) are stored in a safe facility on-site (perhaps a specially constructed room with tight security and enhanced fire suppression). This gives the organization quick access to the data that is most likely to be needed in a hurry. (“Oops! I just accidentally deleted the spreadsheet I’ve been working on all week!”) Tapes older than that are shipped to an off-site facility for long-term storage because the company is less likely to need that on a regular basis.
13
Q

Cloud-based backup

A
  • Cloud-based backup is becoming extremely common. There are several high-quality services available for purchase. This type of backup is even built into the most recent versions of Microsoft’s server platforms. The idea is very simple: You send your data across the internet and store it on the cloud company’s hard drives. This requires you to use the cloud provider’s application for making backups. So, although you own your information, you do not own the application. This is especially important because it means you also have to use the provider’s software to retrieve your data. If you decide to change backup companies, the provider could potentially block you from accessing your own data. Also, if the provider goes out of business, your data could become completely inaccessible. Choosing providers wisely is vitally important
  • Last but not least, you are putting all your data in someone else’s hands. Assuming the data in question is even a little bit sensitive, you really need to use zero-knowledge systems.
14
Q

Zero knowledge and cloud storage

A
  • When you put sensitive information into cloud storage, it is obviously important to protect that information. Several providers now provide “Zero-Knowledge” implementations. If you set this up correctly (and use a good, strong passphrase), it can be highly secure. The section numbers below match those on the slide above.
    o Zero knowledge: the system is set up in such a way that, even though you are putting your data on the cloud company’s systems, the provider has zero knowledge of your data in an unencrypted form
  • SECTION 1: On your local computer, you enter a good, strong passphrase. Software on your computer takes that passphrase and runs it through an extremely complex process called Password-Based Key Derivation Function 2 (PBKDF2). (The detail of PBKDF2 is far beyond the scope of this class, but just understand that it takes a passphrase and derives an encryption key from it.) In most implementations, the passphrase runs through the PBKDF2 process 5,000 times (each iteration makes the resulting key more random). The end result is an encryption key.
  • SECTION 2: You then create a document of some type. When you save that document, the passphrase-derived key is used to encrypt the document. The encrypted form of the document uploads to the cloud storage provider.
  • SECTION 3: On a computer with the zero-knowledge application installed (perhaps the same one, perhaps another one), you click to open the file stored in the cloud. The software prompts you for your passphrase. You enter the same passphrase you entered in section 1. That passphrase runs through PBKDF2 the same number of times as before and generates the exact same encryption key. The encrypted file downloads, and the key decrypts the file.
  • NOTE: Neither the key nor the passphrase used to create it is ever in the possession of the cloud provider. They CANNOT access your data. They have “Zero Knowledge” of your data.
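The key-derivation step in Section 1 can be demonstrated with Python's standard `hashlib.pbkdf2_hmac`, using the 5,000 iterations mentioned above. The salt here is a fixed illustrative value; real implementations use a random, stored salt, and commercial products may use different iteration counts and hash functions:

```python
import hashlib

def derive_key(passphrase: str, salt: bytes, iterations: int = 5_000) -> bytes:
    """Derive a 32-byte encryption key from a passphrase with
    PBKDF2-HMAC-SHA256. The same passphrase + salt always yields the
    same key, which is why the client can re-derive it later without
    the cloud provider ever seeing the passphrase or the key."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations)

salt = b"illustrative-fixed-salt"  # assumption for the demo only
k1 = derive_key("correct horse battery staple", salt)
k2 = derive_key("correct horse battery staple", salt)
k3 = derive_key("wrong passphrase", salt)
print(k1 == k2, k1 == k3)  # True False
```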
15
Q

Versioning backups and cloud backup

A
  • It has always been a good idea to use versioning backup services. With the advent of ransomware, the use of versioning backup is now critical! Note that the section numbers below match those in the slide. Instead of recovering the most recent version, which the ransomware encrypted, you recover the latest version before that and get your data back without paying the ransom. This is the best defence you have against ransomware
  • SECTION 1: You create a document on your local computer. When you save that document, it is saved to your local hard drive. In addition, backup software you have running on the computer also places a copy of that file in the cloud provider’s storage (vital—this needs to happen automatically).
  • SECTION 2: At some point, you edit that document. When you save the changes, the new version is written onto the local hard drive and a copy goes into cloud storage. Notice that the cloud provider’s storage system now contains two copies of your document: The one you originally saved and the new version. (Some cloud providers offer unlimited versions. Others keep a subset, for example, the last five versions, or the last 90 days’ worth, etc.).
  • SECTION 3: Let’s say that at some point, you get hit with Ransomware. The Ransomware encrypts all of your files, including the one in our example here. The encrypted version of this file will back up to the cloud provider’s storage. However, because of versioning, your prior, non-encrypted versions of the document remain in the cloud provider’s storage as well. Once you clean the Ransomware off your system, you can simply download the most recent prior version, and you have your data back in a non-encrypted form.
  • Note: You can, of course, combine versioning and zero-knowledge for highly secure and durable backups.
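The versioning-and-restore flow in Sections 1 through 3 can be sketched as a toy store. The class, file names, and data are invented for illustration; real providers track versions server-side with retention policies:

```python
class VersioningStore:
    """Toy cloud store keeping every version of each file. After
    ransomware syncs up an encrypted copy, the prior clean version
    is still present and can be restored."""
    def __init__(self):
        self.versions = {}  # file name -> list of contents, oldest first

    def save(self, name: str, data: bytes):
        self.versions.setdefault(name, []).append(data)

    def restore(self, name: str, steps_back: int = 0) -> bytes:
        # steps_back=0 is the latest version, 1 the one before it, etc.
        return self.versions[name][-1 - steps_back]

cloud = VersioningStore()
cloud.save("report.doc", b"draft 1")                            # Section 1
cloud.save("report.doc", b"final draft")                        # Section 2
cloud.save("report.doc", b"\x8f\x02ransomware-encrypted\x11")   # Section 3: attack syncs up
clean = cloud.restore("report.doc", steps_back=1)
print(clean)  # b'final draft'
```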
16
Q

Keith’s secret sauce

A
    1. When he saves File1.doc on the silver laptop, it automatically backs up to Sync.com.
    2. Sync is a service that offers unlimited versioning with Zero Knowledge—meaning that every version of File1.doc is kept there, and all of them are fully encrypted in such a way that Sync cannot see the contents.
    3. Sync automatically replicates the file to the grey laptop, also in the hotel.
    4. Sync automatically replicates the file to the Windows desktop (in the home office).
    5. Sync automatically replicates the file to the Mac desktop (in the home office).
    6. As soon as the file is on the Mac desktop, it is backed up to CrashPlan, which is another company that provides unlimited versions with Zero Knowledge.
    7. The author also has external USB drives connected. Within an hour, Mac Time Machine will create a full system image of the silver laptop. The image contains the OS, software, settings, and all data on the laptop including File1.doc.
    8. There is an external USB drive connected to the grey laptop, so within an hour, there is a full system image on that drive as well.
    9. The Windows desktop has an external USB attached, and within an hour, Windows Backup and Restore with File History creates a full system image of that computer including the OS, software, settings, and all data.
    10. The Mac desktop has an external USB attached and Mac’s Time Machine creates a full system image of that computer every hour as well.
  • So within seconds of saving File1.doc, it resides in six locations. Two copies are with the author in his hotel room, two are on computers in his home, and two are in the cloud. Within an hour, four more copies are made of the file as part of full system images.
  • It should be noted that a backup is a copy of your data and must also be protected. That is why Drives 1, 3, 4, 5, 7, 8, 9, and 10 are fully encrypted. Even if an external USB is lost or stolen, without the very complex passphrase needed to decrypt it, the data cannot be recovered. Of course, with both cloud providers providing Zero Knowledge, no one without the different complex passphrase can recover the data from there either