Suggestions for defragmenting the system?
 
Author    Message
Naivor
Newbie
21 January 2009 @ 11:50
I couldn't find an existing topic where I could have picked up a new program for defragmenting the system.

So I'm trying to find a new defragmentation program, since PerfectDisk decided to stop working even after a reinstall. I did find some versions of various programs with Google, but I thought I'd ask here first.

Auslogics Disk Defrag is one program I was thinking of trying, but I'm not sure whether it's actually any better than Windows XP's own defrag utility.

Suggestions are welcome; the operating system is Windows XP. It doesn't matter whether it's free or paid, as long as you have personal experience with it.

Thanks in advance.
AfterDawn Addict

5 product reviews
21 January 2009 @ 12:25
Windows XP's own, especially since defragmenting has a pretty minimal effect on anything anyway.

Edit: The article is no longer available, but I still have a copy of it:
Quote:
Disk defragmentation -- how beneficial is it really?

For years, administrators have been hearing stories about whether or not they need to defrag their hard drives. How big of a problem is fragmentation really? How beneficial is disk defragmentation? Desktop management expert Serdar Yegulalp explodes some myths and defies some conventional wisdom about disk defragmentation in the tips below.

Disk Defragmentation Fast Guide

Introduction
Disk defragmentation: Performance-sapper or best practice?
New hard disk drives reduce need for disk defragmentation
Four steps to lessen the effect of fragmentation
Flash memory drive defragmentation: Does it make sense?
Three disk defragmentation issues defined


Disk defragmentation: Performance-sapping bogeyman, or best practice?

Disk defragmentation has become one of the big performance-sapping bogeymen of Windows systems. Like kudzu overgrowing the landscape, it can never be completely eliminated -- just kept at bay.

At least, that's the conventional wisdom.

Over time I've grown curious as to how much of the conventional wisdom about fragmentation—and defragging—is true. To that end, I set out to examine the subject with a fresh eye and find out just how much of a problem file fragmentation is, as well as how much of a benefit disk defragmentation is.

What is file fragmentation?

Let's begin by determining what fragmentation is. A file is stored in a file system as one or more allocation units, depending on the size of the file and the size of the allocation unit on the volume in question. As files get written, erased and rewritten, it may not be possible to write a file to a completely contiguous series of empty allocation units. One part of a file may be stored in one part of a disk, the rest of it somewhere else. In extreme cases, it may be scattered quite widely. This scattering is called file fragmentation.
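
To make that concrete, here is a minimal Python sketch (a toy model, not any real file system) that treats a volume as a list of allocation units and shows how a delete-and-rewrite cycle leaves a file split across non-contiguous units:

```python
# Toy model: a volume is a list of allocation units (clusters) and the
# value in each slot is the name of the file that owns it (or None).
# This is an illustration only, not how any real file system works.

def write_file(volume, name, n_units):
    """Place a file into the first free units found, wherever they are."""
    placed = []
    for i, owner in enumerate(volume):
        if owner is None:
            volume[i] = name
            placed.append(i)
            if len(placed) == n_units:
                return placed
    raise RuntimeError("volume full")

def delete_file(volume, name):
    for i, owner in enumerate(volume):
        if owner == name:
            volume[i] = None

def count_fragments(units):
    """A fragment is a maximal run of consecutive unit numbers."""
    runs = 1
    for a, b in zip(units, units[1:]):
        if b != a + 1:
            runs += 1
    return runs

volume = [None] * 20                   # 20 allocation units, all free
write_file(volume, "A", 6)             # A fills units 0-5
write_file(volume, "B", 6)             # B fills units 6-11
delete_file(volume, "A")               # deleting A leaves a hole at the start
c_units = write_file(volume, "C", 10)  # C: 6 units in the hole + 4 after B
print(c_units, "->", count_fragments(c_units), "fragments")
```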

The more fragmented a file, the more work the computer has to do to read it. Usually, this comes down to how fast the hard drive can seek to a specific sector and read the allocation units in question. If the computer has to read several fragmented files at once, the number of head movements and the amount of contention for disk access will go up -- and things will slow down.
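
As a rough illustration of that cost, the following back-of-the-envelope estimate (the 10 ms seek time and 50 MB/s transfer rate are assumed figures, not measurements) shows how the seek penalty only becomes significant once a file is split into many pieces:

```python
# Back-of-the-envelope read-time estimate for a fragmented file.
SEEK_MS = 10.0          # assumed average seek + rotational latency
THROUGHPUT_MBS = 50.0   # assumed sustained transfer rate

def read_time_ms(size_mb, fragments):
    transfer = size_mb / THROUGHPUT_MBS * 1000.0
    seeks = fragments * SEEK_MS
    return transfer + seeks

for frags in (1, 10, 100):
    print(frags, "fragments:", round(read_time_ms(50, frags), 1), "ms")
# ~1010 ms for 1 fragment vs ~2000 ms for 100 fragments of a 50 MB file:
# seeks only start to matter once a file is split into many pieces.
```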

File fragmentation became widely recognized as a problem in the early '90s. Hard disk drives were relatively small and slow (compared to today), and so fragmentation had a correspondingly big impact on system speed. To address the problem, Microsoft shipped DOS 6.0 with the command-line DEFRAG utility (licensed from Symantec). Some editions of Windows licensed a stripped-down version of what is still the most popular third-party disk defragmentation program for Windows: Executive Software's Diskeeper. Depending on their budget, as well as their inclination, most admins use either Diskeeper or Windows' native Disk Defragmenter tool. Either way, the prevailing sentiment is to defrag early and often.

Most of the conventional wisdom regarding defragging evolved from the way storage worked in the early 90s. The file systems used on PCs then were FAT12 and FAT16, holdovers from the days of MS-DOS. When Windows 3.1 and Windows 95 appeared, the limitations of FAT12 and FAT16 came to the fore. Neither file system could support volumes more than 4 GB in size or file names longer than eight characters. Both were notoriously prone to errors and data loss and were particularly prone to fragmentation.

Dump the FAT and move to NTFS

Microsoft introduced FAT32 as a way around some of these issues. But the best long-term solution was to dump File Allocation Table (FAT) entirely as a system-level file system and move to NTFS, a more robust file system that had been in use for Windows NT for some time. One of the many improvements NTFS provided was a reduced propensity for fragmentation. It doesn't eliminate file fragmentation entirely, but it does guard against it much better than any version of FAT ever did.

Today, no Windows system ships with anything older than Windows XP or Windows Server 2003, and the drives are almost always formatted with NTFS. So much of the impact of file fragmentation is lessened by NTFS' handling of the problem.

Nevertheless, file fragmentation can still create problems in NTFS. One of NTFS' quirks is that much of the metadata on a given volume is stored as a series of files, hidden to the user and prefixed with a dollar sign, e.g., $MFT for the volume's Master File Table and $QUOTA for the volume's user-quota data.

This allows NTFS to be extended quite elegantly in future iterations: If you want to create a new repository for metadata on a volume, simply create a new metadata file, and it should work with a high degree of backwards compatibility. In fact, NTFS 3.0 implemented new file-system features in precisely this manner, without completely breaking backwards compatibility with older Windows systems. (The one caveat: You couldn't perform a CHKDSK operation on an NTFS 3.0 volume if you were running a version of Windows that didn't support it completely.)

The downside of this mechanism is that the metadata itself can become fragmented, since it's stored as nothing more than a file. So the real issue with file fragmentation today (at least on NTFS) is not so much that individual files become fragmented, but that larger structures become fragmented -- for instance, the files in a given directory, or the metadata used by NTFS. Does this, then, affect performance in a way that fragmentation of the files themselves might not?

An upcoming article on this subject will show how the way hard disk drives work has significantly changed the way file fragmentation works, as well as how we deal with it.


New hard disk drives reduce need for disk defragmentation

Fragmentation is the demon that haunts hard disks for all users of Windows (and other operating systems). Defragmentation is the exorcist for that demon. After closely examining the subject, I've found that many of the things we held to be true about both fragmentation and disk defragmentation have become less and less true over time.

The NTFS file system in Windows has helped alleviate many of the worst problems associated with fragmentation. However, it has not cured them. In my first article on the topic, I hinted that NTFS might still suffer from serious problems because of fragmentation of the file system metadata.

However, file systems are not the only things that have changed in the last ten years or more. Hard disk drives have changed too.

In 1996, a new Western Digital hard drive holding 1.6 gigabytes (what you could expect to fit on a decent-sized flash drive today) cost $399. That drive was in the same 3.5-inch form factor used for today's hard disk drives, so the size of the platters in the drive was pretty much the same.

Today, a new Western Digital drive that costs $199 holds 500 GB. So for half the money you're getting more than 300 times more capacity.

But there have been other changes. The platter sizes of drives have remained the same, but the rotational speeds of the platters have gone up—to as much as 15,000 RPM on high-end drives. But the biggest change in hard drive technology over the past decade has been in information density. With so much more information crammed into the same space, the drive heads move that much less to read that much more data. This in itself speeds up data transfers.

Most hard disk drives now also come with an on-board cache of RAM, usually 16MB or more. Data can then be read from the disk into the drive's cache in one big chunk, and parceled out to the host computer as needed. This saves the hard drive from having to go back and dip repeatedly into the same parts of the drive for multiple bits of data. On top of everything else, the operating system does its own caching to further alleviate the effects of a fragmented file—or even fragmented NTFS metadata, which tends to be cached and held as needed.

What does all this mean? The big disadvantage of fragmentation -- that it scatters data across a drive -- has been greatly offset by all of these changes. Furthermore, 50MB of data fragmented across a ten-year-old hard drive will have a far bigger impact on performance than the same 50MB fragmented to the same degree on a new hard disk drive. Now factor in the benefits of caching, faster and smaller head movements, modernized file system storage and OS-level caching, and it becomes clear why fragmentation isn't anywhere near as bad as it used to be. (The one exception to this scenario is if the file is scattered literally all the way across the surface of the drive, which is fairly unlikely.)

While researching the topic of disk defragmentation, I talked to several experts on the subject. One of them was Mark Patton. Today he is development manager of CounterSpy Enterprise from Sunbelt Software, but years ago he worked on Executive Software's Diskeeper, the disk defragmentation software now used in a stripped-down version in Windows XP. Here's what he told me:

"Larger and faster drives have minimized the impact of fragmentation. The Windows file system tends to fragment files all on its own to a small degree, but fragmentation starts for real when the drive starts to get full—as in over 70%. "As the drive fills up, the larger areas of free space become scarce and the file system has no choice but to splatter large files around the disk. As the drive gets really full (over 90%), the file system then starts to fragment the MFT and the Pagefile. Now you've got a full drive, with lots of fragmented files, making the job of the defragmenter nearly impossible because it also needs space to do its job. "It is my opinion that a drive that is more than 80% full is not defragmentable. You can see that effect with huge hard disk drives, since they generally use smaller percentages of the drive's total free space. A side-effect is that the overall fragmentation is reduced, and the fact that these drives have faster seek times makes the effect even less noticeable.

"At the time I worked on Diskeeper, I always told people to 'defragment early and often' so that they could take advantage of the free space before their drive starts to fill up. This way, they could see a marginal improvement now, but, more importantly help, the defragmenter from getting log-jammed later on. With today's large drives, even this is not an issue."

In short, many of the issues that crop up tend to be because of a lack of free space causing cumulative problems. With larger drive sizes, faster access times and smarter handling of storage, the total impact of fragmentation has been reduced. That being said, defragging still has a place. My next article on this topic will include some recommendations about how and when disk defragmentation should be done in the real world.

Fragmentation hasn't been completely eliminated as an issue, but, thanks to changes in file system and hard disk drive technologies, it affects us much less than it used to. Consequently, disk defragmentation -- the cure for fragmentation -- isn't needed as urgently as it used to be either, although it still plays a role for IT administrators.

An expert on this issue is Mike Kronenberg, author of a disk defragmentation tool that was part of a suite of utilities originally published by the PC Utilities Mijenix division of Ontrack Data International, but which is now owned by VCOM. Kronenberg takes issue with some of the claims made by current vendors of defragmentation software.

"I challenge any defrag company to prove that, on a modern 2006 large drive about 50% full, defragmenting files will increase performance in any way that will be sensed by a user," says Kronenberg, noting that users are unable to tell the difference between Word loading in 6 seconds or 6.2 seconds. "Basically, nowadays defragmenting files will only provide a moderate performance boost when a drive is relatively full. Modern computers come with 250GB and larger drives that most people will never fill up."

According to Kronenberg, the amount of available free space on the drive determines how effective disk defragmentation is. A disk defragmenter can only work when there is free space available to move things around. The less free space, the more hobbled the disk defragmenter will be at doing its job. In addition, more free space on a drive means that there is more room for the file system to write out files and directories without fragmentation in the first place.
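
As a practical illustration, here is a small Python sketch that checks how full a volume is before bothering to defragment, using the rough 70% and 80% thresholds quoted earlier; the thresholds and the drive path are assumptions for illustration only:

```python
# Quick fullness check before defragmenting, using shutil.disk_usage.
import shutil

def fullness_report(path="C:\\"):          # assumed Windows system drive
    usage = shutil.disk_usage(path)         # (total, used, free) in bytes
    pct_used = usage.used / usage.total * 100
    if pct_used > 80:
        advice = "too full to defragment effectively; free up space first"
    elif pct_used > 70:
        advice = "getting full; fragmentation will start to accumulate"
    else:
        advice = "plenty of free space; defragmenting is unlikely to help much"
    return pct_used, advice

if __name__ == "__main__":
    pct, advice = fullness_report()
    print(f"{pct:.0f}% used -- {advice}")
```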

How to lessen the effect of fragmentation

So what are the best things to do to ameliorate the effects of fragmentation? Since most of the problems with fragmentation, as it pertains to modern hard disk drives, revolve around the amount of free space available, most strategies involve managing or making the best use of free space. Here are four steps to follow:

1. Defragment early. When you set up a new hard disk drive, whether as a data drive or as a new system drive, defragment it as soon as the first wave of installing applications, copying data, etc. is done. This allows the defragmenter to do a fair amount of work while there is still plenty of free space on the drive for that work to be done.

It also allows for some key file system structures to be written out in contiguous order, such as the swap or paging file, or the file the system uses to hibernate the computer. (Note: These files can be deleted and recreated if needed, but few people are in the habit of checking whether that is necessary, or of doing it.)

2. Free up some space, or add some. If you're using a hard disk drive that's more than 75% full and remains that way consistently, it will be that much harder to do anything about existing fragmentation. Either move some stuff offline or upgrade to a larger drive.

If the drive itself is more than a few years old, odds are you'll be able to upgrade to a model that has at least twice as much storage, a faster bus type, faster rotational velocity and bigger on-board cache—all at the same price you paid for the original (or even less). All these things help offset any degradation of performance due to fragmentation.

3. Add more physical memory. Adding memory to any computer improves its performance across the board, including how well it deals with fragmentation. In general, if there's more memory, the system can devote more memory to caching hard drive access, and the overall effects of fragmentation are further diminished. Memory is cheap enough now that it makes sense to add a fair amount to start with. It'll pay off one way or another.

4. Do defrag, but not to excess. Set up a disk defragmentation schedule, but do it only when the process of disk defragmentation is not going to hobble performance—i.e., when you're not actually at the computer (or server) in question, or during a period of low system activity. Defragging more than once a week is pointless; the time spent doing the defrag far outweighs the benefits gained, especially if you do it that obsessively.
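
As a sketch of that kind of schedule, the following Python snippet runs Windows' command-line defrag tool only during an assumed off-peak window and only once a week; the volume letter, the 2-5 AM window, and the weekly cadence are illustrative choices, not recommendations from the article:

```python
# Minimal scheduling sketch: invoke the built-in Windows command-line
# defragmenter only in an assumed off-peak window and only on Sundays.
import datetime
import subprocess

OFF_PEAK_HOURS = range(2, 5)     # assumed quiet window: 02:00-04:59
VOLUME = "C:"                    # assumed volume to defragment

def defrag_if_off_peak():
    now = datetime.datetime.now()
    if now.hour in OFF_PEAK_HOURS and now.weekday() == 6:   # Sunday only
        # Runs the standard "defrag <volume>" command; does nothing otherwise.
        subprocess.run(["defrag", VOLUME], check=False)

defrag_if_off_peak()
```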

When you think about it, the war against fragmentation is really more of a subset of the ongoing struggle most people have against running out of storage space. One of the corollaries of storage space is that if you have it, it will almost always manage to get filled with something. People who could never conceive of filling a 250GB drive are now finding a whole galaxy of things to put on it, including MP3s ripped from their CD library or downloaded from online music services, video torrents, and virtual hard disk drives for VMWare or Virtual PC. Now that the space is there, not only is cramming a hard drive with every form of digital debris possible, it's a little too easy.

The good news is that storage is not only continually becoming cheaper and faster, but more sophisticated. Solid state storage, where all the blocks in a given file system are contiguous and can be accessed at the same speed, might be the final death knell for fragmentation. But many of the negative effects are already being offset by technological developments that have been unfolding for some time.


Flash memory drive defragmentation: Does it make sense?

In a recent series of articles about the real impact of fragmentation in today's storage and operating systems, I concluded that while defragmenting was still useful, it had diminishing returns if used to excess. For instance, defragmenting more than once a week doesn't yield more than the most negligible benefits... unless you're deleting and adding a lot of files.

After reading the articles, someone emailed me to ask, "Do flash memory storage devices need to be defragmented?" At first I answered, "Probably not," but after some investigation, I came up with some justifications for defragging a flash memory device.

The big reason fragmentation has a harmful effect on hard disk drives is because it forces the drive to do more physical work to retrieve the same amount of data. The read/write heads have to move back and forth that much more, and the system sometimes has to wait on the drive platters to spin all the more, which incurs a cumulative performance penalty.

In short, the reason fragmentation causes perceptible performance problems is because drives have moving parts; they're not solid-state units, and they can't respond equally fast to every request for data.

On the other hand, flash memory devices have no moving parts. It takes an equally long time to retrieve any one byte of data as it does any other—or, if there is a delay, it's not something that is cumulatively measurable or perceptible to the end user. If a file gets fragmented on a flash memory device, it takes no measurably greater amount of time to retrieve it than if it is contiguous.

However, there are some flash memory devices that have very good sequential read performance, but very poor random read performance. This is not consistent across all flash memory devices, and it's probably a reflection of the way some flash memory is engineered. The way this came to light was through discussion of the ReadyBoost feature in Windows Vista, which allows a user to designate a flash memory device as swap space—provided the device is consistently fast.

Some flash memory devices use one block of very fast flash memory, but the rest of the device is composed of slower memory. Vista will report how much of the memory on the device is suitable for ReadyBoost; if it says some of it is too slow, that's a sign you have a device with mixed memory speeds. If such a device were defragmented, it might mean that blocks of data were being moved from slower memory into faster memory, which would explain a speed-up. But again, not all flash memory devices are engineered like this, so it's not a guideline for how they all might behave, and not a reason to recommend defragmentation unilaterally.

Then there's the question of what "contiguous" means on a flash memory device. Most flash memory devices also use wear-leveling strategies, which place an additional layer of abstraction between the data and how it's organized. This is done to keep the number of read/write cycles for any given block of memory from being prematurely exhausted.

This is why talking about a given file as "fragmented" on a flash memory drive is essentially meaningless; it could be stored by default in a number of blocks that are entirely disparate, and you'd never know. An argument could be made that the wear-leveling management mechanisms in a flash memory drive could, over time, create a kind of fragmentation effect. But again, the total bottleneck that such a thing would cause is probably too minimal to be measured or perceived.
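
A toy model of such a wear-leveling layer makes the point: the controller keeps its own logical-to-physical mapping, so blocks that look contiguous to the file system end up scattered across physical flash. This is a simplified illustration, not how any particular controller actually works:

```python
# Simplified wear-leveling model: each logical block is written to the
# least-worn free physical block, wherever that happens to be.
import random

class WearLevelingFlash:
    def __init__(self, n_blocks):
        self.erase_counts = [0] * n_blocks
        self.mapping = {}                     # logical block -> physical block
        self.free = list(range(n_blocks))
        random.shuffle(self.free)

    def write(self, logical_block):
        # Pick the least-worn free physical block, regardless of position.
        physical = min(self.free, key=lambda b: self.erase_counts[b])
        self.free.remove(physical)
        self.erase_counts[physical] += 1
        self.mapping[logical_block] = physical

flash = WearLevelingFlash(64)
for logical in range(8):                      # "contiguous" logical blocks 0-7
    flash.write(logical)
print(flash.mapping)                          # physical blocks are scattered
```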

According to an expert I talked to about this issue, another possible mechanism that might explain why a defragmented flash memory drive would run slightly faster than one that hasn't been fragmented is the total number of I/O operations required to retrieve a given set of data. A fragmented file requires more discrete I/O operations to fetch, so retrieving a number of fragmented files from such a device would probably accumulate a bit more I/O overhead than retrieving files that were contiguous. That said, without hard numbers to back this up, I have a hard time believing that the total I/O overhead in today's computers would create a cumulative delay that would be big enough to notice.
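
A rough count of those I/O operations, under two illustrative assumptions (at least one request per contiguous extent, and a 16 MB cap per request), looks like this:

```python
# Rough count of discrete read requests needed to fetch a file on a
# flash device. Both figures below are illustrative assumptions, not
# properties of any real device.
import math

MAX_REQUEST_MB = 16

def io_requests(extent_sizes_mb):
    # At least one request per extent; large extents need several.
    return sum(math.ceil(size / MAX_REQUEST_MB) for size in extent_sizes_mb)

contiguous = [64]           # one 64 MB extent
fragmented = [4] * 16       # the same 64 MB split into 16 small extents
print(io_requests(contiguous), "vs", io_requests(fragmented), "requests")
# 4 vs 16: more requests, but each is cheap on flash, so the overhead
# is rarely something a user would notice.
```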

Most of the data I have seen to support defragmenting flash memory has been anecdotal and not based on hard numbers—i.e., someone reported that a flash memory drive was slow, defragmented it, and then found it to be running much faster, without any useful information about what other factors might have changed. As before, if the drive is slowing down, that may be a hint that you have a flash memory drive that uses a mixture of fast and slow memory -- and you may simply want to look into replacing it with a drive that isn't engineered that way.

In short, defragmenting flash memory is probably not worth it unless you can demonstrate that there is a perceptible speed improvement by doing so. The key word is perceptible, and unless you are using measurable and testable metrics for judging such a thing, you may not be witnessing anything other than subjective bias about how fast such things should be.


Three disk defragmentation issues defined

After my three-part discussion on fragmentation and disk defragmentation appeared on this site, the emails from site members came pouring in. Most of them agreed with my conclusions... up to a point. Others politely (and some not so politely) disagreed with specific points I made.

A vast wealth of material resulted from these discussions, so I created this fourth installment in the series to address the key issues readers brought up, and to refine (and revise) some of my earlier findings. I still feel that most of my original points are valid, but it helps to see them in a larger context.

1. Workstations and servers have radically different disk defragmentation needs.

Several readers pointed this out to me, and it makes sense. Some servers—especially database servers and file servers—allocate and re-allocate space much more aggressively than a desktop system. My original one-size-fits-all discussion didn't take into account the fact that a highly trafficked file server will have a markedly different fragmentation profile than a desktop PC.

For most machines with very busy file systems, a third-party defragmenter that works progressively in the background or at scheduled intervals (for instance, during off-peak hours) will be a boon. Aside from the major defrag applications, I've looked into solutions such as programmer Leroy Dissinger's defrag utility Buzzsaw. This tool monitors one or more hard disk drives and defragments individual files whenever disk and CPU usage drop below a certain point. I've used it on a server that runs a database-supported Web site and have gotten very good results with it.

2. NTFS has measures to alleviate fragmentation, but they're far from perfect; disk defragmentation is still needed.

There's no question that fragmentation occurs on NTFS volumes, and it's good to defragment an NTFS volume periodically. However, the measures that NTFS takes to alleviate fragmentation are not perfect. Having said that, it makes sense to do this only when it's not at the expense of existing system performance—i.e., once a week for a workstation, more regularly for highly trafficked servers (albeit on a schedule where the defrag takes place during off-peak hours).

One problem that some readers pointed out is that NTFS doesn't guard well against free-space fragmentation. In fact, one person argued that the way NTFS allocates free space can actually make things worse.

But there's no consensus on what to do about it. One approach is to move all available free space into one contiguous block whenever possible. Unfortunately, this approach is defeated instantly by the way NTFS allocates free space.

A newly created file gets placed in the first available block of free space on an NTFS partition. The size of that free-space block needs to be big enough both to hold the file and some overage, since the file will likely change sizes in the future. But once the file is written and closed, any space remaining in that block is then returned to the drive's pool of available free space. If multiple files get written in this fashion, a single consecutive block of free space can get turned into a series of fragmented free-space blocks.
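
Here is a toy Python model of that behaviour: each file is reserved with some growth headroom, the next file lands after that reservation, and when each file closes, its unused headroom goes back to the free pool as a small free fragment wedged between files. The sizes and the 25% headroom figure are illustrative assumptions, not NTFS internals:

```python
# Toy model of free-space fragmentation from allocation with overage.
def simulate(file_sizes, overage=0.25):
    cursor = 0
    reservations = []                    # (start, reserved_units, actual_size)
    for size in file_sizes:
        reserved = int(size * (1 + overage))
        reservations.append((cursor, reserved, size))
        cursor += reserved               # next file lands after the reservation
    # Each file now closes; its unused headroom becomes a free fragment.
    free_fragments = [(start + size, reserved - size)
                      for start, reserved, size in reservations]
    return free_fragments

print(simulate([100, 100, 100]))
# [(100, 25), (225, 25), (350, 25)] -- one large free block has turned
# into several small, separated free-space fragments.
```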

To that end, it doesn't make much sense to aggressively defragment all free space into one big pool. Better to make it available in a number of reasonably sized (perhaps 64MB or so) pools. Microsoft concurs on this point in its documentation for the defrag tool in Windows 2000, saying that the effort involved to push all the free space together would negate any possible performance benefit.

I would argue that fragmented free space really becomes critical only when free space on a hard disk drive becomes extremely low—i.e., when the only space available is badly fragmented free space, and the system is forced to create new files in a highly fragmented fashion. But on a large enough drive, where the free space isn't allowed to go below 30%, this should almost never be an issue. There may still be fragmentation of free space, but large enough blocks of free space will almost certainly always exist somewhere on the drive to ensure that files can be moved or newly allocated without trouble.

I mention the 64MB figure as an adjunct to something I saw in the defrag utility now bundled with Windows Vista. By default, this utility will only attempt to consolidate fragments smaller than 64MB. My guess is that a fragment larger than 64MB is not going to impose as much of an overhead. Even if you have a very large file (a gig or more) broken into 64MB fragments, it won't matter as much because you're never reading more than a certain amount from the file at any given time. That is, unless you're accessing several such files at once, in which case the impact of fragmentation will take a back seat to the mere fact that you're reading multiple physical files from different parts of the disk.
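
Working that through with assumed numbers (10 ms per seek, 50 MB/s sustained transfer) shows why 64MB fragments barely matter for a large file:

```python
# Worked version of the claim above; seek time and transfer rate are
# illustrative assumptions, not measurements.
FILE_MB = 1024
FRAGMENT_MB = 64
SEEK_MS = 10.0
THROUGHPUT_MBS = 50.0

fragments = FILE_MB // FRAGMENT_MB            # 16 fragments
extra_seek_ms = (fragments - 1) * SEEK_MS     # seeks beyond the first
transfer_ms = FILE_MB / THROUGHPUT_MBS * 1000

print(fragments, "fragments")
print(f"extra seek time: {extra_seek_ms:.0f} ms "
      f"vs transfer time: {transfer_ms:.0f} ms")
# ~150 ms of extra seeking against ~20,000 ms of transfer: under 1%.
```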

3. Third-party disk defragmentation programs have more robust feature sets than the native Windows defragmenter.

This is undeniable. The native Windows defragmenting tool has only a fraction of the features offered by many of the commercial defragmentation products, such as the ability to defragment system files after a reboot or to defragment the master file table (MFT) space.

But do these additional features justify themselves? That depends on how badly the file system needs the kinds of defragmentation that only a more advanced defragger can provide. For instance, on a file system where the MFT isn't heavily trafficked (i.e., there aren't hundreds of thousands of files being created) and so isn't at great risk of fragmenting, defragmenting the MFT won't accomplish much, since the MFT isn't going to be fragmented much to begin with. On an extremely busy server, the feature would be more useful; on a workstation with a lighter file-creation load, less so.

While the MFT zone can be defragmented offline, it cannot be shrunk or resized, and no third-party tool can do this. The only way to resize the MFT if it's been expanded is to copy all the files off a volume, format it, and copy them back on again. On the other hand, NTFS re-uses space within the MFT itself if there's no other free space to be had, and the MFT should almost never grow to be a sizable percentage of the drive's space. A tool like NTFSInfo from Sysinternals can give you details about the MFT zone on a given drive. If for some reason it has become an abnormally large percentage of the drive's space, that might be a sign of something else being wrong.

There are other issues that I researched but came to no firm conclusions on. One, which is tangential to defragmenting free space, is the file placement issue. Windows XP, by default, analyzes file usage and tries to optimize access patterns to the most commonly used files in the system every three days. Some defragmenters (for instance, PerfectDisk from Raxco Software) work with this information to further optimize file access patterns, but it's not clear if this really does produce a performance improvement that's lasting and quantifiable.

(Incidentally, defragmenting the page file or Registry can be done without having to buy a separate application. For instance, the free tool PageDefrag is perfect for this sort of work.)

In conclusion, I want to emphasize and clarify three things that might have gotten lost in my discussion.

Fragmentation still exists and is a problem. I was not dismissing its impact wholesale. But its impact has been alleviated by advances in hard disk drive technology, operating system design and file system design, and its impact will continue to be reduced (but not eliminated) by further improvements in all of the above.

It's still a good idea to defragment regularly, but there's little point in doing it obsessively when the real-world benefits might not be measurable in any reliable way. More than once a week for a workstation seems to cross the point of diminishing returns (although there are exceptions, which I'll go into). But the investment of time and system resources required to defrag once a day doesn't pay off except in the most incremental and difficult-to-assess fashion. (One exception to this would be programs that defragment progressively and "silently," like the aforementioned Buzzsaw, which usually run when the system is idle.)

You should balance the act of defragmenting against other ameliorative actions that could be taken, such as buying a larger or faster hard disk drive or adding more memory. Drives are cheaper and larger than ever. Memory is cheaper than ever, too. Adding more memory or upgrading to a faster, higher-capacity hard disk drive will almost always yield a better performance improvement than anything you can do through software.



This post has been edited after it was sent. Last edited 21 January 2009 @ 12:32

MikroMake
AfterDawn Addict
21 January 2009 @ 12:49
There's more discussion of defragmentation over there, plus a few defragmentation programs:

http://keskustelu.afterdawn.com/thread_view.cfm/439972#2669579


Intel Haswell i7-4770K & Noctua NH-U14S
Asus Z87-A & Asus Strix GTX 970
Fractal Design Define R4 & Seasonic 660W P-660 (XP²)
Samsung 256GB 840 Pro SSD & Western Digital 1TB Caviar Black
Junior Member
21 January 2009 @ 14:33
Windows XP's own defrag software is useless. O&O is also good.
AfterDawn Addict

5 product reviews

21 January 2009 @ 15:20
Excellent reasoning.


Naivor
Newbie
22 January 2009 @ 18:48
Thanks to Yamaneko for pasting the text; in the end it turned out to be surprisingly interesting reading. Still, it's worth running a defrag once a year if you don't replace your computer and hard drive every year; after all, it's part of a machine's general maintenance.

Thanks to Make for the link; I'll go through it more carefully later when I have more time.