SSD about to die?

That makes your SSD at least 6 years old if you ran it 24/7 … The first commercial SSDs appeared in 2011.
Odd or impressive. :slight_smile:

Sorry I have been absent from the conversation. Consumer-grade SSDs are designed to work 8/7 (8-ish hours a day, 7 days a week) and run for about 5 years or so. Commercial-grade SSDs are designed to work 24/7 for about 7 years. Now, if your host is sitting there “idle” and not pushing I/Os to the drives, that time is not counted in the “timings”.
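As a rough sanity check, the two duty cycles described above can be turned into total power-on hours. These are the post’s rules of thumb, not datasheet numbers:

```python
# Back-of-the-envelope totals for the two duty cycles described above.
# The 8/7-for-5-years and 24/7-for-7-years figures are rough rules of
# thumb from the post, not manufacturer specs.

def rated_hours(hours_per_day: float, years: float) -> float:
    """Total powered-on hours over the drive's design life."""
    return hours_per_day * 365 * years

consumer = rated_hours(8, 5)     # consumer-grade: ~8 h/day for ~5 years
enterprise = rated_hours(24, 7)  # commercial-grade: 24/7 for ~7 years

print(f"Consumer-grade:   ~{consumer:,.0f} power-on hours")
print(f"Commercial-grade: ~{enterprise:,.0f} power-on hours")
```

So the commercial-grade rating works out to roughly four times the powered-on hours of the consumer-grade one.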

I still have several drives from late 2011/early 2012 running with no real issues today, all consumer-grade SSDs. As time goes on, drives keep getting better in both performance and lifespan.

Normally, when I find “storage issues” with SSDs, the cause is how the drive is being used, not the drive itself.

Thanks, from your local storage expert.

I checked the SSD usage log again. The last reported figure was 50,443 hours. I thought it was a little over 52,000 hours, so I was wrong there.

Those hours convert to about 5.7 years. Still, there seems to be some mistake in the automatic calculation, because I bought my first SSD (a cheap but fast OCZ) in November 2011. It was lightning fast, faster than the Intel I use now, but it died unexpectedly in less than a year.
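For what it’s worth, the conversion checks out if the counter only advances while the drive is powered on. A minimal sketch, using the 50,443-hour figure from the log:

```python
# Convert SMART power-on hours to years of continuous (24/7) operation.
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def hours_to_years(power_on_hours: float) -> float:
    return power_on_hours / HOURS_PER_YEAR

print(f"{hours_to_years(50443):.2f} years")  # ~5.76 years of powered-on time
```

Note this is powered-on time, not calendar time, which is why it can disagree with the purchase date.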

Since 1 November 2012 I have been using this Intel Speed Demon SSD without any problems. That means I have been using it for almost 4 years now.

I suspect, but am not sure, that the application which calculates the working hours takes into account how heavily the drive is used. But again, I do not know.

Nevertheless, the SSD is working fine and has no errors at the moment. At the time I bought this SSD, it was very expensive: around 460 euros for 240 GB. SSDlife tells me I can use it for another 8 years.

I think the moral of the story is that as long as you use and treat your SSD properly, it will last a long time. Don’t expect much from cheap SSDs: the OCZ was a cheap and very fast one, but it was damaged within a year.

Samsung is known for limited lifespans: shortly after the warranty expires, a device dies and has to be replaced. It is not always that way, but it can be. That said, I have a DVD recorder bought in 2008 and a television bought in 2013, both from Samsung, and they are working fine. I think you have to get lucky sometimes.

Chris

The company behind MacBooster, Advanced SystemCare, and MalwareFighter is called IObit and is located in China.

I have never heard of them cloning software or abusing intellectual property. When there is a translation to do, I am contacted by email with an .lng file attached, which I translate. When the translation is finished, I reply to the email with the finished translation as an attachment. I can honestly say that I know nothing of any illegal activity. In the content of those translations, I cannot find anything that points to illegal use of software or theft. Believe me, if I found anything like that, I certainly would not translate for them.

When you make translations for IObit, you are not paid but receive free licences for one year, so you can use their software for free. You are even allowed to sell licences to other people. I just give the licences away to people with problems. I do not ask for anything in return, even though some translations are quite long and time-consuming.

I am completely against stealing other people’s hard work and ideas. I also do not like illegal copying of software.

I am sure we all realise that, at the end of the day, a developer must get paid. We all have to make a living for ourselves and our families. All the software I use is paid for; when I need a second licence, I pay for it.

Thank you for bringing this up to me, I do appreciate it.

Chris

A “standard” Samsung EVO SSD has a warranty of 5 years.
A Samsung Pro SSD has a warranty of … 10 years!
I’m not sure the laptop won’t die before the drive does!

I’ve had a failure rate of about 35% on SSDs (but I was buying and abusing them from the early 2000s). Unlike spinning hard drives, they tended to fail without warning.

But, more recent purchases have not had this problem, and I’ve said to myself “no more spinning hard drives, ever!” - SSDs are just so much better.

I suspect that soon, one will not ask about “SSD failures” just as one doesn’t ask about “CPU failures” or “RAM failures” – reliability will be so good that it’s simply not a worry.

Since my original question, I followed the Firefox clue and discovered that… Firefox can slow down the whole computer under unknown conditions.

How do I know?
With Firefox not running, I searched the system files for a folder called Firefox, renamed it Old-Firefox, then launched Firefox, and all of a sudden I got my original speed back, both in browser tab scrolling and in Finder operations.
I suspect some web site I visit is responsible for that, but I do not know its name.

Quitting the Finder also helps speed up Firefox (both scrolling in a tab and downloads). Here too, a file is the culprit. I forget its name, but trashing that file (a plist file) restores the Finder to its original behaviour.

In both cases, I suspect that one or more events are fired (or a program is launched) and this slows down the whole process. Cmd-N followed by typing a file name usually swallows the first typed characters (one or more, sometimes far more) when the computer is running slowly: watch for that while doing it.

Finally, the more items you have in your Desktop folder, the slower Finder operations are executed. This is easy to demonstrate: if you have 200 to 500 (or more) items on your Desktop, select them all, put them in a folder, and move that folder to your Documents folder. The speed gain is impressive.
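The tidy-up described above can be scripted. A minimal sketch; the “Desktop Archive” name and the macOS default paths in the example are assumptions, adjust to taste:

```python
# Move every item from one folder into another, e.g. to sweep a crowded
# Desktop into a single archive folder so the Finder draws one icon
# instead of hundreds.
from pathlib import Path
import shutil

def archive_folder_items(src: Path, dst: Path) -> int:
    """Move everything in src into dst; returns the number of items moved."""
    dst.mkdir(parents=True, exist_ok=True)
    moved = 0
    for item in src.iterdir():
        shutil.move(str(item), str(dst / item.name))
        moved += 1
    return moved

# Example (macOS default paths, hypothetical archive folder name):
# n = archive_folder_items(Path.home() / "Desktop",
#                          Path.home() / "Documents" / "Desktop Archive")
```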

Did I tell you how slow the Finder is when you start to run short of room on your boot disk? Move data to an external hard disk until you have at least 50 GB of free space on the boot disk, shut down, reboot, and enjoy.

None of the above applies in hot conditions (heat slows the whole computer down very quickly).

About the Finder slowdown caused by a bad plist file: in the last five years or so, I have seen all kinds of trouble, from the Finder not launching at boot time (there is a preference in that file that controls this) to the first file in the Cmd-F results window not being displayed, and many other things I do not recall / do not want to cry thinking about. That file seems to be easy to corrupt (though I do not know by what). Another symptom: more than 20 seconds of writing when ejecting an SD card / memory card / external hard disk. Do you know how many items the Finder writes on an external mass storage device? A ton (or two)! I do not know who does that or why, either.

A Xojo slowdown on OS X? Easy: add some 1024 x 1024 icons to your project and watch its size on disk grow to 20 MB or more, depending on how many 1024 x 1024 icons you put in. That was with 2015r1; I suppose 2016r3 behaves the same, but I forget whether I checked.
The slowdown appears at save time. At load time it is certainly the same, but since I am just starting a session then, it does not bother me. At session end, or when saving before a run, it does bother me, because I am left waiting.
Deleting the 1024 x 1024 size from all icons in the project shrinks the project back down, and you regain the time the saves were losing.
Of course, this should not appear on Windows, since the largest icon there is 256 x 256 (I think I am right here).

Interesting article about Firefox and SSD

Firefox eating SSD

Yes Axel, an interesting article.

In the meantime, I discovered that if I remove all “ads” URLs from History, the slowdown is nearly unnoticeable.

Note: I do not know if this is really related to ad URLs or to the number of entries in History.

Also: if I rename the Firefox folder in one of the Library folders (so a brand new one is created), there is no slowdown, until at some point it comes back. Since passwords, bookmarks, etc. are in that folder, these tests take time and are boring, so I stopped torturing my brain with it.

I solved the problem in another way: I recently switched from Firefox to Opera …
For some months, Firefox had been eating RAM continuously (2-3 GB of RAM for this one app!).
I assumed it was an incompatible plugin, but was not able to isolate it.
Firefox was also crashing more than once a day.
Now Opera uses some 500 MB of RAM and works fine. It imported all my bookmarks and most of my passwords.

I realize I’m late to the conversation, but…

Not only should you never try to defragment a SSD, you quite literally cannot defragment a SSD. There’s no way for an application or even the OS to organize files in contiguous blocks on a SSD. SSDs have wear leveling algorithms which means the controller’s #1 concern is keeping the number of writes even across memory cells. The layout a SSD controller provides to the OS is a virtual one. There’s another level of indirection within the controller itself which means even if you think a file is in contiguous blocks in reality it’s just as likely to be spread across the drive.

If you use a hard disk defragmenter on a SSD not only do you waste some of your limited write cycles, but the blocks will end up just as fragmented as before.

Because access is truly random (or so close as to be irrelevant) there’s no speed issue. The controller can grab blocks from the beginning, middle, and end of the flash memory address range and hand them off to the OS faster than any hard drive can read three blocks sequentially.
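To make that “another level of indirection” concrete, here is a toy sketch of a flash translation layer (FTL). It is heavily simplified (no erase blocks, no wear counters, naive free-page reuse), just enough to show that logically contiguous blocks need not be physically contiguous:

```python
# Toy flash translation layer: maps the logical blocks the OS sees onto
# physical flash pages. Every write lands on a fresh page (wear leveling),
# even when the OS rewrites the same logical block.

class ToyFTL:
    def __init__(self, physical_pages: int):
        self.mapping = {}                        # logical block -> physical page
        self.free = list(range(physical_pages))  # pool of erased pages

    def write(self, logical_block: int) -> int:
        page = self.free.pop(0)                  # always take a fresh page
        old = self.mapping.get(logical_block)
        if old is not None:
            self.free.append(old)                # old page queued for reuse
        self.mapping[logical_block] = page
        return page

ftl = ToyFTL(physical_pages=8)
# The OS writes logical blocks 0, 1, 2 "contiguously", then rewrites block 1:
placements = [ftl.write(0), ftl.write(1), ftl.write(2), ftl.write(1)]
print(ftl.mapping)  # logical 0..2 no longer map to physically contiguous pages
```

After the rewrite of block 1, the logical run 0-1-2 lives on physical pages 0, 3, 2: already scattered after four writes, which is why defragmenting through this mapping is pointless.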

As for lifespan, I think you’re OK unless you’re writing hundreds of GBs per day 365 days a year: http://techreport.com/review/27909/the-ssd-endurance-experiment-theyre-all-dead

[quote=304159:@Daniel Taylor]I realize I’m late to the conversation, but…

Not only should you never try to defragment a SSD, you quite literally cannot defragment a SSD. [/quote]

I think we’ve had this come up before, but it’s not quite so clear-cut when you are talking about defragmenting a virtual machine (VM) on a SSD. It may be useful to defragment the VM in order to free up contiguous block space so you can shrink its VM container.

No.

“Contiguous blocks” is a concept that makes sense with hard disks, but not with SSDs.

Maybe this explanation helps:

Data on a hard disk are stored on spinning platters, used on both sides. Files are written starting from the first available block. After a bunch of deletes and new files, this leads to file fragmentation: new files start at the first available block and continue wherever free blocks can be found. *
File fragmentation produces latency because a (traditional) hard disk takes time to move its heads from one block to another. Imagine the time you would waste reading a book if you had to skip pages following a scheme like: read page 1, then go to page 15, then page 26, etc. for Chapter 1; then for Chapter 2, start at page 2, then page 8, etc. Fortunately, there is almost never any fragmentation in printed books. :wink:
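The book analogy can be put into numbers. A back-of-the-envelope model, with assumed figures (~10 ms per seek, ~100 MB/s sequential transfer, one extra seek per fragment), shows that on a fragmented spinning disk the seeks dominate, not the data transfer:

```python
# Rough model: each fragment costs roughly one head seek plus its share
# of the sequential transfer. The figures are illustrative assumptions.
SEEK_MS = 10.0             # assumed average seek + rotational latency
TRANSFER_MB_PER_S = 100.0  # assumed sequential throughput

def read_time_ms(file_mb: float, fragments: int) -> float:
    transfer_ms = file_mb / TRANSFER_MB_PER_S * 1000
    return fragments * SEEK_MS + transfer_ms

print(read_time_ms(100, 1))    # contiguous 100 MB file: ~1 second
print(read_time_ms(100, 200))  # same file in 200 pieces: ~3 seconds
```

On a SSD the per-fragment seek term is effectively zero, which is the point made below.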

Data on a SSD are stored in special built-in memory. How are they stored in a SSD?
Since there is no moving hardware and there are no read/write heads, how can there be a latency period to get all of a file’s data from a SSD?

Solid-state_drive @ wikipedia.

  • Long ago, fragmentation appeared within Word (from Microsoft) files themselves. After each save, the new data were stored after the previous version’s data. The only way to get a file that was an exact representation of the document in the window was to use Save As.
    I mean: each time you saved, the new (or deleted) data were appended to the end of the file. If you loaded that file as text, you would see just that: garbage, with your text appearing out of order, your deleted text still there, and so on. That was 20/25 years ago; I have not used that software since the mid-90s, so I cannot speak about recent versions.

The main thing is not to run a SSD too full, at least with most of the current generation of drives. Once it’s more than about 2/3 full, a lot more work goes into allocating space and shuffling things around. If you can’t get it down to about 60% full, then you probably need to upgrade to a bigger drive.
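A quick way to check that rule of thumb on your own machine; the 2/3 threshold is the figure from the post above:

```python
# Report how full a volume is, using only the standard library.
import shutil

def over_threshold(used: int, total: int, threshold: float = 2 / 3) -> bool:
    """True once the drive crosses the ~2/3-full mark."""
    return used / total > threshold

usage = shutil.disk_usage("/")  # the boot volume
print(f"{usage.used / usage.total:.0%} full; "
      f"time for a bigger drive: {over_threshold(usage.used, usage.total)}")
```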

True, but the VM’s virtual hard drive is in fact a file on that SSD, which itself could be fragmented. Given the extremely fast access time of a SSD, though, I don’t think defragmenting a virtual disk would do any appreciable good.

[quote=311129:@Markus Winter]No.

“Contiguous blocks” is a concept that makes sense with hard disks, but not with SSDs.[/quote]

Defragmenting a VM drive file…from within the VM…will never result in contiguous blocks on a SSD. But it will result in the blocks having a contiguous address space as viewed by the VM, after which the VM can shrink the drive file. This is the benefit Michael points out.

I don’t believe there would be any noticeable performance benefit, as Michel points out.

I’m not sure this is necessary or beneficial any more, though. I remember doing it once, years ago, to recover space with a VMware drive that was basically one big file. But now? Parallels VM drive files are macOS packages. Dig into one and you find the drive is split into separate files of up to 2 GB each. Looking at the sizes, and considering that Parallels can report free space and shrink the drive at any time, I think Parallels can release free blocks regardless of where in the address space they reside. I’m sure VMware has similar technology today.

Indirection into indirection into indirection…and probably another one or two levels of indirection. But it works.

Are you using Firefox or Chrome? Then you should read this:

https://www.servethehome.com/firefox-is-eating-your-ssd-here-is-how-to-fix-it/

Firefox is eating your SSD – here is how to fix it
If you are a user of Firefox, we have a must-change setting. Today’s modern multi-core processor systems and higher quantities of RAM allow users to open multiple Firefox tabs and windows simultaneously. This can have an unintended effect on SSDs, as session-store data can be written to NAND constantly. This issue is being discussed in an STH forum thread where you can follow the discussion.

Observing the Issue: Heavy SSD Writes from Firefox

Purely by chance, I fired up a free copy of SSDLife on two consecutive days where I haven’t really used my workstation for anything other than email and browsing. For those of you unfamiliar with this tool, it simply reports estimated lifetime for the attached SSD and it also shows the amount of data read and written.

In my case, SSDLife notified me that 12GB was written to the SSD in one day. Since I didn’t recall downloading any huge files over the previous day or visiting any new sites that could’ve resulted in bringing down a lot of new content to the cache, this puzzled me. I monitored these stats over the next couple of weeks and this behavior stayed consistent. Even if the workstation was left idle with nothing running on it but a few browser windows, it would invariably write at least 10GB per day to the SSD.

[Image: Firefox with 32 GB written in a single day]

To find out what’s going on, I fired up Resource Monitor and looked at disk utilization.

Firefox Disk Writes

At the very top of the list was Firefox, writing tirelessly at anywhere between 300K and 2MB per second to a file called “recovery.js”. Researching revealed that this is Firefox’s session backup file that is used to restore your browser sessions in case of a browser or an OS crash. That is extremely useful functionality. I was aware of the fact that Firefox had this feature, but I had no idea that session information was so heavy!

Researching the issue a bit more over the next day, I discovered that things are worse than I originally thought and “recovery.js” isn’t the only file involved. In case someone wants to replicate, here’s what I did this morning:

I reset browser.sessionstore.interval to 15000 and then got rid of all my currently open FF windows.
I opened a single window with just Google running in it, left it sitting for a couple of minutes, and then closed it.
I started the browser again and on this final restart the recovery.js file was only 5KB in size, down from around 900KB before.
Next, I opened a bunch of random reviews for Samsung 850 pro and Samsung Galaxy S7 in two separate windows. Simply searched for “samsung 850 pro review” and “samsung galaxy s7 review” and then went down the list of results opening them in new tabs.
I opened a 3rd window and created a bunch of tabs showing front pages for various news sites.
I launched Process Monitor and configured it to track recovery.js and cookie* files:
Firefox Disk Activity Process Monitor

I went to File->Capture Events and disabled it. Cleared all events that were currently showing up.
I went back to File->Capture Events and re-enabled it. Left the three FireFox windows sitting idle for 45 minutes while I was using Chrome instead.
Then I went to Tools->File Summary to get overall stats.
Firefox managed to write 1.1GB to disk with the vast majority of data going into cookie* files.
Firefox Disk Activity File Stats

Note that recovery.js managed to accumulate only about 1.3MB of writes.

I went back to one of the Firefox windows and opened my outlook.com mailbox. I cleared all events in Process Monitor and restarted the capture. Again, I left all Firefox windows sitting idle, but only for ~10 minutes. This time recovery.js was at ~1.5MB, and it took only about a quarter of the time to get there. Cookie* files had a ton of data written to them, as before.

Firefox Disk Activity File Stats 2

Depending on what you’ve got open in your tabs, Firefox could be dumping tons of data into recovery.js, cookie* files, or both. Running at 1.1GB for every 45 minutes, you’re looking at ~35GB/day written to your SSD if you leave your machine running. And at least in my case this wasn’t even the worst example of how much data could be going into recovery.js. In my original tests I clocked Firefox at 2MB/s writing to this file and the writing thread never went dead always showing up on the top of the list in Resource Monitor.

The Easy Fix

After some digging, I found out that this behavior is controlled by a parameter that you can access by typing “about:config” in the address bar. The parameter is called browser.sessionstore.interval.

It is set to 15 seconds by default. In my case, I reset it to a more sane (at least for me) 30 minutes. Since then, I’m only seeing about 2GB written to disk when my workstation is left idle, which still feels like a lot but is 5 times less than before.
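To see why the interval matters so much, here is a rough model of the session-store writes alone: one snapshot written per interval. The ~900 KB snapshot size is an assumption borrowed from the recovery.js size mentioned earlier, and the heavy cookie* traffic is not modeled, which is why real savings are smaller than the model suggests:

```python
# Session-store write volume as a function of the save interval.
def daily_writes_mb(snapshot_kb: float, interval_s: float) -> float:
    snapshots_per_day = 24 * 3600 / interval_s
    return snapshots_per_day * snapshot_kb / 1024

print(f"15 s interval:   ~{daily_writes_mb(900, 15):,.0f} MB/day")
print(f"30 min interval: ~{daily_writes_mb(900, 1800):,.0f} MB/day")
```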

The bottom line is that if you have a lower-capacity, consumer-level SSD in some of your machines, you may want to check and tweak your Firefox config. Those drives can be rated for about 20 GB of writes per day, and Firefox alone might be using more than half of that. This is especially true if you are like me and have several browser windows open at all times, each with numerous tabs. Changing this parameter may even help with normal HDDs: your machine will feel faster if it doesn’t have to constantly write this session info. We have seen in the STH forum thread that the content open in the browser has a major impact on writes, as do the number of open windows and tabs. If you are using Firefox and a lower-write-endurance SSD, you should check this immediately.

If you are wondering how this compares to real-world enterprise SSD usage, STH did a study buying hundreds of used enterprise/datacenter SSDs off eBay and checking SMART data for actual DWPD usage. See: Used enterprise SSDs: Dissecting our production SSD population

Update 1: We are testing other browsers. Currently in the middle of a Chrome Version 52.0.2743.116 m test. We have been able to see a pace of over 24GB/ day of writes on this machine (see here.)

Take care

MfG (kind regards)

Markus

Sent from my iPad

Thank you Markus.

[quote=311224:@Markus Winter]Are you using Firefox or Chrome? Then you should read this:

https://www.servethehome.com/firefox-is-eating-your-ssd-here-is-how-to-fix-it/

Firefox is eating your SSD – here is how to fix it[/quote]

We talked about it here 4 weeks ago :slight_smile: