In my previous post I focused on how Firefox compares against itself with multiple content processes. In this post I’d like to take a look at how Firefox compares to other browsers.
For this task I automated as much as I could; the code is available as the atsy project on GitHub. My goal here is to allow others to repeat my work, point out flaws, push fixes, etc. I’d love for this to be a standardized test for comparing browsers on a fixed set of pages.
As with my previous measurements, I’m going with:
total_memory = RSS(parent) + sum(USS(children))
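For reference, here’s roughly what that metric works out to in code. This is a hedged sketch using psutil, not the actual atsy implementation; it assumes the child processes can be discovered from the parent PID.

```python
# Minimal sketch of total_memory = RSS(parent) + sum(USS(children)).
# Not the atsy code itself; assumes psutil can see all the child processes.
import psutil

def total_memory(parent_pid):
    parent = psutil.Process(parent_pid)
    total = parent.memory_info().rss  # RSS of the parent process
    for child in parent.children(recursive=True):
        # USS is the memory unique to each child; shared mappings
        # (libxul and friends) are assumed to be covered by the parent's RSS.
        total += child.memory_full_info().uss
    return total
```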
An aside on the state of WebDriver and my hacky workarounds
I had a dream of automating the tests across browsers using the WebDriver framework; alas, trying to do anything with tabs in WebDriver across browsers and platforms is a fruitless endeavor. Chrome is actually the only one I could get somewhat working with WebDriver. When the various WebDriver implementations get fixed we can make a cleaner test available.
Luckily Chrome and Firefox are completely automated. I had to do some trickery to get Chrome working and filed a bug, but it doesn’t sound like they’re interested in fixing it. I also had to do some trickery to get Firefox to work (I ended up using our Marionette framework directly instead); there are some bugs there too, but not much traction either.
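For the Chrome side, the “trickery” is roughly along these lines. This is a hedged sketch with Selenium rather than the exact atsy code, and the URLs are placeholders; since WebDriver has no portable new-tab command, it falls back to `window.open` and switching window handles.

```python
# Rough sketch of opening pages in new tabs via Selenium/ChromeDriver.
# Placeholder URLs; may require popup blocking to be disabled in the browser.
from selenium import webdriver

driver = webdriver.Chrome()
urls = ["http://localhost:8000/page1.html", "http://localhost:8000/page2.html"]

driver.get(urls[0])  # first page loads in the initial tab
for url in urls[1:]:
    # No cross-browser "open a new tab" command exists, so use window.open
    # and then switch to the newest window handle.
    driver.execute_script("window.open(arguments[0], '_blank');", url)
    driver.switch_to.window(driver.window_handles[-1])
```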
IE and Safari are semi-automated, in that I launch the browser for you, you click a button, and then hit enter when it’s done. Safari’s WebDriver extension is completely broken and nobody seems to care. IE’s WebDriver completely failed at tabs (among other things); I’m not sure where to file a bug for that.
Edge is mostly manual; its WebDriver implementation doesn’t support what I need (yet), but it’s new so I’ll give it a pass. Also you can’t just launch the browser with a file path, so there’s that. Note that I was stuck running it in a VM from modern.ie that was already pretty old (they don’t have a newer one). I’d prefer not to do that, but I couldn’t upgrade my Windows 7 machine to 10 because Microsoft, Linux, bootloaders, and sadness.
I didn’t test Opera, sorry. It uses Blink, so hopefully the Chrome coverage is good enough.
The big picture
The numbers
| OS | Browser | Version | RSS + USS |
|---|---|---|---|
| OSX 10.10.5 | Chrome Canary | 50.0.2627.0 | 1,354 MiB |
| OSX 10.10.5 | Firefox Nightly (e10s) | 46.0a1 20160122030244 | 1,065 MiB |
| OSX 10.10.5 | Safari | 9.0.3 (10601.4.4) | 451 MiB |
| Ubuntu 14.04 | Google Chrome Unstable | 49.0.2618.8 dev (64-bit) | 944 MiB |
| Ubuntu 14.04 | Firefox Nightly (e10s) | 46.0a1 20160122030244 (64-bit) | 525 MiB |
| Windows 7 | Chrome Canary | 50.0.2631.0 canary (64-bit) | 1,132 MiB |
| Windows 7 | Firefox Nightly (e10s) | 47.0a1 20160126030244 (64-bit) | 512 MiB |
| Windows 7 | IE | 11.0.9600.18163 | 523 MiB |
| Windows 10 | Edge | 20.10240.16384.0 | 795 MiB |
So yeah, Chrome’s using about 2X the memory of Firefox on Windows and Linux. Let’s just read that again. That gives us a bit of breathing room.
It needs to be noted that Chrome is essentially doing one process per page in this test. In theory its process count is configurable and I would have tried limiting it, but as far as I can tell they’ve let that feature decay and it no longer works. I should also note that Chrome has its own version of MemShrink, Project TRIM, so memory usage is an area they’re actively working on.
Safari does creepily well. We could attribute this to close OS integration, but I would guess I’ve missed some processes. Taken at face value, Safari is using 1/3 the memory of Chrome and 1/2 the memory of Firefox. Even if I’m miscounting, I’d guess it still outperforms both browsers.
IE was actually on par with Firefox, which I found impressive. Edge is using about 50% more memory than IE, but I wouldn’t read too much into that as I’m comparing IE on Windows 7 to Edge on an outdated Windows 10 VM.
No Firefox on Ubuntu?
Why does Firefox on OS X use 2x the memory of Firefox on Windows?
This data includes both Chrome and Firefox on Ubuntu.
Comparing memory usage across operating systems doesn’t make as much sense, but I agree that the numbers on OSX are rather high. I don’t currently have an explanation for that.
I always find it interesting how Firefox is well ahead in synthetic benchmarks, but in the real world I see its memory climb very high indeed.
What also worries me is the slow memory usage creep that is clearly visible on AWSY.
It would also be interesting to do those tests with popular extensions such as adblockplus, noscript, ublock, umatrix, etc.
Also on some websites that are “bad offenders” when it comes to memory usage. Google Docs can hold on to more than 200 MB for some reason: 100+ MB for the main tab and another 100+ MB for a web worker.
I’ve also noticed that memory usage depends on my graphics settings. When I force hardware-accelerated layers on Linux, I get a large amount (sometimes 40%+) of heap-unclassified. Digging deeper with DMD reveals that most of that memory is allocated in the Intel driver, probably for GL-related things. So while it’s not really Firefox itself allocating that memory, it still contributes to the whole.
I also run very long-lived Firefox sessions, up to a week long. When it reaches ~800 MB for the parent and ~1.5 GB for the content process, I usually restart it. So long-running sessions would also be interesting to see.
Anyway, thanks for doing the research, I’m eager to read more 🙂
Thank you for the thoughtful comments.
Fully zoomed out it looks pretty drastic; if we zoom in to just the last year (2015–2016) it ends up being more of a 20 MiB increase. At that amount it feels like we’re doing okay given the amount of functionality that has been added to Firefox. I agree though, I’d like to see less growth!
This is a pretty neat idea; I wonder if we could partner w/ the AMO team to set something up. Something like: for every new build of a popular add-on we run it through AWSY on the latest release and compare memory usage against the last build.
It might be worth filing a bug about this and adding [MemShrink] to the whiteboard; it’s possible we’re doing something wrong. Attaching the DMD reports and STR would be super helpful.

We’ve been seeing a fair number of reports along these lines, which is a bit unsettling. If you happen to have DMD enabled, filing a bug with memory reports would be helpful.
I’ve noticed that memory usage of Firefox on my Debian machine used to get high after some downloads, to the point that I had to restart it to lower its memory usage. Is this something that has already been noticed?
Apart from that, e10s works well with Iceweasel 46.0a2 on 64-bit Debian Sid 🙂
FWIW I am interested in solving the Firefox WebDriver bugs, but at the moment it’s behind treeherder auto-classification on my priority list. Once I have an auto-classification UI that works well enough for the sheriffs to use I’ll have more time to work on wires features. (There are also some specification issues that need to be solved; e.g. when I looked at implementing the actions spec it turned out that the text wasn’t something I could work from.)
Completely understood! I was going to see if I could pitch in, noticed it was all in Rust, and moved on 🙁 Maybe next quarter I can finally fulfill my “do something in Rust” goal by pitching in on wires.
The issues described with WebDriver are something we worked around with Leadfoot ( https://github.com/theintern/leadfoot )… basically we feature-test against all known defects before we run our tests.
Anyway, interesting metrics and research, it’s very much appreciated.
What about firefox with e10s enabled?
This is with e10s enabled.
I’d be interested to see how Vivaldi stacks up. It’s obviously going to share some traits with Chrome, but have you seen whether one or the other does a better job with what they add on top of the underlying engine?
I haven’t tested Vivaldi (nor Opera), it might be worth checking out after they’re out of beta.
It was pointed out to me by a Facebook friend that RSS+USS is kinda meaningless (and double-counts most of USS). PSS or USS would be in some ways the most meaningful (mostly PSS, since libs shared by the Master and Content processes (e10s) would get excluded from USS, as would shmem buffers shared with GFX processes, or between Master and Content). Also, for Chrome, their shared libs will be open by the UI and all the child processes, so PSS makes total sense.
Can you give the PSS graph? Or if you like, separate RSS and PSS graphs.
Your friend is describing RSS + RSS, which we don’t do. USS is memory unique to the process, not shared.
We are already doing USS for the children and RSS for the parent. The presumption here is that the shared portion (libxul and friends, shmem) is accounted for in the RSS of the parent process and covers the shared portion of the children. This may be more true for Firefox than Chrome.
I can’t; PSS is not available on OSX and Windows. As I said, we’re not summing RSS.
PSS is also interesting (well not interesting) in that if I open libgigantor (100MiB) and it’s already open by the system, then my PSS is 50MiB. But if I shut down, libgigantor is still loaded by the system, so there’s no change in observable memory usage for the overall machine.
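To make the RSS/USS/PSS distinction concrete, here’s a hedged, Linux-only sketch (psutil exposes pss there, but not on OS X or Windows, which is part of why the post doesn’t use it) that computes both accountings for a parent process and its children:

```python
# Hedged, Linux-only sketch: compare RSS(parent) + sum(USS(children))
# against sum(PSS(all processes)). psutil only reports pss on Linux.
import psutil

def compare_accountings(parent_pid):
    parent = psutil.Process(parent_pid)
    procs = [parent] + parent.children(recursive=True)

    # The metric used in the post: parent RSS plus each child's unique memory.
    rss_plus_uss = parent.memory_info().rss + sum(
        p.memory_full_info().uss for p in procs[1:])

    # PSS splits each shared page evenly across the processes mapping it,
    # so summing it over all processes counts shared libs exactly once.
    pss_total = sum(p.memory_full_info().pss for p in procs)

    return rss_plus_uss, pss_total
```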
It wasn’t clear you were doing RSS(Master)+USS(Content) (I glossed over that and read the friend’s comment first); that makes more sense, though. RSS will include shared libs like X which, as you point out for PSS, aren’t released if we exit. It does amortize them over all the processes, however (for libxul, PSS would divide it among all the children and the master, though RSS(Master) + USS(Content) will produce the same result, at least regarding libxul).
In any case, this is Good Stuff. What’s the content set loaded for each look like? How many tabs in use when measured? What are the relative slopes for increasing tabs among different browsers? All interesting questions…
For this test I used the first 30 pages of TP5, loading each page in a new tab and waiting 10 seconds before loading the next one, then waited 60 seconds after the last load before measuring memory.
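In pseudocode the procedure looks roughly like the sketch below; `load_page_in_new_tab` and `measure_total_memory` are hypothetical stand-ins for the atsy machinery, not its actual API.

```python
import time

# Rough outline of the measurement loop described above; the browser
# methods here are hypothetical placeholders, not the real atsy interface.
def run_test(browser, tp5_pages):
    for url in tp5_pages[:30]:
        browser.load_page_in_new_tab(url)
        time.sleep(10)   # let the page settle before loading the next one
    time.sleep(60)       # give the browser a chance to quiesce
    return browser.measure_total_memory()  # RSS(parent) + sum(USS(children))
```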
Measuring memory as tabs increase is a bit finicky as each site can have a different memory profile. We’d have to do some trickery with loading the same page from multiple domains to get a sane result.
For now, there are workarounds.
An update would be interesting.