Over the past year, the Fission MemShrink project has been working tirelessly to reduce the memory overhead of Firefox. The goal is to allow us to start spinning up more processes while still maintaining a reasonable memory footprint. I’m happy to announce that we’ve seen the fruits of this labor: as of version 66 we’re doubling the default number of content processes from 4 to 8.
Doubling the number of content processes is the logical extension of the e10s-multi project. Back when that project wrapped up we chose to limit the default number of processes to 4 in order to balance the benefits of multiple content processes — fewer crashes, better site isolation, improved performance when loading multiple pages — with the impact on memory usage for our users.
Our telemetry has looked really good: if we compare beta 59 (roughly when this project started) with beta 66, where we decided to let the increase ship to our release users, we see virtually unchanged total memory usage at the 25th, 50th, and 75th percentiles and a modest 9% increase at the 95th percentile on Windows 64-bit.
Doubling the number of content processes and not seeing a huge jump is quite impressive. Even on our worst-case-scenario stress test — AWSY which loads 100 pages in 30 tabs, repeated 3 times — we only saw a 6% increase in memory usage when turning on 8 content processes when compared to when we started the project.
This is a huge accomplishment and I’m very proud of the loose-knit team of contributors who have done some phenomenal feats to get us to this point. There have been some big wins, but really it’s the myriad of minor improvements that compounded into a large impact. This has ranged from delay-loading browser JavaScript code until it’s needed (or not at all), to low-level changes to packing C++ data structures more efficiently, to large system-wide changes to how we generate bindings that glue together our JavaScript and C++ code. You can read more about the background of this project and many of the changes in our initial newsletter and the follow-up.
While I’m pleased with where we are now, we still have a way to go to get our overhead down even further. Fear not, for we have quite a few changes in the pipeline including a fork server to help further reduce memory usage on Linux and macOS, work to share font data between processes, and work to share more CSS data between processes. In addition to reducing overhead we now have a tab unloading feature in Nightly 67 that will proactively unload tabs when it looks like you’re about to run out of memory. So far the results in reducing the number of out-of-memory crashes are looking really good and we’re hoping to get that released to a wider audience in the near future.
This is a continuation of my Are They Slim Yet series. For background see my previous installment.
Firefox’s upcoming release, 57, has a huge focus on performance. We’ve quantum-ed all the things, but we haven’t really talked about memory usage, which is something that often falls by the wayside in the pursuit of performance. Luckily, since we brought AWSY in tree it’s been pretty easy to track memory usage and regressions, even on separate development branches. The Stylo team was a big user of this and it shows: we flipped the switch to enable Stylo by default around the 7th and you can see a fairly large regression, but by the 16th it was mostly gone:
Hopefully I’ve convinced you we’ve put a lot of work into performance, now let’s see how we’re doing memory-wise compared to other browsers.
The methodology for the test is the same as previous runs: I used the ATSY project to load 30 pages and measure memory usage of the various processes that each browser spawns during that time.
The results
Edge has the highest memory usage on Windows. Chrome comes in with 1.4X the memory usage of Firefox 64-bit on Windows and about 2X Firefox on Linux. On macOS Safari is now by far the worst offender in memory usage; Chrome and Firefox are about even, with Firefox’s memory usage having gone up a fair amount since the last time I measured.
Overall I’m pretty happy with where we’re at, but now that our big performance push is over I’d like to see us focus more on dropping memory usage so we can start pushing up the number of content processes. I’d also like to take a closer look into what’s going on on macOS as that’s been our biggest regression.
Note: I had to run the test for Safari manually again, they seem to have made some changes that cause all of the pages from my test to be loaded in the same content process.
Although not as active, we still have a MemShrink group at Mozilla. We’ve transitioned from an all-out assault on memory usage to mostly just attempting to keep memory usage sane. I wasn’t around when things started, but when I joined there were at least seven people actively attending our MemShrink triage meetings; now we’re down to two. Some members have moved on, others have transitioned through, but really it comes down to the fact that we did a pretty good job of getting memory under control, and with limited resources there were more important tasks to look at.
Fear not, we haven’t abandoned the project. We’re just in a bit of a lull. With big pushes for multiple content processes and the Quantum project I think we’re going to see the need to ramp up MemShrink again. In the meantime rest assured we’re still chugging along, just at a slower pace.
Big Ticket Items – 2014
Three years ago Nicholas Nethercote wrote a blog post celebrating MemShrink’s 3rd birthday and put together a list of important work we saw coming up. Let’s see how those projects went.
The devtools team added a memory tab. Dan Callahan and Nick Fitzgerald put together a nice writeup of the new memory tool. There’s more work that can be done, but most of the devtools team’s focus is on performance profiling these days. It sounds like it could become a priority again next year.
GC Arena Fragmentation
Jon Coppeard did some heroic work (64 patches!) and got compacting GC landed. Initial measurements showed an 8% reduction in JS memory usage, which is quite impressive. You can read more details in a blog post by Jon about [compacting garbage collection in SpiderMonkey](https://hacks.mozilla.org/2015/07/compacting-garbage-collection-in-spidermonkey/).
Tarako
We actually shipped the 128MB phone! It never took off in its target market and eventually the entire FirefoxOS project was shut down, but I’m still super impressed we achieved such a feat.
We had hopes that upgrading our memory allocator would help as well, but we’ve since abandoned that effort.
Big Ticket Items – 2017
That was a nice trip down memory lane, but now we need to look forward. Let’s take a look at some of what I see as our next big ticket items.
Reduce JS memory usage and increase sharing of data across processes
The JavaScript engine is probably our biggest target coming up for reducing memory usage, particularly with multiple content processes enabled. There’s some impressive work going on to have our core JavaScript modules share a single global. Initial testing has shown some pretty big wins for this.
In general we need to think about ways to share more data across processes.
Improved devtools for memory analysis
The devtools team did a great job with their initial iteration of memory profiling, but it would be great to see a more refined UI and tie in information from our cycle collector on the C++ side.
Expanded testing
I’d like to get the ATSY project automated so that we can get consistent numbers on how we fare against other browsers. This has been a boon for JavaScript performance, I can see it being a good motivator for improving memory usage as well. An updated test corpus that uses modern web features would be a big improvement. Making it easier to track the memory impact of WebExtensions would also be great.
Conclusions
We ticked off 4 out of 5 of our big ticket items. 64-bit builds on Windows by default are just around the corner, so let’s just go ahead and count that as 5 out of 5. I see plenty of future challenges for the MemShrink group, particularly once the dust settles from enabling multiple content processes and the various Quantum projects.
Let me know if I missed any big improvements, I’m sure there are plenty!
Aside from some pangs of nostalgia, it is with great pleasure that I announce the retirement of areweslimyet.com, the areweslimyet github project, and its associated infrastructure (a sad computer in Mountain View under dvander’s desk and a possibly less sad computer running the website that’s owned by the former maintainer).
You can build your own graph from Perfherder. Just choose ‘+ Add test data’, ‘awsy’ for the framework and the tests and platforms you care about.
Wait, why?
I spent a few years maintaining and updating AWSY, and some folks spent a fair amount of time on it before me. It was an ad hoc system that had bits and pieces bolted on over time. I brought it into the modern age, migrating it from the mozmill framework over to marionette, added support for e10s, and cleaned up some old, slightly busted code. I tried to reuse packages developed by Mozilla to make things a bit easier (mozdownload and friends).
This was all pretty good, but things kept breaking. We weren’t in-tree, so breaking changes to marionette, mozdownload, etc. would cause failures for us, and it would take a while to figure out what happened. Sometimes the hard drive filled up. Sometimes the status file would get corrupted due to a poorly timed shutdown. It just required a lot of maintenance for a project with nobody dedicated to it.
The final straw was the retirement of archive.mozilla.org for what we call tinderbox builds, builds that are done more or less per push. This completely broke AWSY back in January and we decided it was just better to give in and go in-tree.
So is this a good thing?
It is a great thing. We’ve gone from 18,000 lines of code to 1,000 lines of code. That is not a typo. We now run on linux64, win32, and win64. Mac is coming soon. We turned on e10s. We have results on mozilla-inbound, autoland, try, mozilla-central, and mozilla-beta. We’re going to have automated crash analysis soon. We were able to use the project to give the greenlight for the e10s-multi project on memory usage.
Oh and guess what? Developers can run AWSY locally via mach. That’s right, try this out:
mach awsy-test --quick
Big thanks go out to Paul Yang and Bob Clary who pulled all this together — all I did was do a quick draft of an awsy-lite implementation — they did the heavy lifting getting it in tree, integrated with task cluster, and integrated with mach.
What’s next?
Now that we’re in-tree we can easily add new tests. Imagine getting data points for running the AWSY test with a specific add-on enabled to see if it regresses memory across revisions. And anyone can do this, no crazy local setup. Just mach awsy-test.
Goal: Replace Gecko’s XML parser, libexpat, with a Rust-based XML parser
Firefox currently uses an old, trimmed down, and slightly modified version of libexpat, a library written in C, to support parsing of XML documents. These documents include plain old XML on the web, XSLT documents, SVG images, XHTML documents, RDF, and our own XUL UI format. While it’s served its purpose well, it has long been unmaintained and has been a source of many security vulnerabilities, a few of which I’ve had the pleasure of looking into. It’s 13,000 lines of rather hard-to-understand code, and tracing through everything when looking into security vulnerabilities can take days at a time.
It’s time for a change. I’d like us to switch over to a Rust-based XML parser to help improve our memory safety. We’ve done this already with at least two other projects: an mp4 parser, and a url parser. This seems to fit well into that mold: a standalone component with past security issues that can be easily swapped out.
There have been suggestions of adding full XML 1.0 v5 support, there’s a six-year-old proposal to rewrite our XML stack which doesn’t include replacing expat, and there’s talk of the latest and greatest, but not quite fully specced, XML5. These are all interesting projects, but they’re large efforts. I’d like to see us make a reasonable change now.
What do we want?
In order to avoid scope creep and actually implement something in the short term I just want a library we can drop in that has parity with the features of libexpat that we currently use. That means:
A streaming, sax-like interface that generates events as we feed it a stream of data
Support for DTDs and external entities
XML 1.0 v4 (possibly v5) support
A UTF-16 interface. This isn’t a firm requirement; we could convert from UTF-16 -> UTF-8 -> UTF-16, but that’s clearly sub-optimal
As fast as expat with a low memory footprint
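To make the first requirement concrete, here's a minimal sketch of what a streaming, SAX-like interface looks like from the consumer's side. It uses Python's stdlib `xml.sax` purely for illustration (the proposal is for a Rust library; none of these names are the proposed API):

```python
import xml.sax

class EventCollector(xml.sax.ContentHandler):
    """Records parse events as they stream in, rather than building a tree."""
    def __init__(self):
        self.events = []
    def startElement(self, name, attrs):
        self.events.append(("start", name, dict(attrs)))
    def endElement(self, name):
        self.events.append(("end", name))
    def characters(self, content):
        if content.strip():
            self.events.append(("text", content))

parser = xml.sax.make_parser()
handler = EventCollector()
parser.setContentHandler(handler)

# Feed the document incrementally, as a browser would while downloading.
for chunk in (b"<root><item id='1'>hi", b"</item></root>"):
    parser.feed(chunk)
parser.close()

print(handler.events[0], handler.events[-1])
# ('start', 'root', {}) ('end', 'root')
```

The point is that the parser never builds a tree: the caller feeds it chunks as they arrive off the network and reacts to events, which is what keeps the memory footprint low.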
Why do we need UTF-16?
Short answer: That’s how our current XML parser stack works.
Slightly longer answer: In Firefox libexpat is wrapped by nsExpatDriver which implements nsITokenizer. nsITokenizer uses nsScanner which exposes the data it wraps as UTF-16 and takes in nsAString, which as you may have guessed is a wide string. It can also read in c-strings, but internally it performs a character conversion to UTF-16. On the other side all tokenized data is emitted as UTF-16 so all consumers would need to be updated as well. This extends further out, but hopefully that’s enough to explain that for a drop-in replacement it should support UTF-16.
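As a small illustration of why the UTF-16 → UTF-8 → UTF-16 fallback mentioned earlier is sub-optimal, this sketch (Python just for brevity; the real code paths are C++) shows the two full transcoding passes a UTF-8-only parser would force on every document:

```python
# A document as it might arrive from the scanner layer: UTF-16 code units
# (little-endian here just to have concrete bytes to look at).
doc_utf16 = "<root>héllo \U0001F600</root>".encode("utf-16-le")

# A UTF-8-only parser would force two full passes over the data:
as_utf8 = doc_utf16.decode("utf-16-le").encode("utf-8")      # transcode #1
back_to_utf16 = as_utf8.decode("utf-8").encode("utf-16-le")  # transcode #2

# Lossless, but every parse now pays for two extra copies of the document.
assert back_to_utf16 == doc_utf16
print(len(doc_utf16), len(as_utf8))
```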
What don’t we need?
We can drop the complexity of our parser by excluding parts of expat or more modern parsers that we don’t need. In particular:
Character conversion (other parts of our engine take care of this)
XML 1.1 and XML5 support
Output serialization
A full rewrite of our XML handling stack
What are our options?
There are three Rust-based parsers that I know of, none of which quite fit our needs:
My recommendation is to implement our own parser that fits the needs and use cases of Firefox specifically. I’m not saying we’d necessarily start from scratch, it’s possible we could fork one of the existing libraries or just take inspiration from a little bit of all of them, but we have rather specific requirements that need to be met.
This is a continuation of my Are They Slim Yet series, for background see my previous installment.
With Firefox’s next release, 54, we plan to enable multiple content processes — internally referred to as the e10s-multi project — by default. That means if you have e10s enabled we’ll use up to four processes to manage web content instead of just one.
My previous measurements found that four content processes are a sweet spot for both memory usage and performance. As a follow up we wanted to run the tests again to confirm my conclusions and make sure that we’re testing on what we plan to release. Additionally I was able to work around our issues testing Microsoft Edge and have included both 32-bit and 64-bit versions of Firefox on Windows; 32-bit is currently our default, 64-bit is a few releases out.
The methodology for the test is the same as previous runs: I used the ATSY project to load 30 pages and measure memory usage of the various processes that each browser spawns during that time.
Without further ado, the results:
So we continue to see Chrome leading the pack in memory usage across the board: 2.4X the memory of Firefox 32-bit and 1.7X of Firefox 64-bit on Windows. IE 11 does well; in fact it was the only one to beat Firefox. Its successor Edge, the default browser on Windows 10, appears to be striving for Chrome-level consumption. On macOS 10.12 we see Safari going the Chrome route as well.
Note: For Safari I had to run the test manually, they seem to have made some changes that cause all the pages from my test to be loaded in the same content process.
We can see that Firefox with four content processes fares better than Chrome on all platforms which is reassuring; Chrome is still about 2X worse on Windows and Linux. Our current plan is to only move up to four content processes, so this is great news.
Two content processes is still better than IE, with four we’re a bit worse. This is pretty impressive given last year we were in the same position with one content process.
Surprisingly, on macOS Firefox is better than Safari with two content processes. Compared with last year, when we used 2X the memory with just one process, we’re now on par even with four content processes.
I included Firefox with eight content processes to keep us honest. As you can see we actually do pretty well, but I don’t think it’s realistic to ship with that many nor do we currently plan to. We already have or are adding additional processes such as the plugin process for Flash and the GPU process. These need to be taken into consideration when choosing how many content processes to enable and pushing to eight doesn’t give us much breathing room. Making sure we have measurements now is important; it’s good to know where we can improve.
Overall I feel solid about these numbers, especially considering where we were just a year ago. This bodes well for the e10s-multi project.
Test setup
This is the same setup as last year. I load the first 30 pages of the tp5 page set (a snapshot of Alexa top 100 websites from a few years ago), each in its own tab, with 10 seconds in between loads and 60 seconds of settle time at the end.
Note: There was a minor change to the setup to give each page a unique domain. At least Safari and Chrome are roughly doing process per domain, so just using different ports on localhost was not enough. A simple solution was to modify my /etc/hosts file to add localhost-<1-30> aliases.
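Concretely, the aliases described above are just hosts-file entries pointing each fake domain at the loopback address, along these lines (showing a few of the 30 entries):

```
127.0.0.1 localhost-1
127.0.0.1 localhost-2
127.0.0.1 localhost-3
...
127.0.0.1 localhost-30
```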
Methodology
Measuring multiprocess browser memory usage is tricky. I’ve settled on a somewhat simple formula of:
Where a parent process is defined as anything that is not a content process (I’ll explain in a moment). Historically there was just one parent process that managed all other processes; this is still somewhat the case, but each browser has other executables it may run in addition to content processes. A content process has a slightly different definition per browser, but is generally “where the pages are loaded” — this is an oversimplification, but it’s good enough for now.
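The formula itself didn't survive in this copy of the post, but from the Edge discussion below it appears to be: RSS for parent-classed processes plus USS for content processes, summed. A sketch under that assumption (the data shape and numbers here are made up, not real measurements):

```python
def total_memory(processes):
    """Sum a browser's memory: full resident set (RSS) for parent-classed
    processes, unique set size (USS) for content processes so pages shared
    with the parent aren't double counted."""
    return sum(p["uss"] if p["is_content"] else p["rss"] for p in processes)

# A made-up Firefox-like snapshot (values in MB):
snapshot = [
    {"name": "firefox",              "is_content": False, "rss": 400, "uss": 300},
    {"name": "firefox -contentproc", "is_content": True,  "rss": 250, "uss": 120},
    {"name": "firefox -contentproc", "is_content": True,  "rss": 240, "uss": 110},
]
print(total_memory(snapshot))  # 400 + 120 + 110 = 630
```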
My definitions:
| Browser | Content definition | Example “parent” |
| --- | --- | --- |
| Firefox | firefox processes launched with the -contentproc command line | firefox without the -contentproc command line, plugin-process which is used for Flash, etc. |
| Chrome | chrome processes launched with the --type command line | chrome without the --type command line, nacl_helper, etc. |
| Safari | WebContent processes | Safari, SafariServices, SafariHistory, Webkit.Networking, etc. |
| IE | iexplore.exe processes launched with the /prefetch command line | iexplore.exe without the /prefetch command line |
| Edge | MicrosoftEdgeCP.exe processes | MicrosoftEdge.exe, etc. |
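The per-browser rules above can be sketched as a tiny classifier. This is a simplified illustration, not the measurement harness's actual code; real matching would need to handle full executable paths and argument variants:

```python
def classify(browser, cmdline):
    """Return 'content' or 'parent' for a process, using the per-browser
    rules above (simplified)."""
    flag_rules = {
        "firefox": "-contentproc",  # content processes carry this flag
        "chrome": "--type",
        "ie": "/prefetch",
    }
    if browser in flag_rules:
        return "content" if flag_rules[browser] in cmdline else "parent"
    if browser == "safari":
        return "content" if cmdline.startswith("WebContent") else "parent"
    if browser == "edge":
        return "content" if "MicrosoftEdgeCP.exe" in cmdline else "parent"
    raise ValueError("unknown browser: %s" % browser)

print(classify("firefox", "firefox -contentproc -childID 1"))  # content
print(classify("edge", "MicrosoftEdge.exe"))                   # parent
```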
For Firefox this is a reasonable and fair measurement; for other browsers we might be under counting memory by a bit. For example Edge has a parent executable, MicrosoftEdge.exe, and a different content executable, MicrosoftEdgeCP.exe. Arguably we should measure the RSS of one of the MicrosoftEdgeCP.exe processes and USS for the rest, so we’re probably under counting. On the other hand we might end up over counting if the parent and content processes are sharing dynamic libraries. In future measurements I may tweak how we sum the memory, but for now I’d rather possibly under count than worry about being unfair to other browsers.
Raw numbers
| OS | Browser | Total Memory |
| --- | --- | --- |
| Ubuntu 16.04 LTS | Chrome 54 (see note) | 1,478 MB |
| Ubuntu 16.04 LTS | Firefox 55 – 2 CP | 765 MB |
| Ubuntu 16.04 LTS | Firefox 55 – 4 CP | 817 MB |
| Ubuntu 16.04 LTS | Firefox 55 – 8 CP | 990 MB |
| macOS 10.12.3 | Chrome 59 | 1,365 MB |
| macOS 10.12.3 | Firefox 55 – 2 CP | 1,113 MB |
| macOS 10.12.3 | Firefox 55 – 4 CP | 1,215 MB |
| macOS 10.12.3 | Firefox 55 – 8 CP | 1,399 MB |
| macOS 10.12.3 | Safari 10.2 (see note) | 1,203 MB |
| Windows 10 | Chrome 59 | 1,382 MB |
| Windows 10 | Edge (see note) | N/A |
| Windows 10 | Firefox 55 – 2 CP | 587 MB |
| Windows 10 | Firefox 55 – 4 CP | 839 MB |
| Windows 10 | Firefox 55 – 8 CP | 905 MB |
| Windows 10 | IE 11 | 660 MB |
Browser Version Notes
Chrome 54 — aka chrome-unstable — was used on Ubuntu 16.04 LTS as that’s the latest branded version available (rather than Chromium)
Firefox Nightly 55 – 2 CP is Firefox with 2 content processes and one parent process, the default configuration for Nightly.
Firefox Nightly 55 – 4 CP is Firefox with 4 content processes and one parent process, this is a longer term goal.
Firefox Nightly 55 – 8 CP is Firefox with 8 content processes and one parent process, this is aspirational, a good sanity check.
Safari Technology Preview 10.2 release 25 was used on macOS as that’s the latest branded version available (rather than Webkit nightly)
Edge was disqualified because it seemed to bypass the hosts file and wouldn’t load pages from unique domains. I can do measurements so I might revisit this, but it wouldn’t have been a fair comparison as-is.
36 hours from waking to sleeping to get to London. Land at LHR, meet up with most of your crew, take a harrowing drive on the wrong side of the road with a semi-pro rally driver to a town on the outskirts of The City. Start with a full english breakfast, walk the High Street — serious question: does every town have a high street? I’m a fan — hang with our friend Bea for a bit, roll out for dinner, crash on an airbed. Slowly adjust to British accents.
Dinner: Surprisingly decent Spanish tapas after a rather long wait. Portland’s more of an under-promise over-deliver place (45 minute wait, really takes 15), London’s definitely an over-promise, under-deliver place (10 minute wait, really takes an hour).
Step two: Travel to The Alps
Two life changing words: airport lounge.
Plan to wake up at 6am, Andrew wakes up earlier, hence we wake up earlier. I am tired. So very tired. Rally drive through what feels like the countryside, Foy remarks the trip is “pretty rural” — definitely charming, there’s even a little pub at a fork in the road and nothing around it, this is basically what I’ve always assumed England was like. We’re going to LGW, new airport to check off the list. Today it’s easyJet, very affordable prices, Phil got us “speedy” boarding, I have no idea what that is but I’m assured it means no waiting with the filthy, filthy, masses. Quick trip through security, for some reason my laptop is set aside for enhanced interrogation, scanner dude seems equally confused, shrugs and runs it through again. High fives all around.
Two life changing words: airport lounge. Andrew’s been obsessing about working the travel system, and by god he’s figured it out. He gets the four of us into a lounge for freezies, we grab some solid buffet breakfast, caffeinated beverages, and I’m introduced to the wonderful world of bacon rolls, with HP sauce naturally.
An aside: this is not legit bacon, but what I call sad bacon, or I guess British bacon. It’s kinda hammy, it’s still good, but let’s not pretend it’s bacon. On the flip side, Phil refers to legit bacon as burned bacon. To each their own. Oh also the Brits call legit bacon “streaky bacon” and let’s be honest, that’s pretty darned cute.
Anyhow, decent breakfast, caffeinated, generally relaxed before our flight. Let’s do this. Chill out and read a bit at the gate, but oh yeah we have speedy boarding so off we go. Take that, aforementioned filthy masses. Today we learn a bit about easyJet, primarily that it’s mostly awful; in discussions later with Phil I learn it’s basically a crapshoot, which in retrospect is probably too generous. Back to speedy boarding: we get to walk down the ramp first to the tarmac, it’s super cold out, and you guessed it, we get to wait on the tarmac, me without my coat on. But we’re the first to wait on the tarmac, you know, instead of the heated tunnel, so yay? Okay we board and we get the first row so I guess that’s nice. I’m next to some folk with a newborn, like I dunno, maybe a month old? It starts to cry, this does not bode well. Luckily this isn’t my first rodeo: headphones on, Deafheaven playing, Kindle out and I’m good to go. Takeoff is delayed, but they keep the door open so I can see my breath. The flight is uneventful, the baby is chill.
Arrive in Geneva and take a bus from the runway to the airport proper. We have to go through customs. This confuses me to no end: what’s the point of the EU if you have to go through customs? Now a bit of nerding out: Switzerland is, in fact, not in the EU while the UK is (for now at least). On the other hand Switzerland is a Schengen member while the UK is not. Hence customs. But I’m not an EU citizen so I get to slum it through the slow line, which whatever no big deal, because, yes you guessed it, I got a Swiss stamp in my passport, and really that’s what it’s all about. Baggage takes forever and combined with the delayed flight we’ve missed our shuttle; luckily they’ve sent another one. Still heavily jet lagged I grab a stupidly expensive ham sandwich and a can of paprika Pringles — really how could I not — and off we go to France! Fall asleep for the pretty part of the ride, wake up as we roll up to our rental dusted with snow. This feels like a good omen.
Dinner: Stroll downtown Chamonix, we’re pretty jet lagged and choose the first brasserie we can find. The place is pretty nice, I grab a local beer that’s not awful and we all get pizzas. Any pretense of blending in is lost.
Step three: Hit the slopes
A note for folks who have not been to Chamonix before, but maybe another ski resort. Chamonix itself is a sizeable place with a nice downtown area, plenty of shopping, food and whatnot. Very walkable. But this isn’t really a walk to the lift type situation, they have a decent and free bus system running to a bunch of lift areas that are covered by the super-mega-ultimate-pass or whatever the marketing people call it. Our rental is a five minute walk away from the main stop and given my downhill ski boot situation this is a good thing. I wouldn’t recommend being further afield than that, and yes you could technically drive to the slopes but they don’t have much parking and it’s the Alps so there’s an extra terrifying mountain weather factor. Unless you’re a semi-pro rally driver such as Phil just take the bus.
Sunday – Les Houches
I declare mutiny, no more lifts today
Wake up with a massive altitude headache despite copious amounts of water and peeing, find some paracetamol, verify that’s just British for Tylenol. Note to self: don’t drink beer your first day at altitude. Less brain-dead folks check the weather, all agree Brévent is where it’s at. The walk over to the bus stop is pretty easy. Our over-eager compatriots Phil and Foy — maybe just Phil, sorry Foy — can’t wait for the proper bus so we just take the next one that shows up. Turns out it’s going to Les Houches, which is in fact a ski area, so whatevs. Phil and Foy then bail one stop early — it’s too damned hot in this bus! — Andrew and I throw some bows and make it off the bus as well. Apparently we can catch a gondola here, but as it turns out it’s not really running and people are just standing in a line extending out of the building with sad blank looks. Maybe there’s a strike or something. Hike through a parking lot, slog up three flights of stairs in ski boots to a lift — my companions are snowboarders, it’s not quite so awful for them — Phil’s gone before I get up there — this is a theme — Foy’s accosting the gate, turns out his ticket doesn’t work, heads back down, Andrew and I wait. It’s cold, the snow is wet. Foy heads up, now our tickets don’t work. Le sigh. Slog down to the lift station and are informed our tickets need to be “connected to the internet,” this is such an absurd statement we can’t help but have a good laugh. Back up the stairs. This is the point I realize I’ve been living at sea level too long and want to die. Shit viz is declared by one of the Brits — this will also be a theme — completely overcast, wet snow falling, I learn my pants are not waterproof. The wind gets mean; this is awful. Everyone knows the solution to awfulness is food and warmth: lunch time! Bunch of cute little restaurants nestled around the slopes, which is truly how I thought the Alps would be.
All are full, quaintly dubbed complet by the French which sounds better than sorry you missed the lunchtime memo, these folks are in for the long haul and you’re SOL. I go ahead and hop in one, it looks promising, Phil says we just want drinks (we do not just want drinks), hostess wanders around a bit, checks in on some folks, walks past an empty table, comes back and informs us very frenchly that “no we cannot.” Next place is closed to us as well. Ski to the bottom to check out the base area. There is no base area to speak of, this is more a place with a lift where the road ends. There are two spots, neither with indoor seating, grumpily eat food outside. I declare mutiny, no more lifts today. Not much complaint, we go home.
Dinner: Andrew makes bolognaise.
Monday – Brévent-Flégère
Extreme chills, huddle by the heater. Everything hurts.
Okay, Brévent for reals this time. Phil’s still bus-averse so it’s declared we’ll hike up a hill to the station, doesn’t look that bad on Google Maps! Oh right, it snowed last night, which is exciting in the abstract sense, but said hill turns out to be a fun trifecta of snowy, icy, and steep. There isn’t a sidewalk. This ski boot situation is making me consider taking up snowboarding. Anyhow the hike up is no fun, Andrew gets slightly sideswiped by a struggling delivery van. Phil’s disappeared from view, I assume he’s most likely already done a few laps on the mountain. We arrive to somewhat shit viz and experience our first cloudening, which, for the uninitiated, is when you become literally ensconced in a cloud and shit viz becomes near zero viz. Stick it out for a full day, wrap up, drinks at the base of the gondola at a self-declared slow food bar which somehow doesn’t serve food, but most certainly serves beer. Phil and Foy’s true colors shine as they show us how real Brits drink: which is copious amounts of (admittedly low alcohol) beer along with excellent stories and lots of laughter. I’m starting to feel pretty crappy and lag behind on the drinking; Foy takes pity on me and swaps out my untouched pint for his empty glass. Foy, you’re a good man. After some shenanigans getting home (impromptu roadside snowboarding, yet another early bus departure) Phil and Foy get deposited at the house. Andrew and I go to the pharmacy and score some cough drops. Head back to gather folks for dinner, alas they’ve retreated to their rooms. Andrew queues up Dirk Gently’s Holistic Detective Agency and I zone out for a while; Phil reappears spry as ever. I swear this guy’s got a fountain of youth hidden somewhere. Collapse in bed for a fever-plagued night. Extreme chills, huddle by the heater. Everything hurts. Super hot, lie down on cold tiles for a while, ache, repeat.
Dinner: Pringles.
Tuesday – Les Grands Montets
I am sick, it’s snowing, this is dumb.
I’m dead to the world. Andrew knowingly comes in my room, hands me some cold pills and cough drops. Get to the mountain okay, a bit snowy. Shit viz, can’t tell where bumps are. I take one run, stare fixedly at the falling snow up higher and decide not to go up to the top with the rest. Pretty loopy on cold medicine I do a few lower elevation runs, eventually topple over comically trying to get to the chairlift. Others come down, declare it the worst ride of the trip. We go home.
That night Andrew and I pop in a pharmacy and after a bit of negotiation get some real drugs. Phil marches ahead and we find him ordering dinner at a place that only has outdoor seating. I am sick, it’s snowing, this is dumb. Finally get real sleep with the help of some sort of French sedative syrup.
Dinner: Half a burger on a hard roll.
Wednesday – Nope, nope, nope.
… wake up in the middle of the night drenched in sweat, swig more syrup
Andrew and I give up, stay home. Others go somewhere, more shit viz, come home. My new cold medicine crack pills make me feel somewhat better if not tweaked up, clean the place while Andrew sleeps and insist on using the dishwasher because, well I don’t have one at home, and really it’s the small things. Lounge and read, Foy declares I look like a stoner. This is ok.
Convince the crew to eat a late lunch of soupe aux pois. Simple food, it does the job. Folks want to wander the town. In wet falling snow. Fuck that. Head home and stop by a legit French butcher, use my memorized phrase to no avail; they have no chicken. Get it from the supermarket like a goddamned animal. I make dinner, Foy declares it Game of Thrones-esque, and people seem to like it well enough. This is the best I’ve felt so far, but still pretty shitty. Sleep the sleep of the dead, wake up in the middle of the night drenched in sweat, swig more syrup, go back to sleep.
Dinner: Chicken thighs over onions, potatoes, and carrots in a red wine reduction.
Thursday – Brévent (again)
We all buy tickets for Italy tomorrow at 9:30. This is optimistic.
Wake up at 9:30 drenched in more sweat. Feel amazing; I guess this is what a fever breaking is like. We realize it’s too late to book tickets to Courmayeur (it’s in Italy, you need to book a bus ticket ahead of time). We take the easy route and go to Brévent, and take the fucking bus this time. Somewhat sunny, only a few cloudenings, quite fun. Get to ride the world’s shortest funicular; day is made. Phil and Foy split off, it’s a bit easier just coordinating two people. Nice fast runs, not too crowded. Sit-down lunch of saucisses frites, which it turns out is two hotdogs on top of fries. Drink vin chaud for the first time, it’s quite nice. We all buy tickets for Italy tomorrow at 9:30. This is optimistic. My phone dies the final death, screen is completely hosed.
Dinner: Raclette at a tourist trap (we knew that ahead of time). It’s ridiculous: we’re going to scrape molten gooeyness off half a wheel of cheese sitting under a tableside broiler. Phil gets there first and scores us a spot in the “cave.” Foy, Andrew, and I drink a bottle of wine chosen for its characteristic of not being the absolute cheapest. Foy being Foy can’t just have standard raclette, he wants fries too (which to be honest sounds pretty good) and somehow manages to get a steak tartare instead (which happens to come with fries). Luckily he and Phil are fans so no worries. The night goes on and Phil is not well; our plague is now his plague. It’s declared we must get the local specialty, génépy, for dessert. It’s some sort of liqueur made from a local flower, I think. Not bad, tastes like sugar-free mouthwash. Andrew can’t hang; he chases each sip with a large slug of water. He’s a trooper.
Friday – Courmayeur
I don’t know how young Eric made it down.
– Foy
Off to Courmayeur, Phil is non-responsive so we leave him to his own devices. Actually go into town for breakfast. Get legit coffee and hit a patisserie for a delicious croissant. Bus ride is interesting, nice views, and we take a tunnel through Mt Blanc from France to Italy. The bus stops at a sketchy, completely abandoned parking lot for some skyway thing; we don’t get off. We should have got off. Land in town, it’s a pretty swank place. Andrew pulls up Google Maps, which yet again directs us up a hill; we get passed by posh cars and find a gondola. Foy gets chastised for not having the proper paperwork but we all get in. Oh man, the snow is great! Sadly we’re in complete cloud most of the time, worst viz yet. We eat at the Italian ski slope version of a food cart pod. I fumble through ordering in half English, half broken French, and a passable grazie, and get what I think is going to be vaguely polenta with meat. Yolo. Turns out it’s pretty good. I decide we should take the cable car to the top because, yay, a slightly new form of transport. This is a bad idea: one run, but you can only see maybe 5 feet ahead; this is 100% cloudening, no snow definition whatsoever. I glance back and Andrew’s stopped and is waiting for Foy. Foy eats it because we literally can’t tell which way is downhill, Andrew starts to laugh and then topples over himself. I make it down to the next lift area; Foy and Andrew take longer. Foy states: “I don’t know how young Eric made it down.” I’m occasionally referred to as young Eric, I assume this is good. Lower down we have a good time travelling around the mountain, some playing in the trees, at which point Foy disappears for twenty minutes or so. He’s pretty sure he found some sort of an animal trail. Pause for more drinks and then seek out the mythical gondola to take us back to the bus. We finally find the right spot and head back to the base area on the other side of the highway from the fabled skyway.
Scratch our heads a bit and figure out there’s an underpass, get back early, and manage to scam our way onto an earlier bus. Go back through the tunnel, and sweet jesus, we’re blinded by sunlight and blue skies on the other side. Fuuuuuuuck. Get home, Phil is still dead. Buy a new phone online, pack up, fall asleep early.
Dinner: Andrew and Foy indulge me and we go to a place with fondue; they get steaks. I am happy.
Step four: Peace out Alps
easyJet: Life is Pain
I cook everything left in the fridge for breakfast. Leave our leftover beers for the next crew. It’s the first blue-sky morning, and we’re leaving. Chill at the lounge in Geneva, eat some more decent buffet food, free beer; not a bad exit.
We wait out a delay in the lounge and then sprint to the gate when they change the departure time. Roll back to step two, remember easyJet? Yeah, they still suck. They load us onto a standing-room-only bus, drive us over to the plane, and then make us stand in the bus for half an hour. They board us and guess what? Yeah, the plane is still delayed, but they wanted to load us up “just in case.” Sit on the tarmac for a few hours. I come up with a few new slogans for easyJet as we wait, such as: easyJet: Because fuck you; easyJet: Life is Pain; and easyJet: That tea’ll be 2.50. Finally make it to Phil’s place around dinner time; poor Foy still has to drive several hours back home to Leeds.
Dinner: Eat out at a curry shop (I’m Britishing so hard right now) with our friend Maggie. I approve.
Step five: Down day in London
We’ve out-Britished ourselves this time.
Wake up and Phil’s already gone; he and Maggie are off to Finland for a couples’ ski weekend. Dude’s a machine. Andrew does some due diligence and arranges a mini-cab for us tomorrow. We take a train (or is it an overground subway?) into London and get off at a station that’s literally on a bridge, more novel transportation for me. We hit up the Tate Modern, which is great, use a pedestrian-only bridge to cross the Thames, and check out a fancy suit store Andrew likes. Take the tube to meet up with Bea again at a ginormous comic book store, grab fish and chips (Britishing even harder right now), and have a generally delightful time. Andrew and I attempt to go to the oldest continuously operating pub in London; it’s closed. Go to a pub that’s open instead, it’s clearly better. Get to take a double-decker bus home, top deck, duh; I show some restraint and don’t take the front seat. After dinner we finish packing and get ready for an early morning drive to LHR.
Dinner: Grab another pint at the pub that’s been declared reasonably safe, then hit up a kebab shop. We’ve out-Britished ourselves this time.
Epilogue
An uneventful drive to LHR, one more lounge, and I’m off to Portland again. At this point all that’s left is a lingering cough and a righteous stomach bug that leaves me collapsed and shivering on the bathroom floor for a few more days.
The trip was mostly a bust in the skiing and feeling like a human being sense, but it was nice to get out of town, detox from work, and see some good friends. As a rule I’d say don’t bother with Chamonix unless a) you live in Europe and b) you can schedule a trip at the last minute so you know it’ll be decent weather. I might go back in the summer though.
Windows build sadness aside, as I went back to my old standby, Linux, I started to wonder about its build times. Am I doing my absolute best here? Let’s do some tests. I’ve been using Clang ever since I needed to do asan builds — yeah, yeah, I know GCC supports this as of version xyz, but I also get fun things like our static analysis plug-in and pretty error messages.
As an aside: to install Clang I just downloaded the x86_64 Ubuntu build from LLVM’s official site. Nice and easy. For GCC builds, go to the GCC site and check out their sweet binaries page, where they let you know you can build it yourself, you filthy animal, and if you’d please download the source from a mirror, that’d be great. If we’re evaluating on websites alone, GCC has already lost.
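For the record, “nice and easy” really is just unpacking the tarball and putting it on your PATH. The exact URL and file name below are from the 3.9.0 release page at the time; adjust for whatever version you grab:

```shell
# Fetch and unpack the official clang binary release for Ubuntu 16.04.
wget http://releases.llvm.org/3.9.0/clang+llvm-3.9.0-x86_64-linux-gnu-ubuntu-16.04.tar.xz
tar xf clang+llvm-3.9.0-x86_64-linux-gnu-ubuntu-16.04.tar.xz

# Put it on PATH (or point CC/CXX at it from your mozconfig).
export PATH="$PWD/clang+llvm-3.9.0-x86_64-linux-gnu-ubuntu-16.04/bin:$PATH"
```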
So here are the results of doing clean builds, no ccache:
Compiler      Clobber build time
clang-3.9.0   18 min
gcc-5.4.0     21 min
gcc-6.3.0     22 min
Build system specs: Ubuntu 16.04 LTS, Intel Core i7-4770 (4 cores / 8 threads) @ 3.40GHz, 32 GB RAM, SSD; example of .mozconfig used.
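A mozconfig along these lines gets you the kind of debug, asan-enabled, dmd-enabled clang build I’m talking about (a representative sketch, not necessarily the exact file I used):

```shell
# Use clang instead of the system gcc.
export CC=clang
export CXX=clang++

# Debug build with asan and DMD enabled.
ac_add_options --enable-debug
ac_add_options --enable-address-sanitizer
ac_add_options --enable-dmd

# Keep a per-config object directory so switching compilers
# doesn't clobber everything.
mk_add_options MOZ_OBJDIR=@TOPSRCDIR@/obj-asan-debug
```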
Why those versions? I was already using clang 3.9.0 (and it was the latest release at the time), gcc 5.4.0 is what ships with Ubuntu 16.04, gcc 6.3.0 is the latest release.
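The measurement itself is nothing fancy: clobber, then time the full build. A tiny wrapper like this does the job (build_timed is just a throwaway helper of mine, not part of mach):

```shell
# Throwaway timing helper: run a command, discard its output,
# and report wall-clock seconds.
build_timed() {
  start=$(date +%s)
  "$@" > /dev/null 2>&1
  end=$(date +%s)
  echo "took $((end - start))s"
}

# Real usage: ./mach clobber && build_timed ./mach build
build_timed sleep 0
```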
Now you may argue that the executable produced by compiler foo is going to be smaller/faster/better. And I’d argue that as a dev I don’t care: I just want to build quickly so I can run my test and repro a bug. Generally speaking, all my builds are debug, asan-enabled, and dmd-enabled. They’re never going to run fast.
I’d love to hear your thoughts on how to get optimal build times for Firefox on all platforms. Maybe I can build with better settings — for GCC I just used a vanilla config I found on another blog. I tried doing a PGO build of clang trained on building Firefox, but I ended up getting worse times with it. Odds are I was doing it wrong, but it would be pretty cool if Mozilla could produce such builds in our build infrastructure; I filed bug 1326486 for this.
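For anyone who wants to try replicating (or correcting) my PGO attempt, the rough recipe as I understand it from LLVM’s docs is: build an instrumented clang, build Firefox with it as the training workload, merge the profiles, and rebuild clang against them. A sketch, with all paths illustrative:

```shell
# Stage 1: a release clang instrumented for IR-level profile collection.
cmake -G Ninja ../llvm -DCMAKE_BUILD_TYPE=Release -DLLVM_BUILD_INSTRUMENTED=IR
ninja clang

# Training run: build Firefox once with the instrumented compiler,
# which writes out *.profraw files as it runs.
CC=/path/to/instrumented/bin/clang CXX=/path/to/instrumented/bin/clang++ ./mach build

# Merge the raw profiles, then rebuild clang against them.
llvm-profdata merge -output=firefox.profdata /path/to/profiles/*.profraw
cmake -G Ninja ../llvm -DCMAKE_BUILD_TYPE=Release -DLLVM_PROFDATA_FILE=$PWD/firefox.profdata
ninja clang
```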
It would be interesting to see the gcc and llvm folks use our codebase as a testing ground for build time performance — maybe it’s time to fire up arewecompiledyet.com.
Recently I was looking into doing more dev on a Windows machine — you know, because basically all our users are on Windows — and ran into the sad, sad fact that a clean build is going to take me 40 minutes on a brand-new “pro-level” laptop. That’s just unacceptable. I filed a bug for that; the response was roughly “yeah we know, get a better machine” and it was closed as invalid. I guess I get it, as far as a short-term solution that makes sense, but this is pretty sad. It’s sad because on my 3-year-old MacBook Pro I get a 22 minute clean build for OS X. On my Linux desktop with similar specs I’m seeing 18 minute builds (this is without ccache).
I should note we’re talking about clobber builds here, for a platform dev poking around in C++. A lot of work has been done to make builds for frontend folks super fast with artifact builds (we’re talking a minute or two). We could also look at iterative builds, but I’m often working in heavily shared files (strings, etc.) where it doesn’t really matter.
Not all is lost
It sounds like there’s some work to make things better, although I don’t have any bug numbers and have no clue what the priority is. It was also pointed out that sccache is going to work locally on Windows, which should be a big improvement; it will be interesting to see some actual numbers.
I’m not sure if there’s more I can do to improve things as-is, here’s what I’ve done so far:
– Disabled malware scanning for my dev directory
– Configured the laptop to “performance” mode
Just piping the build output to /dev/null was actually rather effective: it shaved about 5 minutes off the build time. This isn’t a great solution though, as I’d like to see progress and warnings. Another suggestion was to disable the parts of Firefox that I don’t need; unfortunately I often tinker with files that are used throughout the codebase, so I can’t disable things without worrying about breaking them.
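The reason the warnings vanish is that mach funnels them to stdout along with the progress spam, so a stdout redirect swallows everything. In a plain shell pipeline, stderr would survive the redirect; a toy illustration (file and warning text made up):

```shell
# Only stdout is redirected; anything written to stderr
# still reaches the terminal.
{ echo "compiling nsFoo.cpp..."; echo "warning: unused variable" >&2; } > /dev/null
```

A middle ground is redirecting to a log file instead, e.g. `./mach build > build.log`, and grepping it for warnings afterwards.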