Lies, damn lies, and front-end tracking (alexeymk.com)
183 points by AlexeyMK on July 17, 2020 | 96 comments



>The less JavaScript (especially third-party) you have on your landing pages, the better. It’s a better customer experience, and it improves your page’s conversion and quality score.

Really wish I read this more. And not just for the landing page.


I'm surprised we haven't run into this before now. JS performance on Android has been stagnant for a while now, yet we still see the same awful JS-heavy applications.

This doesn't bode well for the end of Moore's law. That is Moore's law in the performance sense, i.e. guaranteed future single-core performance increases [even if small], not Moore's law as in the doubling of transistor density.


Companies that keep their JS footprint small will have faster-loading pages, and that translates to a non-trivial increase in profits. Sadly, people running businesses don't understand this, and tech teams certainly aren't going to voluntarily make their jobs harder without word coming down from above. So do your company a favor - educate those in charge about how much page load time matters.


This isn't strictly true.

Most B2B SaaS companies, which I would bet is where the most common front-end developer on HN works, do not see significant benefits from fast loading. They will see issues with slow loading, and they can't ignore bundle size, but for these kinds of sites the development speed gained from a pure JS front end is realised as value by the client.

With B2C SaaS you start to see a different story, largely due to the wider variety of devices and a more mobile-friendly focus.

Non-SaaS businesses facing Joe Public - product sales or news-focused - are likely as you say.


> tech teams certainly aren't going to voluntarily make their jobs harder without word coming down from above

SourceHut does this. It's not a categorical impossibility, but I agree devs are generally going to be inclined to continue with 'business as usual'.

Does it really make dev work that much harder though? I'm not really qualified to comment. Presumably it's going to simplify things in at least some ways.

> do your company a favor - educate those in charge about how much page load time matters.

There you have it - the people making the case for less JavaScript may be the devs themselves.


Amazing that this isn't universally known at this point.


Unfortunately, Amazon S3 and Akamai CDNs will come to their rescue in still making their pages load fast. As a plebeian, all my efforts at keeping my pages light will be in vain, as I simply can't afford the CDN infrastructure of these tech giants.


Don't today's bloated, slow-loading pages already use these CDNs?


> yet we still see the same awful JS-heavy applications.

Isn't Google's AMP just a bunch of Javascript too? So even their "fix" is just perpetuating the issue


I fucking hate AMP so much. Wish someone ended it


I don't like the idea of AMP, and I was really concerned when I first heard about it.

However, ::knock on wood::, I don't seem to have encountered it that much.

Maybe it's because of how I browse, and that I don't follow "football" news, but I almost never see the URL in my browser, nor links to it anywhere.

How do people typically come across AMP content?


I came across AMP regularly from Google search on mobile. I switched to Firefox on Android and DuckDuckGo and haven't noticed it since. Not why I switched, but a nice side effect.


Ah! Thank you for pointing this out. You just cleared something up for me.

I use Firefox on my mobile and I always wondered why people keep complaining about Amp, since I never see it.


Why?


third-party js no less.


> Moore's law in the [single-core] performance sense…not Moore's law as in the doubling of transistor density.

Moore’s law is only about the latter, it says nothing about performance.


Yes, but in practice the two have been strongly correlated for a very long time.


Abysmal programming has been cancelling out any Moore's Law benefits for a while now, but the present Android/npm generation has become so good at it that at times it seems they have actually reversed Moore's Law with their incompetence!


> inform Google/Facebook via their server-side APIs when feasible, instead of trying to load a pixel

I was wondering when this was going to be the norm.

Can't block third parties for privacy if the first party talks to them behind the scenes.

(If I understand what this means)


I imagine that the vast majority of users of Facebook SDKs are not actually motivated to inform Facebook about anything. They're using the SDKs to enable login, likes, etc. If this moved to the server side, they might only inform Facebook about actions that involve Facebook.

edit: for the specific case of conversion tracking as mentioned in the article, I would hope that only conversions referred by Facebook/Google would be reported.
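
For context, here is a minimal sketch of what "inform Google via their server-side APIs" can look like, using the Universal Analytics Measurement Protocol. The tracking ID, client ID and event names are placeholders, and the function name is just illustrative:

    // Report a conversion to Google Analytics from the server instead of via a pixel.
    // "UA-XXXXXXX-1" and the client id are placeholders, not real values.
    async function reportConversion(clientId: string): Promise<void> {
      const params = new URLSearchParams({
        v: "1",              // Measurement Protocol version
        tid: "UA-XXXXXXX-1", // property / tracking id (placeholder)
        cid: clientId,       // anonymous client id, e.g. read from the _ga cookie
        t: "event",          // hit type
        ec: "conversion",    // event category
        ea: "signup",        // event action
      });
      await fetch("https://www.google-analytics.com/collect", {
        method: "POST",
        body: params.toString(),
      });
    }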


A good portion of the SDK usage in mobile apps is not nefarious. They're embedding it for advertising conversion tracking which would respect the "limit ad tracking" setting in your device's settings.

The problem is that beyond whatever functionality the developer of the app intended to use the SDK for, it also does its own thing of stalking the user even if the app doesn't end up using any SDK features at all.


Exactly my point. The third-party app is not nefarious, but the SDK is. A server side API would have a better chance of doing the useful bits without the nefarious parts.

I doubt many competent server operators would be willing to run Facebook-supplied binaries on their precious servers, especially given Facebook’s track record of SDK crashes. (Not to mention PCI requirements, etc.)


We ran into the same exact issue at my previous employer. It's scary how similar it was to your situation given we also moved our site over to Next.js and we were a competitor to Opendoor. It also wasn't the first time I've run into bad metrics in a legacy app making it appear to convert better than a new version.


Sshhhh. So many people will lose so many bullshit jobs if this gets discussed widely. There are entire tertiary industries built on top of front-end tracking and most people with a modicum of analytical ability know it's turtles all the way down.


I thought this wasn't talked about because we all understand the limitations, accept them, and still find value. This sounds like a maturity thing, not a tool thing.

Lots of user experience professionals find utility in data that serves discovery and design sprints. Lots of product management professionals find utility in data that serves validating incremental feature engagement.


What do you mean by "turtles"?


A turn of phrase:

https://en.m.wikiquote.org/wiki/Turtles_all_the_way_down

In this context, I'm referring to the "trust us, it works" attitude that people who sell/provide analytics on front-end tracking use to justify their effort/expense. Based on experience, when you point out obvious problems doing analytics with JS, they keep repeating the mantra "but you _need_ front-end analytics" infinitely.


The headline, the problem and the solutions are all different things.

- Front-end tracking did not lie; the people tracking it were just not aware. If Opendoor was spending on ads, the marketing team would be the first to see a disparity between clicks on the ad network's end and pageviews on their own.

- As such, this is also a reason why you need to check your server hit logs against your analytics.

- Where the author is right is that front-end numbers are incorrect, largely due to blockers.

- His understanding of bounces is also incorrect. A bounce is when there is no second "interaction event"; many marketing folks fake-fix the bounce rate by sending a scroll event as an interaction event (see the sketch after this list).

- I always tell people that analytics numbers are "signals", not "metrics"; they are not accurate enough to be called "metrics".
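
To make the bounce "fake-fix" concrete, here is a rough sketch using the classic analytics.js API; whether the hit suppresses a bounce hinges on the nonInteraction flag (the category and action names are illustrative):

    // Assumes the standard analytics.js snippet has already defined `ga`.
    declare const ga: (...args: unknown[]) => void;

    window.addEventListener(
      "scroll",
      () => {
        // The "fake-fix": nonInteraction defaults to false, so this counts as an
        // interaction event and the session is no longer reported as a bounce.
        ga("send", "event", "engagement", "scroll");

        // To record the scroll without distorting the bounce rate, send it as a
        // non-interaction hit instead (one or the other, not both):
        // ga("send", "event", "engagement", "scroll", { nonInteraction: true });
      },
      { once: true }
    );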


If these numbers were significantly changed by a server side page view, I'd bet their page view wasn't firing any earlier than the top or bottom of the body tag, or maybe even on window ready. Anyone that's spent any real time outside of dropping in a snippet understands the implications of users navigating before hits fire, and it sounds like the team just didn't have that experience.

Like you, I also work to make sure people understand that client-side data is not necessarily accurate, but I still refer to it as metrics and KPIs, just with the caveat that we should bank on consistency over accuracy, keeping our analysis geared towards behavior and trends versus most business people's immediate desire to take it to an operational/business-intelligence type place.


I don't think this is an issue with the on-site analytics, but with the quality of the traffic and website performance. If you make sure that:

1) Your site loads in under 3-4 seconds for any user.

2) The user is interested enough to wait 3-4 seconds until the page loads.

Then most issues will be solved.

The problem with ads in many cases is that the traffic they send is of very low quality or just bots. In the end you already know from your ads provider how many users they say they sent and you should always use that when calculating ad conversion rate.

Also note that with Cloudflare, visitors who never actually even tried to load your page (bots, crawlers, scrapers, any HTTP request) will be counted as bounced users.
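
A worked example of the denominator point above, with made-up numbers: if the ad network reports 10,000 clicks but the front-end only records 7,000 sessions (blockers, users leaving before the JS fires), using the front-end count overstates the conversion rate.

    // Hypothetical numbers for illustration only.
    const clicksReportedByAdNetwork = 10_000;
    const sessionsSeenByFrontEnd = 7_000;
    const conversions = 500;

    // Using the ad network's click count as the denominator:
    console.log(conversions / clicksReportedByAdNetwork); // 0.05  -> 5%

    // Using front-end sessions inflates the rate:
    console.log(conversions / sessionsSeenByFrontEnd);     // ~0.071 -> 7.1%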


It's amazing that 3-4 seconds is considered a reasonable target in 2020 when 8 seconds was a target on freaking dialup. It's like having a Ferrari and considering outrunning a horse a success.


No one thinks that's a reasonable target. That's insane. I don't know who that person is or what he's in charge of, but that's the fastest way to being a dead company.

100 ms max on backend processing, 500 ms max on first time to interaction. Client connection speeds matter, obviously. 3-4 seconds is get your fired ass out of here in my world.


3-4 seconds includes the DNS resolving, downloading HTML, CSS, JS, media, parsing them, creating the DOM, layout, calculating styles, rendering, etc. It's very hard even for a static imageless site to load in under 1 second on a low-end mobile device.

That being said, for good UX the target should definitely be under 1 second to interactive, but even on desktop, if your users can't wait 3 seconds after they click the ad, they are not actually interested at all in your product.

People are also used to waiting a bit when they initiate a transition between sites/applications, so the transition feels faster than it is. It starts feeling slow when you are already on the blank page, with time to look at the loading spinner.

I'm not saying slow is good, but I highly doubt that you can make any site have 500ms TTI for any user (they could easily have 150ms ping to your closest server). I assume that TTI means having access to all website functionality, not some simplified placeholder UI.


If users are used to waiting, it's because people like you have trained them to be. You list all the things that go into rendering a usable UI as though we don't all know that. Users probably don't, and that sounds like a lot. It sounds like the kind of excuse for shitty load times that PMs use when people do complain about slow websites.

And you can doubly tell how much of a PM you are by assuming that everyone is accessing your product via an ad and by how much you don't care if they are interested after 3-4 seconds. That's a lot of money you're leaving on the table.

Low-end mobile devices are not the right metric for TTI.

Most people don't realize how fast the web can be because they don't bother to benchmark until they are well into the tooling choices and feature development and maybe someone somewhere decides to complain about speed.

A functional Pyramid app with dynamic templates using Mako, and some db queries via SQLAlchemy can fully load in under 100ms.

The bottom line is that speed is a feature. It's not just one feature; it's your most important feature if you want users. It's more important than developer productivity; it's more important than any other feature. You don't know this because you work in a b2b/enterprise space where users don't have a choice. Their managers make the choice. But if you ever work directly on consumer products you'll find this out really quickly. The difference between ~.5 sec TTI and 4 sec TTI is millions of revenue per minute in eCommerce.

But just because it's b2b doesn't mean it has to be bad and suck people's time away from them. You have the option to learn something and change your priorities. But you won't.


Wow, a personal attack based on assumptions.

I just want to say that I'm not a PM and I don't even work in a corporation. I have spent my entire career focusing on performance (I learned programming for and during algorithmic contests where every millisecond really matters, made open-source contributions that vastly improved performance, worked in the gaming space where even users care a lot about performance, and now work in the analytics space, where I created a high-performance, lightweight alternative to some heavy and data-demanding products).

I respect your opinion, but my comment was based on experience, knowledge of human behavior and analytics and A/B tests ran on millions of users. I do agree that it might differ from product to product, but in the majority of cases you just get to a point where it is "fast enough", users no longer notice and spending more time on optimizations takes too much effort for the benefits it brings.


Whoever expects a site to load in under 500 ms has only a vague idea about the real world, and probably never tried enabling connection throttling in developer tools.

3-4 sec TTI is to be expected from a fully static, server-rendered website delivered from an AWS CF edge through a realistic sloppy mobile connection.

Even at 5+ sec TTI you will probably have green PageSpeed with rating in the nineties if everything else is good.


What industry sector are you in?


I've worked in both b2b and consumer spaces. Finance, Market Research, Payments, Education, and Real Estate.


Then why the fuck is the average website measured in megabytes


To be fair, they said 3-4 s for any user. Hopefully most users will see much better times.


Every time I see an article about the growth of data usage, I think that data usage in bytes is increasing much, much faster than data usage in terms of end-user utility.


Twitter weighs about a MB, a tweet is at most 280 characters, and typically of negative utility to the user and society. So yes.


I hope that with "for any user" your parent means something like p99 load times which include people on GPRS connections.


I don't know where you live, but in most countries GPRS will not be included in p99. The few cases of mobile usage in p99 will be LTE and the various 3G connections.

This may be different for mobile apps.


I live in a world where most people connecting to my website don't have reliable internet connections, and low-quality mobile is actually pretty standard.


The train Wifi in German trains feels like communicating through a tin-can-telephone with the servers as soon as you leave the city.


When I was doing research on QoE and QoS for page and video loading, I remember the upper threshold of QoE being ~100ms, and every 100ms of delay having a significant impact on sales. 3-4 seconds would have been considered a joke back then.


I'm just going to go out on a limb here and assume you're a product manager for atlassian.


You nailed it. That is probably the most likely correct conclusion. I mean, it almost must be that way. I do not know of any other "service" (more like disservice really), with so many products, which fails so terribly to get to a minimum standard in usability and page speed. Even the login in Atlassian is f'ed up, and you cannot log in with sane ad blocking. I have to create a separate browser profile and start FF with the profile manager each time, so that I can choose, all just to keep the Atlassian infection at bay.


I don't know, I find the MS/live/hotmail/office.com/outlook login thingy quite entertaining (will it hop through more servers than the standard ttl TCP packet value?) ^^


> standard ttl TCP packet value

Nit: TCP doesn't handle TTL, IP does.


To be fair, this is a blight that infects all b2b/enterprise platforms. The devs and PMs don't care about speed because they aren't punished for being slow. They don't value speed as a feature because they don't have to and don't lose customers because of slowness.

These deals are made by people who don't use the software because they passed some checklist of features and security audits, not because they are good.

Consumer websites, blogs, and especially eCommerce websites can't get away with that without losing users almost instantly.

The moral of the story is that if you are already a big, wealthy company with a big reputation, you can afford to be shit. If you aren't, you're making a big mistake with 3-4 sec load times.


I worked with Alexey on this project. It’s pretty straightforward to filter out bots (either before send, or in analysis later). For our traffic, it was mostly commonly known bot user-agents. I’m also pretty sure malicious bots get blocked by Cloudflare before hitting the Cloudflare workers.
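
As an illustration of what filtering commonly known bot user-agents can look like (the pattern below is just a sample, not Opendoor's actual list):

    // Rough user-agent check for well-known crawlers; real lists are longer and
    // are usually complemented by filtering again at analysis time.
    const KNOWN_BOT_UA = /bot|crawler|spider|crawling|headlesschrome|lighthouse/i;

    function looksLikeKnownBot(userAgent: string | null): boolean {
      return userAgent !== null && KNOWN_BOT_UA.test(userAgent);
    }

    // e.g. inside an edge handler, skip the analytics hit for such requests:
    // if (looksLikeKnownBot(request.headers.get("user-agent"))) return;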


When dealing with ads, most "bots" are actually clicks coming from click farms, which are from real devices. I'm not sure what's the best way to filter those though.


Not trying to be provocative here, but I've read on this subject for years, and any time I've seen it measured, it points to your assertion here as being deeply, deeply wrong. I don't have the energy to gather the links, but pretty much everything written by anyone credible concluded that _milliseconds matter_. Don't take my word for it, though.


Wouldn't the B variant show higher session count? If your A/B testing tool doesn't detect imbalances in cohort size I would imagine you have a bigger problem, since it's easy to accidentally measure the A and B groups differently.
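
For what it's worth, one common way A/B tools detect that kind of cohort imbalance is a sample-ratio-mismatch check. A rough sketch for an intended 50/50 split (3.84 is the chi-square critical value for p < 0.05 with one degree of freedom):

    // Flags a suspicious imbalance between two cohorts that should be 50/50.
    function sampleRatioMismatch(nA: number, nB: number): boolean {
      const expected = (nA + nB) / 2;
      const chiSquare =
        (nA - expected) ** 2 / expected + (nB - expected) ** 2 / expected;
      return chiSquare > 3.84; // p < 0.05, 1 degree of freedom
    }

    // Example with made-up counts: a ~4.5% imbalance on ~20k sessions is flagged.
    sampleRatioMismatch(10_000, 10_450); // true -> investigate before trusting results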


An important takeaway is that they rewrote their website from scratch before even having the A/B testing. Probably one of the best arguments I've ever seen against writing bloated website code. We now know that software can hit a point where it's so slow that it produces false metrics about its own slowness. Imagine how many people are still out there who didn't write website B and are thinking, oh, my bounce rate is fine, I don't need to invest money in performance, since the numbers are telling me people don't care. Folks who trust numbers at face value don't always know what they don't know.


author here

> We explicitly only changed the infra which served our landing pages, and kept the content - the HTML/CSS/JS - identical. Once the new infra was shown to work, we would begin to experiment with the website itself.

- http://alexeymk.com/2020/07/14/lies-damn-list-and-front-end-...


Although it's an interesting story, it seems the main issue was that the author never sanitized their analytics in the first place and blindly relied on data that had never been validated. I've seen this as quite a common phenomenon. No piece of software does magic, and they all require certain conditions to work as expected. In client-side JavaScript, these conditions are simply harder to grasp, but they can be studied.


There's a fourth reason not to do this that isn't mentioned in the article: tracking is unethical. Don't do it.


So tracking your bounce-rates/conversion rates to know if your marketing budget is spent wisely is unethical?


If tracking wasn't useful nobody would do it.


That’s like saying if homeopathy wasn’t useful no one would do it. Both homeopathy and ad tech are “useful” to someone, but most benefit accrues to the seller, not the buyer.


If you go one step further and assume that the product is good, then tracking benefits the buyer, because it can help the company survive and therefore keep selling the product to the buyer for longer.


Though this article is not recommending removing tracking, but rather triggering your tracking from the backend or edge, because that way the user can’t stop it from working! I confess that in some ways I prefer tracking to conventionally happen on the frontend because then I can block it.

(Yet even so, backend-triggered tracking is probably less privacy-intrusive than frontend-, even if you’re sending the data to Google or Facebook or whoever.)


The whole idea of "tracking" is to collect data from users to 1: make them identifiable 2: add more data to a collection about them 3: run profiling on that data 4: manipulate them based on assumptions about the profile.

More server side data harvesting will not make that less intrusive.


As a thought experiment: I run a web store, and I sell funny hats. I’m looking to sell more hats.

Is it unethical if I keep track of how many people buy which hats, so I know which kinds of hats sell best?

What about if I keep track of what hats each of my customers buys, so that I can tell which hats I might want to bundle together into hat packs?

Maybe I’m not super technical, so I pay a 3rd party service to analyze my transaction logs and tell me the above answers.

Or I realize some hats aren’t selling, so I check to see if people are going to that page and just not purchasing, or never going to that hat’s page at all.

To say that sites use tracking to “manipulate” users is playing pretty icky word games. Yes. The purpose of all marketing is to “manipulate”. So are opinion columns in newspapers. Job interviews are an employer manipulating a candidate and a candidate manipulating an employer. Manipulation is just a scarier-sounding word for “persuasion”.


We are talking web tech user tracking - this is not about "keeping track how many hats you sold".


I’m selling my hats from an online store. How is that not “web tech user tracking”?

If it makes the analogy more palpable, I’m selling virtual hats for my video game.


You are the one playing a word game, arguing semantics. I added the "user" prefix to clarify, yet you insist on comparing apples and horse-apples. I understand your angle, yet the whole discussion is about horse-apples, so I do not see your target. My guess is "making the latter sound more palatable". Take your small "the well known term used in this thread and source article is shit" victory and be happy, you are not getting more.


I don’t get any value out of whatever weird victory you’re claiming I’m aiming for. I posed the analogy because I’m curious how people here view different kinds of tracking, and at this point I’m not happy, I’m sad that this thread became a weird attack on my intentions rather than an actual discussion.


There are no meaningful "different kinds of tracking" in this context. The word is well known and used. Arguing that there is a theoretical subset of a more general definition of the word, to which the ethical criticism of the actual common market reality the word references in this context may not apply, is semantically possible, but it is just "playing word games".


I think it is possible to do tracking responsibly. I don’t really see a problem with server-side anonymized tracking used to improve the product. (But yes, the blog author also talks about FB and Google pixel tracking, which is very different)


It's against Google's terms of service to collect personally identifiable information, so it's anonymized there, too. It's the same -- you can collect data server-side via the Measurement Protocol using the same hit parameters. Either way, you can be responsible about measurement.


Parker Thomson, aka StartupLJackson, had a great view on this. Would you rather see a ton of ads that are completely irrelevant to you, or see ads that you actually need?

Tracking that makes your life better is useful. If Opendoor knows you are looking for a new house in Arizona, and they can show you just the right house before it's sold to someone else, isn't that fantastic?


The point of ad tracking is not to show ads that you need. It's to show ads that are likely to make you take the actions the advertisers need you to take. If the trackers determine that you have a gambling addiction and serve you ads for gambling sites, they're working as intended, but they're not helping you. If advertisers determine that their overall profits are increased by not showing ads for houses in desirable neighbourhoods to certain demographic groups, that's not good for the people in those groups.


Honestly? I'd rather see more ads that are irrelevant. Ads are already an attempt at manipulation. I would feel better knowing that everyone else is seeing the same shit I'm seeing. That I'm not being pandered to. That there are things that are valuable to others but hold no personal value for me.

That's part of living in a diverse world.

I don't know about you, but I don't want a myopic world for myself. The perfect ad targeting system removes my agency; decides what I want and can afford on my behalf; makes my world significantly smaller.


And I'd rather see more search results that aren't tuned to me. Down with algorithms! Down with personalization!

I'm surprised how much hate there is on HN - the entire future of technology feels like it's heavy on data, personalization, and prediction, but a lot of comments read like a parent or grandparent that was targeted and is loading up their shotgun.

The technology isn't the enemy, some of the use is. And in some cases, we need legislation to catch up, like it has started to for things like loot boxes / gambling in kids' games.


>Would you rather see a ton of ads that are completely irrelevant to you, or see ads that you actually need?

You don't have to decide between targeted personalized ads and their toxic rat's tail, or no ads at all. There's a middle way called contextual ads, which are bound to the topic you're consuming.


That would be a harder choice if ads were better targeted to my goals, but ads based on tracking just aren't that helpful in my experience. They may work to get people to buy things, but that's a separate axis. Ads are not for the consumer's benefit (any benefit for them is incidental). They are designed to benefit the advertiser.

It's not just a choice between relevant and irrelevant ads, it's that plus many more factors. Even if ads were super accurate, I'd prefer irrelevant ads to having my personal habits finely catalogued in order to subconsciously affect my buying or voting choices. And it is subconscious. Anecdotally, I recently noticed I'd started to want a particular type of personal care accessory. I realized the only reason this happened is because I'd been subjected to dozens of ads for this item over several weeks. I have zero need for this item, had zero interest in owning one four weeks ago (even though I already knew the product existed), and nobody would even know the difference if I had one or not!


The problem is not that ads are too relevant. It's the tracking itself.

It is possible to do targeted advertising without surreptitious tracking. I listen to podcasts and often buy through their promo links. I'm happy to tell the advertiser where I heard the ad, because I like the content. And it's all explicit - no spying required.


Just as a point of information, that is not entirely true. Podcasts are tracking you too, but the technology is older and can't do it as well as Facebook.

When you listen to an episode of a podcast, an advertiser can get data from the file download, so at the very least they know your location and your phone profile. Say you download a podcast from a certain location, Brooklyn, using an iPhone X: an advertiser can profile you as upper-middle-class and show you a Peloton ad, while the same podcast downloaded on an old Android phone is going to get an ad for a short-term loan.

The advertisers are hungry for ways to get more data because right now it's still a black box, you throw money in, and maybe someone buys. They would much rather target with precision, than with banner ads.


I'd rather see some irrelevant ads. They feel less intrusive than personalized ads. Just a feeling.


> Would you rather see a ton of ads that are completely irrelevant to you, or see ads that you actually need?

I don't actually need any ads, so this is a false choice.


> Tracking that makes your life better is useful. If Opendoor knows you are looking for a new house in Arizona, and they can show you just the right house before it's sold to someone else, isn't that fantastic?

I bought into this reasoning for some years until I realized that I never saw a useful ad (except on Facebook properties, which is ironic since they are the ones who know least about me as far as I can tell.)


> Tracking that makes your life better is useful. If Opendoor knows you are looking for a new house in Arizona, and they can show you just the right house before it's sold to someone else, isn't that fantastic?

Maybe? When I look at a couple houses to help get comps for a family member and I get inundated with real estate listings for somewhere I don't live and don't want to live for months, that's certainly not fantastic. See also car shopping, or really anything where one sale is valuable enough to target me.

Also fun --- when you look at documentation or rates for a service provider when you're their number 1 customer, and ads for that service provider follow you around for weeks. Yeah, I guess I'm highly likely to buy, cause you put me in your SEC filings as a top customer.


I never saw a personalized ad for a genuinely good offer, or for something I didn't already know about and was interested in. Personalized ads usually show me stuff that I've already purchased. Also, it's like someone pressuring you to keep buying something and telling your brain "remember, you wanted this, you need this".


Can someone explain to me why third-party JS has such a big impact on load time? I thought browsers deprioritize third-party JS and load it after all first-party assets have been downloaded and the site has been rendered. If that's the case, why does third-party JS still have such a big impact on page performance?


You can (usually accidentally) end up with a render-blocking script tag. In this article's example it was client-side Optimizely.

The parsing and execution of third-party javascript is definitely non-trivial if you profile it, especially on lower end devices.

Finally, browser download priority requires async and defer attributes on scripts (usually), or other clever ways of deferring loading.


> You can (usually accidentally) end up with a render-blocking script tag. In this article's example it was client-side Optimizely.

This makes sense for a tool like Optimizely which has to block rendering.

> The parsing and execution of third-party javascript is definitely non-trivial if you profile it, especially on lower end devices.

Given that this is supposed to happen after the page has already loaded, does this matter? If the thing you are optimizing for is page load time, and it takes a few 100ms after the page has already loaded to run the JS, does that actually negatively impact the user experience?

> Finally, browser download priority requires async and defer attributes on scripts (usually), or other clever ways of deferring loading.

When you add third party JS, shouldn't this be taken care of for you? Scripts for adding third party JS should already make use of async and defer as needed. What's a case where you would need to actually think about async and defer when making use of third party JS?


This type of thing usually ends up being “death by a thousand cuts”. async + defer help out, but they do still incur parsing and evaluation (iirc, before document onload event) overhead. If you delay loading until you’re sure the page is interactive, you’ll end up loading third-party pretty late (which impacts metrics, and usually isn’t out of the box supported).

On a standard product-ey site with retargeting ads, user tracking, etc. this third-party slowdown is significant.

All of this is exacerbated on lower-end devices, and non-WiFi Internet.


One thing important to note is for certain use cases of Optimizely like A/B testing, it's actually desirable to block rendering until it's initialized as otherwise you end up with a flash of the default variation that then gets switched to the appropriate one afterwards.

Though after working with a few of these A/B testing vendors, I'm firmly convinced that you'd be better off long term implementing your own A/B testing service on top of some internal admin UI builder like Retool.
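
For a sense of what the core of a home-grown A/B service can look like: deterministic bucketing on a stable id, so the same user always gets the same variant and the choice can be made server-side (avoiding the flash-of-default problem above). The ids and experiment name here are illustrative, not from any real system:

    // Tiny non-cryptographic FNV-1a hash, enough for stable bucketing.
    function fnv1a(input: string): number {
      let hash = 0x811c9dc5;
      for (let i = 0; i < input.length; i++) {
        hash ^= input.charCodeAt(i);
        hash = Math.imul(hash, 0x01000193) >>> 0;
      }
      return hash >>> 0;
    }

    // Deterministically assign a user to a variant; server and client will agree.
    function assignVariant(userId: string, experiment: string, variants: string[]): string {
      return variants[fnv1a(`${experiment}:${userId}`) % variants.length];
    }

    // Example: a stable 50/50 split for a hypothetical "landing-page-v2" test.
    const variant = assignVariant("user-123", "landing-page-v2", ["control", "treatment"]);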


I don’t know how standard this is but to prevent our third party JavaScript from impacting page load times, we defer our tag manager (Tealium) from being loaded until after the page has successfully loaded.
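
A rough sketch of that pattern (the tag URL is a placeholder, not Tealium's actual loader):

    // Inject the tag manager only after the page's own load event has fired, so
    // it never competes with first-party assets for bandwidth or main-thread time.
    window.addEventListener("load", () => {
      const script = document.createElement("script");
      script.src = "https://tags.example.com/tag-manager.js"; // placeholder URL
      script.async = true; // injected scripts are async by default; set for clarity
      document.head.appendChild(script);
    });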


*damned


And the manager peeked in, not peaked in.


Thanks! Updated on both fronts.



