Atlassian Cloud ToS section 3.3(I) prohibits discussing performance issues (atlassian.com)
419 points by dmitriid on Jan 2, 2021 | 293 comments



I'm curious: I'm a Cloud customer and I can tell you that the service is incredibly slow even for a small-scale setup (2 Jira Projects and 3 Confluence Workspaces). There's an insane number of network requests, seemingly for mouse tracking. By telling everyone here that Atlassian Cloud products are insufferably slow, am I violating the ToS?

I was actually thinking about doing a write-up on the issues I've had, but this makes me think I should do so AFTER I find someplace else to go. Right now GitHub is the likely destination, but I would love to hear other suggestions.


JetBrains recently announced Space and it looks pretty cool (and fast): https://www.jetbrains.com/space/


Currently cloud-only, but an on-premises version has been announced. So while Atlassian goes cloud-only, JetBrains is going in the other direction.


I'm a heavy JetBrains user but I've found their server side offerings to be lacking. I haven't been a fan of YouTrack in the past but maybe it's time to give them another try.


Looks interesting, like a JetBrains Azure DevOps/VSTS.


Yes, they are going for an “all in one” including project and team org functions, source control, etc. I’ve been using it for pet projects since the beta.


"looks cool and fast" - DMC DeLorean ?


Sorry to hear it's been a frustrating experience. I'm a PM for Confluence Cloud and we're always trying to make it better. Would you be willing to share more specifics, such as:

- Pages with content X are the slowest

- Trying to do A/B/C is annoyingly slow

- etc.?

(edit: looks like HN is severely limiting my reply rate so apologies for delays)

We're trying to focus on frustrating pages/experiences rather than number of network calls and such, because while the latter is correlated, the former (frustrating slow experiences) is really the bar and the higher priority.

In terms of the ToS I'm not from legal so can't say (still looking into it), but have definitely had conversations with users on public forums about performance issues, and afaik no one has been accused of violating their ToS.

(edit: since I can't reply due to HN limits I'll try to add some stuff in this edit)

------- @plorkyeran "target things that are easier to fix than those with highest impact" -> this is a good point and something we're trying to do. Engineers know the former (easier to fix) pretty readily, but identifying "highest impact" requires some work, so I'm (as a PM) always trying to find out. It's of course some combination of these two (low hanging fruit, high impact items) that forms the priority list.

------ @igetspam (moved followup into a reply to trigger notification)

------ @core-questions "perf to take over company for 6mo-1yr" I'm not in a position to make decisions at that level, but I can certainly pass the feedback up the chain. The perf team is trying their best though, so any info anyone can provide helps us apply our resources in the right place.


I did a test for you just now. I have 100Mbps internet, 32GB RAM, a 4GHz i7 processor and suchlike. To make it easy for Jira, I'm doing this on a weekend, late at night, during the new year's holiday, so the servers shouldn't be busy.

On a cloud-based classic software project (which has less than 200 issues), opening a link to an issue takes 4.8 seconds for the page to complete rendering and the progress bar at the top of the screen to disappear.

Opening a Kanban board with 11 issues displayed? 4.2 seconds for the page to load.

Click an issue on the board? 2.5 seconds for the details to pop up.

Close that task details modal - literally just closing a window? 200 milliseconds. Not to load a page - just to close a modal!

In case I'm being hard on cloud Jira by insisting on using a classic project, I also checked with a 'Next-gen software project' with less than 2000 issues.

I click a link to view a particular comment on an issue. 4.8 seconds until the issue, comment and buttons have all loaded.

I choose to view a board? 9.9 seconds from entering the URL to the page load completing.

I'm viewing the board and I want to view a single issue's page. I click the issue and the details modal pops up - and just as I click on the link to the details, the link moves because the epic details have loaded, and been put to the left of the link I was going for, causing me to click the wrong thing. So this slow loading is a nontrivial usability problem.

View a single issue, then click the projects dropdown menu. The time, to display a drop-down menu with three items? 200 milliseconds.

This is what people mean when they say the performance problems are everywhere - viewing issues, viewing boards, viewing comments, opening dropdowns, closing modals? It's all slow.

And if you imagine a backlog grooming meeting that involves a lot of switching back and forth between pages and updating tickets? You get to wait through a great many of these several-second pageloads.
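
If you want to reproduce numbers like these yourself, here's a rough sketch using the browser's Navigation Timing API (paste into the devtools console; note this only captures the load event, so for SPA-style apps like Jira that keep loading afterwards it will understate the real wait):

    // Time from navigation start to the load event (0 if load hasn't fired yet).
    const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];
    console.log(`load event: ${nav.loadEventEnd.toFixed(0)} ms`);

    // Crude stopwatch for interactions like opening/closing a modal:
    function timeInteraction(label: string): () => void {
      const start = performance.now();
      return () => console.log(`${label}: ${(performance.now() - start).toFixed(0)} ms`);
    }
    // const done = timeInteraction("open issue"); ...click, wait for render... done();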


See, the irony of this is that you are just publicly sharing performance numbers which undeniably show a pattern of performance issues. It also doesn't seem to have been possible without first accepting the ToS.

Ooops!


What are they going to do? Shut down your instance and force you to switch to a different product...? Hmmmm


Certainly there are providers who would immediately begin license renegotiation with the threat of termination. It's bad business in the modern era, because somebody will just tweet out the renegotiation terms and the licensors don't want to be streisanded.


> Certainly there are providers who would immediately begin license renegotiation with the threat of termination

Oracle comes to mind.


Who says this is an "issue"? It's just numbers. If you think it's an issue, that's your interpretation. For instance, I used Jira to communicate with my team about 3 projects and it only took me 3 hours.

Maybe this person is writing a fiction story where the protagonist is using Jira and they are detailing how they spend their day.

It's like a John Steinbeck novel.


> Who says this is an "issue"? It's just numbers. If you think it's an issue, that's your interpretation.

No, this is a quote from the comment:

"This is what people mean when they say the performance problems are everywhere - viewing issues, viewing boards, viewing comments, opening dropdowns, closing modals? It's all slow."


May I point you to the title of this submission?

"Atlassian Cloud ToS section 3.3(I) prohibits discussing performance issues"

No sane judge would agree with your interpretation.


The actual text says that you can't "publicly disseminate information regarding the performance of the Cloud Products". So no interpretation required; posting the stats is enough.


No sane judge would accept that that is a valid clause in a ToS.


Are you allowed to say “use another app”? Or no?


Hi michaelt,

Thank you for the numbers -> I agree these are slow, and I can guarantee you that the Jira team is working on it (though I can't talk about details). These numbers are definitely outside of the goals.

I appreciate the call out of "page to complete rendering and the progress bar at the top of the screen to disappear" and "until the issue, comment and buttons have all loaded". In a dream world of course, everything would load in < 1s (everything drawn, everything interactive), but working our way down to that will take time.

We're currently looking at each use case to understand the '(a) paint faster vs (b) interactive faster' tradeoff and trying to decide in which cases the user has a better experience with (a) or (b). In Confluence this is clearer in some places than in others, but in Jira it's less clear I think (I work on Confluence, I probably shouldn't speak for Jira specifics).

It always comes down to a limitation of resources though, which is why we're always hoping to get feedback that's as specific as possible.


> In a dream world of course, everything would load in < 1s

It's important you understand that "everything loading in <1s" would still be unacceptably slow - that is still an order of magnitude too slow.

That is not "a dream world" - not even close. A well built tool like this, meeting standard expectations (i.e. table stakes), would hit <50ms for the end user - the vast majority of the time. A "dream world" would be more like 10ms.

You should be targeting <200ms for 99% of user-facing interactions. That is the baseline standard/minimum expected.

This is why people are saying the company needs to make a major shift on this - you're not just out of the ballpark of table stakes here, you're barely in the same county!

It cannot be overstated how far off the mark you are here. There's a fundamental missetting of expectations and understanding of what is acceptable.


Do you have evidence that what you're asking for is possible? I'd be interested to see websites that hit the benchmark that you're aiming for.

I just tested an HN profile page (famously one of the lightest-weight non-static websites) and it takes between 300ms and 600ms to load. I'm not saying that Jira can't improve, but if HN isn't hitting 250ms, then I think telling the Jira guys that <200ms is the minimum standard is unrealistic.


Look at GitHub pull requests. A PR page loads in under 200ms for me. And it's vastly more complex than HN, both in terms of queries and UI; the content should be equivalent to what Jira needs.

Jira is also much more interactive than HN. You are sitting 10+ people in a room with some half-asleep scrum master opening the wrong issue, having to go back and open the correct one, searching again for some related issue you thought was fixed last month, refreshing the board to make sure you didn't forget to fill in one field so it ends up in the wrong column, etc. etc.

1 sec per click in a situation like this is a joke, and that's just their goal. Reality is 4s+ as OP mentioned, often even more.


200ms for user interactions is different to a 200ms page load.

A 200ms page load is incredibly fast.

Still, I tested your profile page on Google PageSpeed and it came out at a 300ms load time.

https://developers.google.com/speed/pagespeed/insights/?url=...


Assuming the FE resources are already cached on the user's machine, with careful optimisation, doing all of the rendering/fetching on the FE over a single connection, and with everything parallelised, it definitely is possible to load a new page well under 100ms with the key content being displayed.

When taking that kind of approach, you don't have to wait for the slowest thing to come in - eg with a normal BE render, you might need to pull up the user's profile and settings, A/B testing flags, the current footer config or whatever.

eg if you're on the page for viewing a single ticket, you can request the ticket data immediately, and render it as soon as it's available - even if other parts of the page aren't finished yet. True it may be more like 200-300ms to have the entire thing be 100% complete, but all parts of the page are not of equal importance and holding up the main content while loading the rest isn't necessary.

If you are doing a full BE render, it's still totally possible to hit that 100ms mark, but indeed dramatically more difficult.
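
A minimal sketch of that approach (the endpoints and render functions are hypothetical, the pattern is what matters): fire every request up front and paint each fragment as its own data arrives, so the primary content never waits on the slowest call.

    // Hypothetical endpoints and render stubs.
    declare function renderTicketBody(ticket: unknown): void;
    declare function renderComments(comments: unknown): void;
    declare function renderSidebar(config: unknown): void;

    function loadTicketPage(ticketId: string): void {
      // All requests start immediately, in parallel, over one connection.
      const ticket = fetch(`/api/tickets/${ticketId}`).then(r => r.json());
      const comments = fetch(`/api/tickets/${ticketId}/comments`).then(r => r.json());
      const sidebar = fetch(`/api/me/sidebar-config`).then(r => r.json());

      // No Promise.all: each fragment paints the moment its data resolves,
      // so slow secondary calls can't hold up the key content.
      ticket.then(renderTicketBody);
      comments.then(renderComments);
      sidebar.then(renderSidebar);
    }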


Hi BillinghamJ,

You're right, I apologize for not being clear. We're targeting 1s for "Initial loads" on new tabs/new navigation, which I assume you're referring to. Our target for 'transitions' is different.

If however the numbers you're referring to are "initial load" numbers, then I'm not sure.

(edit: and action responses again are also a separate category. Our largest number of complaints are about 'page load times' in Confluence, so most conversations center around that)


Initial loads should definitely be <100ms as well.

But Jira currently is so slow that 1s would be a great improvement. I am using it at work and regret it, unfortunately.


As a first step, 1s would be better than nothing for sure, but you need to be working towards a much tighter goal on a 1-2 year timeframe.

New load, you should really be hitting 200ms as your 95th percentile - 300ms or so would be decent still. "Transitions" should hit 100ms 95th, 150ms would be decent.

If you did hit 100ms across the board, you'd be rewarded by your customers/users psychologically considering the interactions as being effectively instantaneous. So it really is worth setting a super high bar as your target here (esp given you need a bit of breathing room for future changes too).


Thank you for coming back and clarifying. Do you happen to have links to any public testing results of other tools, or guidance at this level of specificity? Would love to use them to build a case internally.

Most of what we've seen online is nowhere near this level of detail (X ms for Y-%ile for Z type of load)

(edit: clarified request)


I'm afraid I'm no expert on project management tools!

On what users experience as effectively "instantaneous", that's from experience on UX engineering and industry standards - https://www.nngroup.com/articles/response-times-3-important-...

On the other noted times, they're just a general range of what can be expected from a reasonably well-built tool of this nature. Obviously much simpler systems should be drastically faster, but project management tools do tend to be processing quite a bit of data and so do involve _some_ amount of inherent "weight", but that isn't an excuse for very poor perf.

That said, I imagine if your PMs do some research and go ahead and try using some of the common project management tools, you should get a good idea. ;) Keep in mind speeds to Australia (assuming Atlassian is operated mostly there?) will likely show them in a much worse light than typical perf experienced in the US/UK/EU areas.

The time to first load is derived from the fact that you're running essentially the equivalent of many "transition" type interactions, but they should be run almost entirely in parallel, so roughly 2x between "transition" and "new load" is a reasonable allowance.


Thanks for the link! Yes this is the general guidance we're using too (0.1/1/10s), and one that we're reinforcing at every level of the company. This link does have more detail than I've seen in other places though, so it's an interesting read.

However, I've not seen guidance on whether these should be P90, P95, or P99 measures, for example. We've selected something internally, but obviously the choice amongst those three 'measurement points' could drastically change the general user's experience.

(HN is throttling my replies so apologies for delay)


The percentiles are a bit of a combination.

A big part is simply how far you are in your journey of getting good at performance - if your p50 is still garbage, there's not much point in focussing on your p99 measurements. You should be targeting the p99 long term, but focus on the p50/p90 for now.

It's super important to target and make long term decisions around the p99 though, because, e.g., making a 100x improvement is not possible through little iterative changes over 2-3 years. You need a base to work from where that 100x is fundamentally achievable, which requires thinking from first principles and slightly getting out of the typical product mindset.

I also find the typical product mindset tends to result in focussing a lot on the "this quarter/next quarter" goals, but neglecting the "8/12 quarters from now" as a result.

Beyond short term/long term goals, the choice is largely just down to what the product is/does. Even ignoring all current architectural choices, there are some fundamentals where certain things must always be faster/slower - e.g. sync writes will typically be a fair bit slower than reads, and typically occur much less often, complex dynamic queries which can't be pre-optimised require DB scanning but are much less common.

For these kinds of tools, where most of the interaction is reads, mostly on predefined or predefined + small extra filtering, and reading/writing on individual resources (ie tickets), you can get p99 numbers trending towards the 100ms mark eventually - there's very little which truly can't get to that level with clever enough engineering.
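
For anyone following along, the pXX figures in this discussion are just order statistics over your latency samples - a minimal nearest-rank sketch:

    // Nearest-rank percentile: p=99 returns the value 99% of samples sit at or below.
    function percentile(samples: number[], p: number): number {
      const sorted = [...samples].sort((a, b) => a - b);
      const rank = Math.ceil((p / 100) * sorted.length);
      return sorted[Math.max(0, rank - 1)];
    }

    // e.g. percentile(pageLoadsMs, 50) vs percentile(pageLoadsMs, 99)
    // can easily differ by 5-10x on a poorly controlled system.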

---

Of course I imagine Google tends to be looking more at their p99.9/p99.99/pmax/etc(!), at least for their absolute highest volume systems.

None of us are going to be getting to that point, but it's often worth thinking about engineering principles against a super high bar - it often helps people to open their minds a bit more and think more outside the box when given a really dramatic goal magnitudes beyond their existing mindset.

Of course you're not expecting to really get to that level, but anchoring that way can achieve amazing things. I've done that with a lot of success at my company, and we actually did manage to achieve a few things originally thought to be totally unrealistic.


Not to be a jerk, but you guys don’t allow others to take your performance metrics, but you’re publicly soliciting performance data from other products at the same time? I’m assuming you’re taking it for granted they don’t have a ToS that bans you from doing this.

Sorry if that’s pointed, but it’s sort of meant to be incredulous (but hopefully not offensive).


Not offended - as an employee I have no specific insight into the actual headline term in the ToS - honestly, I'm planning on tracking down someone in legal to help clarify this, since it seems like, as currently written (and as interpreted in the worst case), it unnecessarily impedes me from doing my job.

I would never encourage anyone to violate the ToS of another product, and I apologize to anyone who was considering doing so due to my ask.

I think these are other possibilities:

1) (as stated) other products don't have such ToS

2) other products may have published their own metrics and made them available for consumption

3) from a more in-depth legal standpoint, maybe other companies have such ToS terms but have clarified them to the point that it's clear when they apply and when they don't


Sorry you have to work on this thread on vacation dude (or girl). This thread has been an absolute beat down and you’ve treated it with utmost professionalism when it’s pretty clear you’re new to the team. It is Saturday night after all.


I appreciate the well wishes, honestly it means a lot - good guess too, I am in the US (maybe you knew, but most of Confluence Cloud is based out of the West Coast offices).

I don't know what the overlap is between Atlassian users and IT admins though - my previous job was on the vSphere UI and if you happen to know about the death of the Flash based client, this is not too far off.

Hopefully users stay willing to engage with us so we can improve the product as fast as possible.


Do you guys use synthetic monitoring tools?


I don't think I know exactly what that means - I know we have synthetic traffic generation tools (and thus measurements generated from the synthetic traffic), but I think those exhibit the same variance as production -> the backend for them is the same cloud IaaS systems and SW, so there's no 'sandboxed from all outside variance'.

If it means something else then I'm not aware if we do it or not.
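

To clarify what I meant: scripted probes that load a few key pages on a fixed schedule from fixed locations and record the timings, separate from real-user metrics - it gives you a low-variance trend line. A rough Node sketch, with placeholder URLs:

    // Probe key pages every minute; this measures time-to-last-byte, not render time.
    const pages = ["https://example.atlassian.net/wiki/home"]; // placeholders

    async function probe(url: string): Promise<void> {
      const start = Date.now();
      const res = await fetch(url);
      await res.text(); // drain the body so timing covers the full response
      console.log(`${url} -> ${res.status} in ${Date.now() - start} ms`);
    }

    setInterval(() => { for (const p of pages) void probe(p); }, 60_000);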


No, don't give in to this guy... this is done over the net. The rate of transfer has to be taken into account. "Unacceptable" is a measure of comparison.

Unacceptable to whom? Do you have a faster provider, for cheaper, with as many features?

I'm pretty sure he doesn't, because if he could he would go there. There are tradeoffs, and Atlassian has many projects they are working on. They understand that there is room for improvement in performance. It's one of Atlassian's priorities; it is a tech company (a pretty good one, I would say).

I guess one question is about server redundancy. Where is this guy loading from, and where is the server he is loading from? Getting things below 1s is nearing the speed of the connection itself. Also, at that speed there are diminishing returns. Something that happens at 1s vs 0.5s doesn't make you twice as fast when you don't even have the response time to move your mouse and click on the next item in 0.5s.

Sometimes techies just love to argue. You are doing great, Atlassian, and you have tons of features. But maybe it is time to revisit and refactor some of your older tools.


You've shown poor understanding here.

> Getting things below 1s is nearing the speed of the connection itself

That is absolutely false. Internet latency is actually very low - even e.g. Paris to NZ is only about 270ms RTT, and you _do not_ need multiple full round trips to the application server for an encrypted connection - on the modern internet, connections are held open, and initial TLS termination is done at local PoPs.

Services like this - as they are sharded by customer tenancy - are usually located at least in the same general area as the customer (e.g. within North America, Western Europe, APAC, etc).

For most users of things like Atlassian products, that typically results in a base networking latency of <30ms, often even <10ms in good conditions.

Really well engineered products can even operate in multiple regions at once - offering that sort of latency globally.

> I'm pretty sure he doesn't, because if he could he would go there

Yeah, we don't use any Atlassian products - partly for this reason. We use many Atlassian-comparable tools which have the featureset we want and which are drastically faster.

> when you don't even have the response time to move your mouse and click on the next item in 0.5s.

There is clear documented understanding of how UX is affected by various levels of latency - https://www.nngroup.com/articles/response-times-3-important-...

> Sometimes techies just love to argue

Not really, I have no particular investment in this - I don't use any Atlassian product, nor do I plan to even if they make massive perf improvements.

But I do have an objective grasp - for tools like this - of what's possible, what good looks like, and what user expectations look like.

> No, don't give in to this guy

I don't expect Atlassian is going to make any major decisions entirely based on my feedback here, but it is useful data/input for exploration, and I do feel it's right to point out that they're looking in the wrong ballpark when it comes to the scale of improvement needed.


To put things in perspective, the typical Jira 5-second page load time as reported by many people in this forum is equivalent to twice the round-trip time for light to the Moon!

It's the network latency equivalent of a million kilometres of fibre!
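
For anyone checking the arithmetic:

    Moon at ~384,400 km: RTT = 2 x 384,400 km / 299,792 km/s ≈ 2.6 s, so a 5 s load ≈ 2 lunar RTTs.
    Light in fibre travels at ~200,000 km/s, so 5 s x 200,000 km/s = 1,000,000 km of fibre.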


The internet is fast. Computers are fast. One second is enough time for my machine to download 10M data points and render them into an interactive plot.

https://leeoniya.github.io/uPlot/bench/uPlot-10M.html

In my mind, anyone doing UI development and seeing user interactions taking over 1 second should be asking themselves "did the user just try to operate on more than 10^6 of something?" and if the answer is no, start operating under the assumption that they've made a mistake.


> A "dream world" would be more like 10ms.

A gateway alone adds 50 ms. So I'm not really sure where you get your numbers/benchmarks from... They are unrealistic.


What gateways have you been using?! That's a long, long way off on the modern internet. Assuming you mean gateways as in the lines you'd see on a traceroute, more typical might be ~2-5ms on a home router, ~0.5-1.0ms upstream.


Lol, wasn't expecting that :p

Ocelot would be a better example of a gateway: https://github.com/ThreeMammals/Ocelot

Used for scaling up web traffic or creating BFFs (backends for frontends).


Ah nice, I didn't realise you meant application proxies/gateways. Network ones are so quick due to their ASICs etc!

I personally would still say 50ms is super, super slow for an application gateway - a well designed one using e.g. nginx/openresty, lambda@edge, or simply writing another application server etc can easily do that job with an addition of <0.1ms processing time (assuming no additional network calls or heavy work), and maybe 0.3ms for additional connection establishment if it hasn't been optimised to use persistent connections.

If it is e.g. making a DB request to check auth, I would highlight that this _is_ backend processing time, not inherent or unoptimisable overhead. e.g. it's totally feasible to do auth checks without making any async calls, just need a bit of crypto and to allocate some memory for tracking revoked tokens - does add a bit of complexity, but likely worth it for the super hot path.

BFFs would not really need to add anything beyond ~1ms or so, but you do hit the lowest common denominator - in that you have to wait for the slowest thing to complete, even if everything is happening in parallel.

BFFs definitely benefit in simplifying client-side code, but at the cost of increased overall latency, and potentially of the resilience which could be achieved by decoupling unrelated components.

As such, I wouldn't expect the Atlassian products to use BFF patterns - for them it's better to throw 1k requests down a single HTTP 2/3 connection and render each part of the page when it's available. I have heard their FEs are very complex, which I think would probably support that assessment.
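
To make the "a hop doesn't need to cost much" point concrete, here's a toy Node pass-through proxy (assumes some upstream on port 8081; the logged time includes the upstream itself, so the hop's own overhead is the delta versus hitting 8081 directly):

    import http from "node:http";

    http.createServer((req, res) => {
      const start = process.hrtime.bigint();
      const upstream = http.request(
        { host: "127.0.0.1", port: 8081, path: req.url, method: req.method, headers: req.headers },
        (upRes) => {
          res.writeHead(upRes.statusCode ?? 502, upRes.headers);
          upRes.pipe(res);
        }
      );
      req.pipe(upstream);
      res.on("finish", () => {
        const ms = Number(process.hrtime.bigint() - start) / 1e6;
        console.log(`${req.method} ${req.url}: ${ms.toFixed(2)} ms end-to-end`);
      });
    }).listen(8080);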


Ocelot - localhost => 40-50 ms on a workstation.

Gateways can add a lot of functionality. Even Graphql can be used as a gateway.

It's not all "dumb forwarding", and I would be very surprised if you found any sub-ms benchmarks.

Amazon has a one-million-dollar award if you get the page to load under 10 ms. So that's what you are expecting by default from a SaaS in your previous comment.

It's still unrealistic.


That just says that Ocelot consumes quite a bit of your latency budget. Maybe the features it brings are worth it to you, but it's def not anywhere close to the limit of what's achievable.

e.g. Envoy (which replaced Ocelot in Microsoft's .net microservice reference architecture) has a significantly lower latency cost (1)

Your reference to an Amazon reward is interesting as it's quite easy to get pages to load under 10ms in the right conditions. Perhaps you can provide a link to more information?

1. https://docs.microsoft.com/en-us/dotnet/architecture/microse...


It's from someone who worked at the parent company I work for (Montreal), and they had that reference from someone working at Amazon.


At the end of the day, all that middleware type stuff is part of your backend - it is not inherent overhead.

If you want to really focus on performance, you can choose not to use anything like that off the shelf and do it all in a fraction of a millisecond. It actually isn't difficult - you just need to not get stuck into a dependency on something heavy.

For my company's backend, our entire middleware stack incl auth checks is around the 1-2ms level including hitting a DB server to check for token revocation. That's all there is between the end user and our application code, plus network latency. We didn't do anything particularly clever or special. But we didn't use any frameworks or heavy magic products - just Go's net/http, the chi router and Lambda@Edge.


I hate to pile on a thread where you're already taking a lot of flak, but this point is really important to the future of Atlassian:

> In a dream world of course, everything would load in < 1s (everything drawn, everything interactive),

As a contractor, I have more or less walked out of or refused interviews on discovering Atlassian toolset was in use. It's not because I hate your tooling (it is visually nice and very featureful), it's because the culture that delivered this software is antithetical to anything I look for in a software project I want to use or contribute to. How can I possibly do my job to any degree of satisfaction when I'm tracking work in a tool that requires 15 seconds between mouse clicks? That is the reality of Jira, and as a result I refuse to use it, or work for people who find that acceptable, because it's a "broken window" that tells me much more about the target environment than merely something about suboptimal bug trackers.

Your page budget should be 100ms max, given all your tools actually do are track a couple of text fields in a pleasing style. Whoever the architecture astronauts are at Atlassian that created the current mess, flush them out, no seat is too senior -- this is an existential issue for your business.


Hmmm. I mean. I'm a contractor too, and I share your pain, but ... I'm really impressed you walk out of paid work because of the issue tracker your client uses.. It sounds a bit like they dodged a bigger bullet than you did tbh mate.

All these systems suck. You learn to live with them, for me I do this:

Everything goes in OmniFocus. I have a keyboard shortcut to create a task that takes <1 sec; hit enter twice and it's stored. Twice a day I go through all the tasks I entered this way, and I either mark them done or assign them to various projects/tags/labels I have set up in OmniFocus.

15 mins before I finish work for the day at a client, I update whatever ticket system they use (mostly Jira, but also sometimes even worse things like servicenow) and also whatever enterprise crapware my agency uses (usually some sap based bollocks).

The last 15 mins suck. But it's part of the deal. I can't imagine how strongly you feel to turn down contractor rates due to a ticket system.. I mean, come on?

Edit: Also - btw -- if you're on a Mac the app-store 'fat-app' version of Jira is about 10x better than using the web interface, I suggest you give it a try.


If you turned up to interview, or even worse, arrive at a client site, and they hand you a mouldy 80386 to work from, and point you to the basement, would you feel comfortable?

Jira is the mouldy 80386, and the client's culture is that basement where such things belong. I can't see how this is even being precious. I can find solid work on good teams with smart people anywhere, there is no reason I need to work in a basement permanently damaging my lungs.

Lame analogy, I know, but it's close enough.


> I'm really impressed you walk out of paid work because of the issue tracker your client uses.. It sounds a bit like they dodged a bigger bullet than you did tbh mate.

> I can't imagine how strongly you feel to turn down contractor rates due to a ticket system.. I mean, come on?

It may be the case that they are in such high demand they have practically free choice of work. That's how I interpreted it, at least.


Your tooling seems... impressive? Assuming you had 2 projects that paid the same, why in the WORLD would you eat 1 to 1.5 hours of that a week? Seems soul-crushing and demotivating, but props to you for not being a fair-weather sailor and just getting it done. Actually kinda cool how resourceful your solution is.


> Your page budget should be 100ms max, given all your tools actually do are track a couple of text fields in a pleasing style

Yeah, although it doesn't exactly help in figuring out how to resolve it, I think this can be a good grounding in what the product fundamentals actually are, and in figuring out which over-engineering of those fundamentals is translating into speed problems.

I often feel that product people view this type of problem in the wrong way - when you're starting at 5-10s, little incremental A/B tested tweaks are not going to get you down to 50-100ms. A 100x diff requires you to rethink from first principles - it's impossible to get there otherwise

Of course this is also why incumbents get disrupted by startups!


Hopefully this demonstrates that the anti-performance-discussion ToS clause is harmful not only to your customers but to you as well. You're getting useful information here only because some people are willing to openly violate it.


Not to mention the reputational damage from people asking "why the hell is this in the contract in the first place?"

It says they're so afraid of the quality of their product that they'd rather litigate against their customers than fix it.


I wonder if these should be called "Streisand clauses", because it seems that the net effect will be for people to increasingly associate Atlassian with badly performing software.

Certainly if someone asked me what I know about Atlassian, this would now be one of the first things that come to mind.


At the margin, someone reading this thread is much more likely to hop off Atlassian and short the stock than they are to become a new user with one of those shiny high net promoter scores and “land and expand” wallet shares they brag about in their investor relations materials.


> In a dream world of course, everything would load in < 1s (everything drawn, everything interactive), but working our way down to that will take time.

FWIW our on-prem install takes <1s for opening issues, running a search, etc. Too bad that's a dream world you've decided should no longer be...


I’m relatively sure it’s the lack of several host-to-host hops in the network requests that makes an on-prem install so much faster. The way Atlassian’s hosted services handle requests is mind-bogglingly awful and necessitates several round trips per request a lot of the time. It’s just poor architecture on their end.

We’re talking 3-5 redirects for some things they could just proxy on their backend. It’s dumb and there’s no amount of hardware or bandwidth a client can throw at the problem to fix it.


This should be quite glaring in any performance metrics collected though, shouldn't it?

I mean, I don't do web stuff (yet) but I can't imagine it's that difficult to figure out where several seconds get spent.


It’s possible that Atlassian work culture requires getting permission to grant permission to a subordinate to grant permission to their subordinate to do some work, contingent on a report of the quantifiable metrics that will be reported periodically to be compiled into other periodical reports that no one will care enough to read.

I’m only sort of joking here, since it can be weirdly difficult to actually just do the job you’re being paid to do in some orgs. I once got paid handsomely to deliver almost nothing for six months since all the layers above me were busy either talking back and forth or not even caring at all. It bored me so much that I had to leave, but the money was great.

There weren’t even any disappointed customers because they just allocated budget and forgot about the project. It wasn’t their own money they were spending, after all.

Being an engineer is weird sometimes.


"'(a) paint faster vs (b) interactive faster'"

It is only a tradeoff if you're at the Pareto optimality frontier [1] for those two things.

I seriously doubt that you are. You should absolutely be able to have more of both.

I would recommend to you personally two things: Open the debugger, and load a page with an issue on it in any environment. Look at the timeline of incoming resources, not just for how long the total takes but also all the other times. You will learn a lot if you haven't done this yet. It will be much more informative than anything we can tell you.

Second, once an issue is loaded, right click on almost anything in the page (description, title, whatever) and select "Inspect Element". Look at how many layers deep you are in the HTML.

I also find it useful to Save the Web Page (Complete) once it's all done rendering, then load it from disk with the network tab loaded in the debugger. It can give a quick & dirty read on how much time it takes just to render the page, separate from all network and server-side code issues.

I have a bit of a pet theory that a lot of modern slowdown on the web is simply how much of the web is literally dozens and dozens of DOM layers deep in containers that are all technically resizeable (even though it is always going to have one element in it, or could be fixed in some other simple way), so the browser layout engine is stressed to the limit because of all the nested O(n) & O(n log n) stuff going on. (It must not be a true O(n^2) because our pages would never load at all, but even the well-optimized browser engines can just be drowned in nodes.) I don't have enough front-end experience to be sure, but both times I took a static snapshot of a page for some local UI I had access to that was straight-up 2+ seconds to render from disk, I was able to go in and just start slicing away at the tags to get a page that was virtually identical (not quite, but close enough) that rendered in a small fraction of a second, just with HTML changes.
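
If anyone wants to sanity-check that theory on a given page, a quick devtools-console sketch that counts elements and the deepest nesting:

    // Total element count and maximum nesting depth of the current page.
    const elements = document.querySelectorAll("*");
    let maxDepth = 0;
    for (const el of elements) {
      let depth = 0;
      for (let n: Element | null = el; n; n = n.parentElement) depth++;
      maxDepth = Math.max(maxDepth, depth);
    }
    console.log(`${elements.length} elements, max depth ${maxDepth}`);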

My guess is that fixing the network issues will be a nightmare, because the 5 Whys analysis probably lands you at Conway's Law around #4 or #5. But, assuming you also have a client-side rendering issue (I don't use JIRA Cloud (yet) but I can vouch that the server product does), you may be able to get some traction just by peeling away an engineer to take a snapshot of the page and see what it takes to produce a page that looks (nearly) identical but renders more quickly. That will not itself be a "solution" but it'll probably provide a lot of insight.

[1]: https://news.ycombinator.com/item?id=22889975


I guess I meant that in a more general sense: prioritization is always about tradeoffs, and sometimes you're improving one (paint faster) or improving the other (interactive faster), sometimes both, sometimes trading off one versus the other.

We have looked into the network issues and some of it is similar to what you stated, we do have a known minimum given our chosen cloud infrastructure (separate from our software performance) - we obviously recognize we're not at that limit yet either though.

I have not tried what you mentioned above (load from disk), but I will give it a shot -> it may also give us a clue on how to make our performance testing lower variance, come to think of it...

(apologies for slow replies, HN is still throttling me due to downvotes)


That tallies with the experience I had using Jira cloud a few years ago. It sounds like it's still a great case study in how not to architect an issue tracker.


Literally everything? I don't think I could give an example of something which isn't frustratingly slow in Jira. It doesn't need targeted fixes to specific things; if I successfully made a list of the ten biggest offenders and they were all magically fixed tomorrow I don't think it'd appreciably change the experience of using Jira because the next 90 would still be awful.

When faced with long-tail performance problems, it's often better to target the things which are easier to fix rather than the highest impact fixes. Making 20 relatively low-impact things faster can easily be better than improving 10 individually high impact things.


It seriously makes you wonder whether they even use it internally, because not acknowledging or fixing those issues while pretending you have a fast system doesn't make sense.


They use the on-premises version, which is much faster: https://jira.atlassian.com/secure/Dashboard.jspa


If that's true, the fact that they aren't dogfooding their own product makes me 100% confident they will fail.

I'm actually going to look into shorting Atlassian now.


What the guy above said is not true. The Jira Cloud team uses Jira Cloud to manage their projects.

https://www.youtube.com/watch?v=hY91A_4Mbts


Both are right:

a) the public-facing jira.atlassian.com is a Server/DC instance - but this instance is only used for customer/outside-world-facing tickets (I think)

b) internally, for our own development, we use a few (several?) cloud instances -> but I can only speak to those I interact with (Conf Cloud and Jira Cloud primarily).


I used to work on Jira, and it frustrates me seeing people say we don't dogfood our products when internally we do everything on a staging instance. jira.atlassian.com is not something developers use; it's a public-facing instance.


Yep, and from a demo of YouTrack (from JetBrains), I got the opposite impression: it’s streamlined just the way a developer would want, keyboard shortcuts and all.


YouTrack still sometimes comes up with some very weird shortcuts :) https://youtrack.jetbrains.com/issue/JT-19706


> Trying to do A/B/C is annoyingly slow - etc ? [...] We're trying to focus on frustrating pages/experiences rather than number of network calls

It's not really a problem with a certain page or a certain action: it's a systemic issue that can only be solved with a systemic change.

This has come up before here on HN [1]. From my point of view, ignoring the issue around the number of calls/performance, and all feedback regarding it, is the root cause of the slowness.

[1] https://news.ycombinator.com/item?id=24818907


Hi ratww,

Thank you for reiterating this point, and I'll try to shed some light on this. We actually are working on systemic changes to try to make this lighter/better but I can't talk about specifics until the feature is available.

On the other hand, any level of specificity is great, for example:

1) full page loads are slower and more annoying than transitions (or vice versa)

2) loading the Home page is slower and more annoying than Search Results (or vice versa)

3) waiting for the editor to load is more annoying than X/Y/Z

4) etc.

Even systemic changes require individual work to apply them to these different views, so any level of specific feedback would be helpful.

(also it looks like HN is limiting my reply rate so apologies for any slowness)


Dear Esteemed Colleague at Atlassian,

I also use Confluence and JIRA regularly, and can confirm that they are the slowest, most terrible software that I use on a regular basis. Every single page load and transition is slow and terrible.

Asking "which one is the highest priority" is like asking which body part I'd least prefer you amputate. The answer is: please don't amputate any of them.

It's as if I asked you to dig out a hole for pouring the foundation for a house. The answer to "which shovelful of dirt has the highest priority" is all of them. Just start shoveling. It's not done until you've dug the entire hole.

It's like the exterminator asking which specific cockroach is bothering me the most. (It's Andy. Andy the cockroach is the most annoying one, so please deal with her first).

What I, and many many other commenters, are trying to tell you is that the entire product is slow and terrible (not your fault. I'm guessing you're new and just trying to improve things, and I hope you succeed!). If it were a building, I'd call it a teardown. If it were a car, I'd call it totaled.

It doesn't matter what page or interaction you start with. Just start shoveling.


Hi lostdog,

Thanks for the understanding! Indeed I haven't been at Atlassian that long, but that's not a good excuse: it's my problem to own.

I appreciate the reinforcement of "fix everything", and I assure you we're trying our best to do so. As a PM it is my natural instinct (and literal job) to prioritize, so I'm always looking for more details to do so.

I can understand that my request for details can imply that I'm either not listening or not believing the feedback, but that is not the case -> I do understand everything is slow and needs fixing.


This is a throwaway since I use Jira/Confluence at work and am not authorized to officially speak on their behalf.

We are actively looking for other solutions outside of Atlassian, specifically because of the demands to switch to your cloud offerings. We simply do not trust your cloud.

We also have a higher compliance requirement, since we can have potential snippets of production data. Our Jira/Confluence systems are highly isolated inside a high compliance solution. We can verify and prove that these machines do not leak.

The Atlassian cloud is completely unacceptable in every way possible. And going from $1200-ish a year to $20,000 per year with Data Center is laughably horrendous - for the exact same features.

Unless Atlassian changes its direction, your software is that of the walking dead. We have an absolute hard time limit of 2024, but in reality, 2022. We'd like to still use it and pay you appropriately, but we're not about to compromise our data security handling procedures so you can funnel more people into a cloud service... which, judging by the comments here, is pretty damn terrible.


Same here: as a government contractor we can't use cloud Confluence, and the performance is so much worse - why would you? On-prem is so snappy it's comparable to using Word. I evaluated cloud for my previous company in 2018 and performance was the dealbreaker.


If you want to make performance a feature, you need to (in order!)

* define a metric

* measure it automatically with every commit

* define a success threshold

* make changes to get yourself under the threshold

* prohibit further changes which bring you above the threshold

Just do it like that for pretty much every view in the system.
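
A minimal sketch of the last two steps, assuming each CI run writes per-view p90s to a JSON file and the budgets are checked in (the file names and shape here are hypothetical):

    // perf-gate.ts: fail the build when any view exceeds its latency budget.
    import { readFileSync } from "node:fs";

    const budgets: Record<string, number> =
      JSON.parse(readFileSync("perf-budgets.json", "utf8")); // e.g. {"issue-view": 1000}
    const results: Record<string, number> =
      JSON.parse(readFileSync("perf-results.json", "utf8")); // measured p90s in ms

    let failed = false;
    for (const [view, budgetMs] of Object.entries(budgets)) {
      const actual = results[view];
      if (actual === undefined || actual > budgetMs) {
        console.error(`FAIL ${view}: ${actual ?? "no data"} ms (budget ${budgetMs} ms)`);
        failed = true;
      }
    }
    process.exit(failed ? 1 : 0);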


Following another poster's recommendation to take advantage of the technical community here, I have one question and one comment, if you can provide more insight:

question a) My understanding is that performance numbers fluctuate a LOT, even at sampling in the tens of thousands. Do you have any recommendations of tools or methods to reduce this variance?

comment b) we're definitely trying to do this but we're not there yet - most of our metrics don't meet the goals we set. Instead, the blocking goal must be 'don't make it any worse', which is doable -> but it doesn't necessarily make anything better yet (thus all the questions about what is most annoying that we can fix first).

Hopefully point (b) is clear - I'm not saying "our performance is great/good/acceptable", just the best I can do (as a PM) is try to figure out what to prioritize to fix.


The high variance is another problem. Good software has low variance in performance. Especially if you're sampling in the tens of thousands.

The high variance does give you two tactical problems. First, how do you keep performance from getting worse? Typically you would set a threshold on the metrics, and prevent checking in code that breaks the threshold. With high variance you clearly cannot do this. Instead, make the barrier soft. If the performance tests break the threshold, then you need to get signoff from a manager or senior engineer. This way, you can continue to make coding progress while adding just enough friction that people are careful about making performance worse.

The second problem of high variance is showing that you're making progress. However, for you, this isn't a real problem. You're not talking about cutting 500 microseconds off a 16 millisecond frame render. You need to cut 5-25 second page loads down by a factor of 10 at least. There must be dozens of dead obvious problems taking up seconds of run time. Is Confluence's performance so atrocious that you couldn't statistically measure cutting the page load time in half?


"High variance as a consequence of poor software" is an interesting point and not one I'd considered -> I will take this to engineering and see if we can do anything about that (some components maybe, but we see high network variances too which seem unlikely to be fixable).

Showing that we're making progress isn't as much of a problem - similar to what you stated, the fixes themselves target large enough value that it's measurable at volume for sure, and even in testing.

The main issue is "degradations" -> catching any check-ins that can degrade performance. These are usually small individually (let's say low double-digit ms, within the variance noise), but they add up over time, and by the time the degradation is really measurable, it's complicated to track down the root cause. Hopefully I described that in a way that makes sense?

Any suggestions welcome.

(Edit: downvoted too much and replies are throttled again) ---- @lostdog Thanks for the detail! Will definitely take this to the eng team for process discussion.


I work in an area where high variance is very expected and unavoidable. Here's what we do:

In your PR, you link to the tool showing the performance diff of your PR. The tool shows the absolute and relative differences of performance from the base version of code. It also tracks the variance of each metric over time, so it can kind of guess which metrics have degraded, though this doesn't work consistently. The tool tries to highlight the likely degraded metrics so the engineer can better understand what went wrong.

If the metrics are better, great! Merge it! If they are worse, the key is to discuss them (quickly in Slack), and decide if they are just from the variance, a necessary performance degradation, or a problem in the code. Typically it's straightforward: the decreased metrics either are unrelated to the change or they are worth looking into.

The key here is not to make the system too rigid. Good code changes cannot be slowed down. Performance issues need to be caught. The approvers need to be fast, and to mostly trust the engineers to care enough to notice and fix the issues themselves.

We also check the performance diffs weekly to catch hidden regressions.
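
A sketch of the "guess which metrics degraded" part, assuming you keep a rolling mean and stddev per metric and flag anything beyond ~2 sigma (names and shapes hypothetical):

    // Flag metrics whose new value falls outside the historical noise band.
    interface MetricHistory { mean: number; stddev: number; }

    function likelyDegraded(
      history: Record<string, MetricHistory>,
      current: Record<string, number>,
      sigmas = 2,
    ): string[] {
      return Object.entries(current)
        .filter(([name, value]) => {
          const h = history[name];
          return h !== undefined && value > h.mean + sigmas * h.stddev;
        })
        .map(([name]) => name);
    }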

IF YOUR ORGANIZATION DOES NOT VALUE AND REWARD PERFORMANCE IMPROVEMENTS, NONE OF THIS WILL WORK. Your engineers will see the real incentive system, and resist performance improvements. Personally, I don't believe that Atlassian cares at all about performance, otherwise it never would have gotten this bad. Engineers love making things faster, and if they've stopped optimizing performance it's usually because the company discourages it.


It’s absolutely baffling that one of the leading tooling providers in the software development cycle does not have the internal competency to... track whether their tool actually works or performs, and that beyond that, they didn't proactively reach out to customers to at least get direct verbal knowledge while the telemetry was being stood up.

Instead they “prioritized” a legal change in the ToS.


Know what would be great? Markdown support. The WYSIWYG is full of bad assumptions and has been forever. In the beginning we could at least opt out, but that's long gone. I actively encourage companies I consult for to use anything but Confluence, because it seems to be designed specifically for the lowest common denominator, with no allowance for people who work faster with a keyboard.


Yes, agreed. I get that Confluence is supposed to be for everybody, not just programmers, but there are plenty of great WYSIWYG implementations that use markdown under the hood (and make that accessible). If I could clone the underlying git repo (or an abstraction or something) like Gollum allows, I'd advocate for Confluence everywhere. As it stands now I advocate for either markdown/asciidoc in the code, or GitLab wiki (which allows doing that).


Followup questions: what level of support are you looking for, and what would be sufficient?

1) markdown macro (limits markdown to the body content, and not interacting with other macros or styling)

2) copy/paste markdown -> autoconvert to WYSIWYG (limits markdown to copy/pasting, so no editing Markdown inside)

3) "markdown pages" (something like 'this page is Markdown only, no WYSIWYG)

I make no comments/promises on any of these becoming real, but I'm looking for what's most valuable.


Not who you are responding to, but some thoughts:

1. Don't be Slack / Teams :)

What I mean by this is: if you support it, support it correctly, rather than the "markdown as keyboard shortcuts" approach some products take, where if you make the simplest of editing changes, your "markdown" isn't recognized.

2. Devs want to consistently use Markdown across their entire suite

At my last job, we were switching to Azure DevOps. They support Markdown in the developer workflow (PRs) but rich-text-only for Tasks (to be non-dev friendly?).

At my current job, I'm involved in developer productivity and have been interviewing people to collect input for where we want to take development. We currently use Phabricator, which uses Remarkup. This is a source of frustration because it's not quite the markdown they use everywhere else.

From these, I'm thinking Markdown Pages would be the top choice, since it allows developer-only interactions and marketing-only interactions to stay within what each group is comfortable with.


Thanks for the detail! Point (1) is very interesting

On (2) -> it sounded like you're saying (a) 'identical across the entire suite is good', but also (b) 'if there's a clear separation of dev vs non-dev apps/docs, having different support is okay'. Did I read that right? Please correct me if wrong.


The correct answer here: use the same editor in the entire suite, everywhere. Give the user a setting to decide if they want that editor to be WYSIWYG, or markdown.

In my personal opinion, Atlassian's rich text editor (or, seemingly, the 4-6 different ones you guys have built) is unforgivably terrible, so true markdown is the way to go if having both isn't an option.


As a programmer, being able to have something like what GitHub does (Markdown text, with a preview of what the rendered version looks like) would be great. I assume that people who are less technical would like a more WYSIWYG editor, but as long as the ability to write straight Markdown is there, I am sure that I wouldn’t care. Oh, and make sure it’s actually a reasonable subset of Markdown, not like Discord or Slack where you support bold and italics but not much else.


Thanks! I'm not sure this is the most doable thing (I'm not familiar with the technical aspects of the editor storage format) but can definitely discuss with that team.

The "reasonable subset of Markdown" is also a very useful specific detail, exactly the kind of specificity that helps us do our jobs.


There's CommonMark, a standard for Markdown. You could make that your goal, instead of feature parity with any of the proprietary implementations like Github.


FWIW, check out the new Reddit interface. For all the well-deserved hate it's getting (some of which for the same reasons Jira is), it showcases that you can have a dual WYSIWYG/Markdown editor that works, and allows switching back and forth between modes during editing.


Just normal Markdown for all edit fields! GitHub manages to have a single (almost) Markdown format everywhere. Could we have that in all Atlassian products? There’s no need for your own format.


The biggest request I have of JIRA aside from performance is that whatever markdown/markup language you choose, that it be consistent across the entire product.

It is so frustrating to have to remember to use double-curlies for inline code/pre sometimes, and backticks other times. In each case, the other doesn't work. More than once, using one has resulted in it rendering correctly on the immediate page but not on the next.

This is one I encounter regularly but is far from the only inconsistency in support.


How does this relate to the performance issues being discussed?


In the case of Confluence specifically, it makes the cost of experimentation a lot steeper.

Confluence's WYSIWYG editor will often make changes that can't be reversed with "undo" -- especially those involving indentation. Copy-paste frequently screws up its formatting as well.

So if you don't want to risk losing lots of work, you have to make many smaller changes. With each change taking a few seconds, it adds up quickly.

If it was markdown or some extended set of it, that wouldn't be a problem.


We're trying to collect and fix such occurrences, so if you have something with specific repro steps please send them to me and I'll make sure they get to the right team.


The WYSIWYG editor makes it extremely difficult to give repro steps because formatting information is hidden from the user and is not perfectly preserved during copy-paste.

More generally, the issues I see reported only ever seem to be fixed in the Cloud version. I currently have to use the Data Center version.

Why go through the trouble of reporting an issue I'll never see fixed involving a feature that I loathe using?


> More generally, the issues I see reported only ever seem to be fixed in the Cloud version. I currently have to use the Data Center version.

> Why go through the trouble of reporting an issue I'll never see fixed involving a feature that I loathe using?

I find the same issue with Nessus Professional. They too are trying to funnel everyone into using tenable.io (a SaaS Nessus scanner). And they are also very resistant to making much of any change (other than removing the API) to their on-prem solution. But tenable.io keeps getting regular updates.

Worse yet, when you talk to anybody there, the first thing is "Why aren't you using tenable.io?" My response every time is, "Has it been FedRAMPed YET?" Of course it hasn't. Not entirely sure they even plan on doing that.

In the name of maintaining data integrity and preserving a secure environment, these companies are demanding that we open our networks and store our critical data somewhere we really don't control.


I always write my stuff in Word first, then paste into Confluence.


I do that too when creating a new page.

Editing an existing page is more perilous. I sometimes write out my changes in plaintext first, and then paste and format it into the doc piece by piece. Even that can get janky, though.

It's like Confluence is punishing my attempts to write documentation.


Not GP, but I'd say because it's heavy, and if you didn't have to use it (as there are alternative methods) then you don't pay the performance penalty.


Half the UI uses textile and half the UI uses Markdown. Shipping two parsers & renderers for the app makes it a lot bigger.


Don't know if that's what OP was thinking, but a WYSIWYG editor is normally slower than a plaintext editor, so it could make the product more bearable.


> Would you be willing to share more specifics,

You've got to be kidding me. Have you used your own product? Going from my carefully tuned Server install to cloud Jira or Confluence is a night-and-day difference. The Cloud product is virtually unusable in comparison for any heavy Jira user.

You don't need "specifics", you need your performance engineering team to literally take over the entire company for 6 months to a year. No new features - nobody fucking needs them, the features 99% of your users use have been in the product for 5+ years already. Whatever you're PM'ing, cancel it, it's a waste of time in comparison to making the product not suck. The biggest source of losing users to some other product is going to be the sheer pain of continuing to use Atlassian....

Just make it usable. Halve the number of requests. Cache more things client-side. Do more server-side pre-processing so that a round-trip is not needed when I click on a menu.

I'm not looking forward to when I am forced to migrate my users to a more expensive and less performant experience. I and hundreds of thousands of other administrators will be experiencing months of user complaints because of the forced migration as it is; this is Atlassian's real chance to make it suck less in time.


> Would you be willing to share more specifics

That's something you could easily figure out yourself. E.g., just grabbing some random JIRA:

https://hibernate.atlassian.net/jira/software/c/projects/HV/...

Opening an issue in that tracker takes 24 seconds for me. Twenty-four.


9s to load that page. 3s just for `jira/software/c/projects/HV/issues/?filter=allissues`, which is 194 lines of HTML (with some scripts) but the bulk of the 3s is just content loading.

Wow.


Scroll up and down in the list.

They polyfill text!

That's just absurd.

To add a data point, I get 26 seconds on gigabit fibre that is 3 ms network latency away from "hibernate.atlassian.net".


Average of 15s for me, including watching the issue sidebar show a bunch of fields, with "5 more fields" and then refresh and remove that option.

I started using JIRA in 2008. It was faster then.

If this is what the "cloud" version is going to be, then we will be looking for alternatives, even though we're an Aussie company and I would like to be supportive.

As for Confluence, the wiki is just ok. The editor is clumsy and occasionally I have to go into raw HTML just to get highlighting, bolding etc to work.

If Atlassian is going all-in on cloud, then it needs to realize that cloud isn't "run our software, but not on-prem". Just like MYOB had to learn, it needs to be rewritten so that the web front end is streamlined and cached separately from the underlying API.


Wow, clicking on an issue in that tracker was terrible. After 3-4 seconds I thought it had finished loading, but then the UI pulled in a bunch more stuff, and it didn't finish loading the image in the issue description (the most important part of the page) until 24.87s.

On 100Mb fiber...


31.2 seconds, on a not great internet connection (cell)


Other commenters have hit on this already, but the worst one that bites me all of the time is this one:

1. I click a link to an issue.

2. I need to do something on that issue, so I attempt to click on a particular section to make a modification.

3. Bam: some background script has loaded, some new piece of content was shoved in, and what I clicked wasn't the thing I was expecting to click.
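That third step is what web-perf people call layout shift, and it's measurable. A rough sketch using the Layout Instability API (Chromium-only; the cast is needed because the LayoutShift type isn't in every TS lib.dom yet), pasteable into the devtools console on a Jira page:

    // Log every layout shift not caused by user input, i.e. content
    // popping in under your cursor after the page looked "done".
    const obs = new PerformanceObserver((list) => {
      for (const entry of list.getEntries()) {
        const shift = entry as any; // LayoutShift is Chromium-specific
        if (!shift.hadRecentInput) {
          console.log(`layout shift ${shift.value.toFixed(4)} at ${entry.startTime.toFixed(0)} ms`);
        }
      }
    });
    obs.observe({ type: 'layout-shift', buffered: true });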

Also, certain interactions within JIRA take far too many steps, and each one takes far too long to load, so it makes me dislike JIRA even more.

Project managers love JIRA, but engineers don't. Each time you make us wait, we are less inclined to deal with the software that PMs need us to use to know how things are going, so instead we get more meetings. If JIRA were fast, we could cut down on meetings.

Please make JIRA fast.


> Project managers love JIRA, but engineers don't

I think this is the core issue. It is simply not designed to be useful for developers, it’s designed for managing developers.

It's the same issue with time reporting tools. The UI for entering data is just there because it needs to be; it's not the central selling point of the software.

The UX for the data entry is just not designed to solve any problems besides accepting the data required for the reports that are the real product.


this "trying to show concern" is just fake. Atlassian have a ticket tracking system for their problems. They just ignore so much of the big hard problems, they close tickets with hundreds and hundreds of people on it explaining a multitude of core problems. Coming on HN is just trying to spin it for PR purposes is just not going to work, thread after thread on HN just shows that many many people have been BURNT by Atlassian products. However, I will say, Confluence has improved, but so many things still suck about it, including it being sluggish, and a search that seems really brain dead.


The topic of poor Jira performance came up yesterday, and I did some quick benchmarking of Jira cloud using the best-case scenario for performance: A tiny amount of data, no complex permissions, a commonly used form, no web proxy, no plugins, same geo region as the servers (Sydney), gigabit fibre internet(!), etc...

I spun up a free-tier account and created an empty issue. No data. No history. Nothing in any form fields. As blank as possible.

The only positive aspect is that most of the traffic is coming from a CDN that enables: Gzip, IPv6, HTTP/2, AES-GCM, and TLS 1.3. That's the basics taken care of.

Despite this, reloading the page with a warm cache took a whopping 5.5 seconds. There's an animated progress bar for the empty form!

This required 1.2 MB of uncacheable content to be transferred.

With the cache disabled (or cold), a total of 27.5 MB across 151 files taking 33 seconds is required to display the page. This takes over 5 MB of network traffic after compression. (Note that some corporate web proxies strip compression, so you can't rely on it working!)

For reference, it takes 1.6 seconds on the same computer to start Excel, and 8 seconds to load Visual Studio 2019 (including opening a project). That's four times faster than opening an issue ticket with a cold cache!

Meanwhile, the total text displayed on the screen is less than 1 KB, which means that the page has transfer-to-content efficiency ratio exceeding 1000-to-1. This isn't the animated menu of a computer game, it's a web form!

To render the page, a total of 4.35 seconds of CPU time was required on a gaming desktop PC with a 3.80 GHz CPU. Having 6 cores doesn't seem to help performance, so don't assume upcoming higher-core-count CPUs will help in any way.

A developer on an ultraportable laptop running on battery over a WiFi link with a bad corporate proxy server in a different geo-region would likely get a much worse experience. Typically they might get as little as 1.5 GHz and 20 Mbps effective bandwidth, so I can see why people are complaining that Jira page loads are taking 10+ seconds!

In perfectly normal circumstances your customers are likely seeing load times approaching a solid minute.
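For anyone who wants to reproduce numbers like these, a minimal sketch using Playwright and the Navigation Timing API (the URL is a placeholder; this captures only the main document load, not the post-load request storm):

    import { chromium } from 'playwright';

    // Time a single page load in a real browser. 'networkidle' waits for
    // the request storm to quiet down; nav.duration is roughly
    // navigation start -> loadEventEnd.
    (async () => {
      const browser = await chromium.launch();
      const page = await browser.newPage();
      await page.goto('https://example.atlassian.net/browse/TEST-1', {
        waitUntil: 'networkidle',
      });
      const ms = await page.evaluate(() => {
        const nav = performance.getEntriesByType('navigation')[0] as PerformanceNavigationTiming;
        return nav.duration;
      });
      console.log(`load took ${ms.toFixed(0)} ms`);
      await browser.close();
    })();

Run it a few times in the same context for warm-cache numbers; launch a fresh browser for cold-cache ones.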

PS: I do development, and I've avoided Atlassian products primarily because there's been a consistent theme to all discussions related to Atlassian, especially Jira: It's slow.

Stop asking your customers if they're running plugins, or what configuration they're using. Start asking yourself what you've done wrong, terribly, terribly wrong.


And if you are on battery only, with Wi-Fi over tethering, try finding the issues relevant to your customer's problem... or try filing a new Jira ticket on the same setup... It's so painful.


In another Hacker News thread I was confused by people complaining about their text editor or IDE latency, because I've never had that problem with any editor, despite being a gamer and very sensitive to even tens of milliseconds of latency.

A lot of people explained that they do development using a MacBook Air on battery. Those are an order of magnitude slower than a plugged-in desktop PC. Twenty milliseconds for me is two hundred milliseconds for them!

Similarly, many outsourced developers are forced to work on cloud VMs that are not only relatively low-spec (1 or 2 cores), but outright throttled, such as the B-series Azure VMs.

Web developers at ISVs like Atlassian are also "spoiled" by having essentially unfettered LAN connectivity with 1-5 millisecond latencies. Worse still, they'll do development with the server component running on their localhost, which is basically cheating.

Real enterprise networks have at least two firewalls between end-users and the Internet, and at least one web proxy, which more than likely supports only TLS 1.2, HTTP 1.1, Gzip, etc...

We have customers that have less Internet uplink bandwidth for 15,000 users than I have for myself at home.

But all of this is immaterial to Jira's performance woes. It's slow in all circumstances. There is no way to make it fast. Not even liquid nitrogen cooled Zen 3 CPUs running at 8 GHz could bring the page load times down to what I would categorise as acceptable.


> A lot of people explained that they do development using a Macbook Air on battery. Those are an order of magnitude slower than a plugged in desktop PC.

Indeed. This was a surprise for me too on Windows laptops. I think there was a change in the industry's approach to power management some years ago, because I don't remember spotting these issues with the first two laptops I used. But in the past few years, input processing latency has become a telltale sign that I'm running on battery on a "maximize battery life" profile.


It's hard to name a single action I can take in JIRA which does not feel unacceptably slow. However, these are the actions that cause the most issues for me due to being used most often (JIRA datacenter, MBP 2019 with i7 + 32gb ram):

1. Viewing a board. This can take 10+ seconds to load.

2. Dragging an issue from one column to another. This greys out the board, rendering it unreadable and unusable for 5-ish seconds.

3. Editing a field. I get a little spinner before it applies for 2-3s for even the simplest edits like adding a label.

4. Interacting with any board that crosses multiple projects. A single project board is bad enough, as in point 1, but we have a 5 project board that takes 20+ seconds.

Actually, I found an action that's pretty OK: search results are fast, even if clicking into any of them is not. I'm not sure why rendering a board is so different performance-wise.


Thank you so much for the details! This is very helpful. I will pass this along to my Jira perf colleagues (there are multiple of them, since they know perf is such a big issue).

Just to clarify on Search though, which search are you talking about:

a) quick search (top bar)

b) issue search (the one with the basic/JQL switcher)

c) something else

Trying to narrow down the latter "even if clicking into any of them is not" part to understand which view that is


Do you guys not use the tool? How is this news to you? Couldn't you self-generate this issue list by just sitting in a locked room for 45 minutes and writing down everything you already know doesn't work?


Doesn't Quick Search lead into issue search when you press enter?

I think I mean issue search.

By clicking into them, I mean actually loading the issues is slow.


If you honestly need details and specifics when Confluence has always been a slow mess to the point of being unusable, then maybe Atlassian needs a PM in charge of performance metrics first and foremost?


Navigation is sluggish across the board in both Confluence and Jira.

Not just the Cloud service, the self-hosted versions are also painfully slow no matter what resources you throw at them.

That makes the other UX issues worse because the feedback loop has so much lag.


Good news about the self-hosted version being slow: Pretty soon you won’t be able to run self-hosted Atlassian products.


Their Data Center version will still be available.[1] That's what we're currently on.

Data Center includes a few performance-related features like being able to run multiple frontends. I think we're running 4 instances right now. It's still really slow, even when nobody else is using it.

[1]: https://www.atlassian.com/blog/jira-software/server-vs-data-...


I know Jira has always had scaling issues. I used to work for a very large company in the early 2010s that had, I think 5 separate Jira instances. But they were I think dealing with on the order of ten thousand daily active users per instance.


> Navigation is sluggish across the board in both Confluence and Jira.

Also in Bitbucket. It used to be super fast, but recent changes made it very slow.

My team loves the integration with JIRA but we're considering going to Github because of the slowness.


Do it.

Jira's git integration is really poor anyway.

Even Redmine is better: git commits that mention the issue get their own column in the UI, and they ALL show up there, not just the ones with the comment annotation.


@confluence_perf The Cloud applications provide multiple avenues for providing feedback directly in the user interface. Some/many of them are quite invasive (as in part of the screen is taken over with a "rate your experience editing this document"). I have used these avenues to provide feedback many many times over the years with my #1 response always being "focus on the performance". None of those ever get a response and I don't see what reiterating them in a HN post is going to solve. You have the data, please do something with it.


> Would you be willing to share more specifics, such as: - Pages with content X are the slowest - Trying to do A/B/C is annoyingly slow - etc ?

I would... but Atlassian TOS prohibit me from doing so :(


>I'm a PM for Confluence Cloud and we're always trying to make it better.

That's the problem. It's beyond repair since many, many years. You can only make it worse.

Ditch it! Throw it away and rewrite from scratch. If you don't bungle it again (here lies the risk, as we're still talking about Atlassian), you'll have a better product than ever in one year.


If you’re being rate limited, try emailing the moderators at hn@ycombinator.com and they might be able to help you.


I think this answer from two months ago should give all the insight you ever want: https://news.ycombinator.com/item?id=24818907

And the first answer to your comment in this thread profiling performance for an empty page with almost no data on a small project should give you even more data than you ever would want.

And this one for an empty project: https://news.ycombinator.com/item?id=25616069

However, having personally experienced "upgrades to Jira and Confluence experiences" over the past few years, I can safely say: no one at Atlassian gives two craps about this. All the talk about "We are definitely working on generalized efforts to make 'most/all/everything' faster" is just that: talk. There's exactly zero priority given to performance issues in favor of flashy visual upgrades, which only make the actual experience worse.

> We're trying to focus on frustrating pages/experiences rather than number of network calls and such, because while the latter is correlated, the former (frustrating slow experiences) is really the bar and the higher priority.

Exactly: you aren't even trying to understand what people are telling you. These metrics you ask for and then dismiss entirely are the primary, core, systemic reason for frustrating slow experiences that you pretend are "high priority". No, frustrating slow experiences have not been a high priority for years (if ever).

If you need to do 200 requests and load 27.5 MB of data to display an empty page, therein lies your problem. You, and other PMs at Atlassian fail to understand these basic things, and yet we get platitudes like "performance is our top priority". It is not. You're good at hiding information and buttons behind multiple layers of clicks, each of which needs another 200 requests and 5-15 seconds to execute.

Oh. You're also good at adding useless crap like this: https://grumpy.website/post/0TcOcOFgL while making sure that your software is nigh unusable: https://twitter.com/dmitriid/status/888415958821416960 I imagine all performance tickets get dismissed because no one can see the description even on a 5k monitor


> I'm a PM for Confluence Cloud and we're always trying to make it better.

Can you guys please make Ctrl-S save and not exit editing? My muscle memory is costing me 10+ seconds of load times every time I type a paragraph and reflexively save, getting dumped back to the view mode of a document. The slow load times exacerbate the problem tremendously. I honestly don't know a single product that treats Ctrl/Cmd-S as "Save and Exit" so this is just a baffling UI/UX design decision.
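A stopgap, untested against Confluence specifically (it may bind its own capture-phase handler first): a userscript can swallow the shortcut before the page sees it. A sketch only:

    // Swallow Ctrl/Cmd-S so a reflexive save doesn't kick you out of the
    // editor. Capture phase, so it runs before the page's bubble handlers.
    document.addEventListener('keydown', (e) => {
      if ((e.ctrlKey || e.metaKey) && e.key.toLowerCase() === 's') {
        e.preventDefault();
        e.stopImmediatePropagation();
      }
    }, { capture: true });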


Hi Slackwise,

This is interesting -> I don't think Ctrl+S should be exiting editing in any form. Can you describe this a little further? When you hit it, is it:

a) moving to Preview (you can see the changes you made, but in preview mode not page view mode (the page tree won't be visible)

b) browser returns to the 'view page' (page tree visible), but your changes are not published (you may see a tag at the top "UNPUBLISHED CHANGES")

c) something else (please describe)

Second question: what does the browser's forward/back buttons display after the 'ctrl+S' result?


This isn't the cloud offering, but on-prem. I assume they're the same.

The about page says "Confluence 6.15.2".

When I click "Edit" to edit a page, via `/pages/editpage.action?pageId=`, and hit Ctrl-S, it takes me back to the view mode rendered via `/display/SPACE/Page+Title`.

Absolutely infuriating and wastes an incredible amount of time. I can't get over my muscle memory. The "Save" button in the bottom-right does the same and appears to be what the hotkey activates. No mention of it being a "Save and Quit Editing". Hovering over the button says "Save your page (Ctrl-S)".


Thanks for mentioning that -> honestly I keep forgetting to ask because I work on the Cloud side.

I will see if I can find some server folks to ask around, but obviously can't promise any movement.


I would say the right place for the performance team to apply resources is looking for bugs or missed optimizations that affect everything or nearly everything on the site. Everything is uniformly slow, so there must be a lot of this.


Is this bait?


Atlassian products in general spray out a huge number of requests to dozens of hostnames for even the most basic of actions. It scares me to think how their organisation is structured internally given the outwardly visible result.

If you’d like to see it for yourself, run Little Snitch in alert mode and try to sign into bitbucket.org - it’s almost comical how many hoops your browser jumps through.


I'm very curious about where the slowdown is coming from. Is it mostly JS on the client or Java on the server? When I ran my own Confluence server on a Digital Ocean VM, it was slow but not unbearable. I assumed it was Tomcat's fault* or the fact that I wasn't using a "real" database on the backend (a configuration Atlassian frowns upon).

*Confluence is built on Tomcat. Don't know if this is also true for Jira.

Now that my Confluence server is on Atlassian's cloud, it seems much slower still. So I have to assume it's not client-side JS because that hasn't changed much; there's some kind of resource starvation going on with Atlassian's servers.
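One way to apportion blame from the client side: the Resource Timing API shows how much time went to fetching, which you can hold up against the total load time. A devtools-console sketch (rough, since fetches overlap):

    // Paste into the console after a page load.
    const nav = performance.getEntriesByType('navigation')[0] as PerformanceNavigationTiming;
    const res = performance.getEntriesByType('resource') as PerformanceResourceTiming[];
    const fetchMs = res.reduce((sum, r) => sum + r.duration, 0);
    console.log(`total to load event: ${nav.duration.toFixed(0)} ms`);
    console.log(`resource fetch time: ${fetchMs.toFixed(0)} ms across ${res.length} requests`);
    // Fetches overlap, so the sum overstates wall-clock network time;
    // treat it as a signal, not an exact client/server split.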


Over here, at least, a lot of the time is spent just waiting for content. Even static content from CDNs. I'm guessing they're not geolocating properly.

See https://community.atlassian.com/t5/Jira-questions/How-to-swi...


Tomcat is fast, unless you do slow things synchronously.


Check out Clubhouse.io

I've used many others including Github, GitLab, Phabricator, Redmine, etc., and Clubhouse does a great job IMO.


My org uses Clubhouse, and it's still slow. Probably not as slow as Jira, but it's a running joke where I work.


Interesting! What size org?

We recently passed 10k stories and it still feels very snappy. Also the UI/UX feels very polished.


We're barely over 1k employees, and most of those don't use Clubhouse. Not sure how to tell exactly how many stories we have in CH.


You may be able to tell from the ID #


Is there an on-prem version?


Their FAQ[1] says no, but that they would like to gauge interest in the feature, and it invites you to reach out to their support team.

1. https://help.clubhouse.io/hc/en-us/articles/360036047091-Clu...


Doesn't Oracle sue if you benchmark them and release the results publicly, or is that an old wives' tale?

In a YouTube lecture at CMU, the Materialize guys seemed to try very hard to not even land in the same zip code as that discussion, to the point it was awkward, and they seem pretty smart.

Are these large organizations really that petty? Taking it ad absurdum: would it be legal if Ford said we couldn't drag race their car after we bought it?


https://www.oracle.com/downloads/licenses/standard-license.h...

You may not: disclose results of any Program benchmark tests without Oracle’s prior consent


If you are looking for a fast alternative, we're building www.kitemaker.co (I'm one of the founders). We're part of the upcoming YC batch, and I'd be happy to help you onboard and import existing projects if needed.


I think future customers would like to know what kind of performance to expect, so a write-up sounds like a great idea.

It sounds like a bad business decision not to support open discussion of performance; it makes me think they have something to hide.


Cloudflare has this as well:

>you will not and you have no right to: ... (f) perform or publish any benchmark tests or analyses relating to the Cloud Services without Cloudflare’s written consent;

https://www.cloudflare.com/terms/#react-modal:~:text=(f)%20p...


It's amazing what legal gets away with at most companies, and how little actual engineers and management get to see of this part of their company. I am sure any self-respecting Cloudflare engineer would be horrified to see that this is in their ToS, and yet it exists.


Are these terms really that nefarious or just a way to terminate some customer who decides to load test your system and ends up bringing it down? Legal documents are typically written to be as broad as possible.

Perhaps it should be more narrowly written, but prohibiting certain kinds of testing without permission is reasonable.


This is just awful


As a software engineer who has been woken up in the middle of the night while oncall because some random user wanted to run their own performance tests against our system, I can completely understand why companies want to prevent this from happening without their awareness.


a) That's not the reason.

b) Rate limiting is practically mandatory in an era of multi-gigabit client network connections on even mobile phones.

Remember: It's not a DDoS attack if it's one user.


Well, an "average" HN user may use a load-testing tool such as https://github.com/loadimpact/k6, or at least rent a few VMs with 10Gbps links for a few hours and use wrk/ab, so I would not be so sure about (a).

Edit: oh, and then they might dust off that Kali Linux VM lying around somewhere...
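For a sense of how low the bar is, a complete k6 script looks like this (it runs under k6 itself, not Node; the URL is a placeholder):

    // 100 virtual users hammering one URL for a minute. That's the whole script.
    import http from 'k6/http';

    export const options = { vus: 100, duration: '1m' };

    export default function () {
      http.get('https://example.test/');
    }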


Which ought not to matter if you use a CDN and use basic source-IP based rate limiting.
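For illustration, per-IP limiting is not much code either; a toy token-bucket sketch (a real deployment would lean on the CDN's or reverse proxy's built-in limiter instead of hand-rolling this):

    // Toy token bucket per client IP: refill `rate` tokens/sec up to
    // `burst`, spend one per request, reject when the bucket is empty.
    const buckets = new Map<string, { tokens: number; last: number }>();

    function allow(ip: string, rate = 10, burst = 20): boolean {
      const now = Date.now() / 1000;
      const b = buckets.get(ip) ?? { tokens: burst, last: now };
      b.tokens = Math.min(burst, b.tokens + (now - b.last) * rate);
      b.last = now;
      buckets.set(ip, b);
      if (b.tokens < 1) return false; // over the limit -> respond 429
      b.tokens -= 1;
      return true;
    }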


I agree, but there's a major difference between benchmarking a tool and writing about how slow a tool is. I agree with the first being forbidden, but the latter should be allowed.

For Atlassian Cloud, you won't need a benchmark to tell it's slow. A simple (physical) stopwatch is enough.


"Atlassian Cloud ToS section 3.3(I) prohibits discussing performance issues" - Their ToS may prohibit it, but that is in no way going to stop me from doing it - I don't give a shit about some document they write. Atlassian products suck hard and their performance characteristics are horrible. I hate being forced to use their crap at work.


The problem with Atlassian's Terms of Service is that most of their end-users are not paying for the software and do not really care if they violate an agreement they were either forced to make or which someone made on their behalf.


I don't think it applies to us. Our employers can sign whatever they like and constrain us from speaking in official company-related capacities, but we're no more bound to that as individuals than we're bound to anything else our companies sign. As individuals, we're not in a relationship with Atlassian at all.


Be honest; no one cares about (or reads) ToS agreements.


No one cares to read any Terms of Service agreements, but I think Atlassian's products occupy another tier: users not caring to adhere to them at all.

I'd personally find it pretty funny to go up to my boss and tell them I can't read my Jira tickets because Atlassian banned my account. Or would they ban the entire organization instead? Either way, hilarious.


They will suddenly care when Atlassian locks them out of being able to access anything. Then, we'll see posts on Twitter or here or elsewhere about some user crying about not getting access to "their" stuff on a 3rd party's site.


The system we've constructed, in which users don't own the content they post on platforms and can be arbitrarily locked out of access to it, is really one of the worst things that has happened to the web.


Legislation is overdue for this. The US might be a lost cause, but in the EU and elsewhere in the world, this can be greatly remedied.


Or conversely, some might actually be thankful in the long run for being forced to bite the bullet and find an alternative that raises productivity and efficiency and, as a byproduct, usually makes for a happier work environment.


Maybe you'll get lucky and they'll ban you from using their products!


Most enterprise software prohibits discussing benchmarking, etc.


I read HN every day, I never post, and I signed up for an account just to post this. I'm a senior PM at a large e-comm company, and I was EXTREMELY frustrated with the horrible performance of Jira Cloud. I submitted support tickets to no avail. It's obviously an over-architected, broken system.

I moved my team to Clubhouse, even though it created an element of fragmentation considering all other teams are using JIRA. I'm so happy with Clubhouse, though; I can actually get my work done without mindlessly waiting for every interaction.

Icing on the cake: I was considering moving the company over to Jira on a self-hosted AWS instance, as I've read it can be a little faster. But they're discontinuing the self-hosted option. Nail in the coffin for me.

Good bloody riddance.


I've tried to convince a particular manager to try GitLab Issues. (We already use GitLab for git/CI anyway).

Seeing that I'm not the only one who thinks it's so ridiculously, frustratingly bad-slow [as in, it's impossible to be this slow even after multiple rewrites unless it's some kind of bizarre experiment by Douglas Adams' ghost] will likely lead to us dropping JIRA for good.

Thanks!


Heya Nate, I thought it was you when I read the comment; just had to look at the username.


This is not surprising: anyone who’s used Atlassian products knows that quality has been job number 97 for years. That doesn’t happen by accident – someone’s made the decision that they’ll make sales anyway and cut the QA budget.

One of the most obvious examples: they have multiple WYSIWYG editor implementations which aren’t compatible. When you format something in Jira it’ll look fine in the preview and then render differently on reload. It’s been like that for years, nobody cares.


This is even worse than the crazy slowness.

I spend 10 minutes making a detailed bug report, just to have it fall apart after submitting. How does that happen in software made to show bug reports?

Just use standard markdown instead of your own bullcrap formatting that doesn't ever seem to work.


Best I can say here (as a Confluence PM not a Jira one, and in a public forum) is that customers are not the only ones that experience this pain, and the appropriate folks are notified on a regular basis.

I think the genesis of this is that historically many different editable fields might have had different modules behind them (not every single one a different one, but maybe a handful of common ones). It looks like for whatever reason we (Atlassian) haven't fully migrated all of them to the newest common editor modules -> I don't know anything for sure though.


I'd bet on legacy code which nobody wants to touch, and we've all been there. I'd recommend adding an option to enable a Markdown flag for all new records, leaving the old stuff alone. This usually comes up in the context of people using GitLab instead because it's so, so much better for technical work; once Jira is just links to GitLab, people start asking what justifies the hefty cost for a tool the team doesn't use.


This is the top response on your profile, so I’m commenting here.

Atlassian is developed and run in Australia, right?

Have you tried running your website and doing the things we mention here from a VPN that goes through the United States or Europe to experience our latency to your servers overall?

I haven’t done any testing myself on this, but if you’re doing serial requests and each request is an https call back and forth to Australia, that’s easily 200ms every request, even if total server time is milliseconds.

And even if you support parallel, if you limit the number of requests per non-whitelisted IP by a lot, it can very easily become approximately serial
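The arithmetic is brutal: at 200 ms per round trip, 150 serial requests is 30 seconds of pure latency before any server or render time. A sketch of the difference (URLs are placeholders):

    // Serial vs. parallel fetches: same work, wildly different wall time
    // when every round trip costs ~200 ms.
    const urls = Array.from({ length: 150 }, (_, i) => `https://example.test/api/${i}`);

    async function serial() {
      for (const u of urls) await fetch(u); // ~150 x RTT = ~30 s
    }

    async function parallel() {
      await Promise.all(urls.map((u) => fetch(u))); // ~1 x RTT, bandwidth permitting
    }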


Hi t-writescode,

For the most part, each individual top-line product's team operates out of a different site, with Confluence operating (mostly) in Silicon Valley. I assume this is public information (or derivable) so I think it's safe to share.

Internally we have a few Confluence Cloud instances we use, with a single main one shared by the entire company (basically) which is located in a single area (somewhere?), though the VPN connection points are different.

So from a network topology standpoint our experience shouldn't be too different from most customers' (at a very gross level) -> the summary is, we definitely have users representative of 'bad networking', but you're right, maybe I should be trying to intentionally degrade mine (right now I think it's two cross-US hops, but I could be wrong).

I'll make sure our team looks at this dimension -> I don't think it's possible within our telemetry data, but maybe simulating it will get some interesting results.

I do recall reading once about AWS infrastructure (endpoints? edges?) having some configuration that causes something like this:

1 - the first returned packet is some small size

2 - the next packet (after the ack is returned) can be double the size

3 - same

4 - same

5 - until the max packet size

And though (1) is configurable (in theory), and I think the doubling/size increase is configurable (in theory), AWS does not allow for this configuration in their services.

But I can't find what I was reading so if anyone knows about that (a) being a problem and (b) how to work around it, let me know!
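What's described above sounds like TCP slow start: the sender's congestion window starts at a small initial value (initcwnd, 10 segments on modern Linux) and roughly doubles each round trip until loss or a limit. A back-of-envelope sketch of why that hurts a chatty site far from its users (it ignores loss and congestion avoidance entirely):

    // Round trips needed to deliver `totalBytes` under idealized slow
    // start: the window starts at `initcwnd` segments and doubles per RTT.
    function roundTrips(totalBytes: number, initcwnd = 10, mss = 1460): number {
      let cwnd = initcwnd, sent = 0, rtts = 0;
      while (sent < totalBytes) {
        sent += cwnd * mss;
        cwnd *= 2;
        rtts += 1;
      }
      return rtts;
    }

    // e.g. a 1.2 MB uncacheable payload on a fresh connection:
    console.log(roundTrips(1.2e6)); // 7 round trips; at 200 ms RTT, ~1.4 s

This is also why fresh connections to a far-away region hurt more than warm ones, independent of server time.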


That doesn't sound like a QA issue. Rather, they have too many competing departments reimplementing the same things.


That’s the cause. Normally that’d be caught in testing — the same input producing different outputs isn’t hard to test — and it’s commonly reported by users. Maybe they each think the other team should fix it but who cares: from the user’s perspective it’s broken.


Right, but as QA you can make a ticket, and unless you assign it to someone who supervises all those departments, it'll just get closed as "works as expected from where I'm sitting in department A".


Nothing says “great product” like a ToS that bans you from discussing the problems.

I self-host a Confluence install, and its performance is poor even with a good-sized VM and absolutely zero other traffic to it.


"I won't comment on the performance, but let's just say there's a reason their TOS forbids talking about it."

If you ban objective criticism, I guess all you're left with is FUD.


Yep, Confluence is an amazingly over-complicated thing! I had it on a modern Windows server that boots up in 60 seconds. From there, Confluence takes no less than 10 minutes to start before it responds to web requests. Fortunately we switched to Teams this year: no more exorbitant Confluence renewal fees and clueless support.


"Yep Confluence is an amazingly overly complicated thing" - yeah,with an incredibly slow editor that screws up even simple page edits constantly. It's a complete shit show. There's a reason we call it "cuntfluence" at my workplace.


SharePoint Server is an absolute beast with a bunch of legacy code, yet it has no issue staying responsive as soon as it JITs (which can take ~30 seconds or so after an App Pool startup).

SPO is also responsive (the last thing to load is typically the SuiteNav which doesn't impact working with the site/content).

I'm not sure why a company like Atlassian would have these persistent performance issues.


It's the Oracle school of public relations via litigation.


Atlassian Cloud customer here. Large enterprise. The Jira and Confluence cloud products are slow as fuuuuuck.


Same, they're slow as a snail high on weed.

I also got no response (or even an acknowledgement) for the feedback I gave. Like most people here, I too am forced to use it at work.


Sorry to hear it's been a frustrating experience. I'm a PM for Confluence Cloud and we're always trying to make it better. Would you be willing to share more specifics, such as: - Pages with content X are the slowest - Trying to do A/B/C is annoyingly slow - etc ?

(edit: looks like HN is severely limiting my reply rate so apologies for delays)


Not GP, but I have the same feeling across Atlassian products:

it's not any one page or action; it's a general, all-the-time slowness for many of us.

I'm using the latest Firefox on Windows, a developer laptop with 32GB memory that was brand new this spring as well as 500/500 fiber.


Is this also on Cloud products?

Right now I'm on holiday (through Tuesday), but I would like to learn more. I see your contact info in your profile; if you give consent, I'd like to send you an email on Wednesday with some followup questions (the first question is "what is the URL for your cloud instance", so I don't want to be asking for it on a public forum).

(edit: typo)


> Is this also on Cloud products?

On my company's cloud instance of Jira, it's a minimum of a 2-3 second delay to do anything. Edit, wait a few seconds, save, wait a few seconds, change a field, wait a few seconds... and God help you if you need to reload the page because something got stuck.


"My boss told me to generate a bunch of JIRAs in reaction to the recent accurate discussions on HN of how poor our performance is, so I need specific dit-dot issues to buff our team metrics rather than address the cause of the issues, which is a political non-starter"


There's nothing quite like having to spend a load of time justifying everything your team wants to do to the team above because they need to justify the things their teams need to do to the team above them.

Sometimes things are just obviously crap to the people tasked with working on them, and having to jump through these kinds of hoops eventually leaves an organisation with only the kind of people who are happy to keep doing it.

Nothing makes me stop ignoring recruiter messages quite like being asked to flesh out a JIRA ticket with technical details by a person who has literally no use for the details they ask me to write.


A more charitable interpretation might be “my boss won’t let me fix things unless I have specific comments about problems from people who use the software”.


Translation: His boss is an idiot and their lunch is going to be eaten by a company where management gives a shit about product quality.


If their boss doesn't let them just get to work without thoroughly documenting everything, that's a larger problem. As good a time as any to start looking for jobs with bosses that don't suck.


Not trying to be snarky, but... do you use your product?


These are questions I have as well. That said, creating a throwaway and prefacing a comment with "Not trying to be snarky" shouldn't be an excuse for not taking the time to couch questions in a way you're confident won't be interpreted negatively. This isn't directed just at you; I see this behavior all too often: people using throwaways as an excuse not to take the time to express things in a manner that doesn't need to apologize for itself.

Atlassian employees use their products, and any tips and tricks they may have for using them effectively, or for making the experience of using JIRA, Confluence, or their other tools more enjoyable and usable, would be great to know!


If you need tips or TRICKS to make a product usable, you have a BAD product.


I agree, and I also know that you have to deal with the world as it is right now even while you work to make it better.

If you have to use Jira or Confluence at work, you probably want to know how to make that as useful as possible. If you're working at Atlassian, you probably want to make your customer's experience as enjoyable as possible as soon as possible. Ideally you have a great product and great documentation and all happy customers. If that's not the case, you have an opportunity to work on a number of fronts, including improving documentation and the product, and help current customers with the product as it is. You can and should be doing all of these things.

Piling on doesn't help anything.


Some people don't want to use it at all, and don't care for the situation that C-suite everywhere buys Atlassian's trash.

They don't want minor improvements to help it limp along, they want to vent and complain about it.

I think it's impossible that Atlassian evolves into a good product company, but it's entirely possible that my next CTO googled for opinions on the product, found a few discussions on HN with a combined 3,000 complaints about what a garbage fire it is, and went with Clubhouse.


And the thread from yesterday (https://news.ycombinator.com/item?id=25590846, for a site named https://whyjirasucks.com), or any of the many other rants on Atlassian around the web, don't suffice? I hardly think this thread is going to be the tipping point. It's not like this is news.

Given your opinions regarding Atlassian, what would you think of your next CTO if they were even considering Atlassian? Is that someone you'd want to work for?


(Perhaps I'm not a good person to ask, as I don't work at a product company or in IT services)

I really wouldn't give it a second thought, as everybody is using this stuff.

In my niche, there is very little focus on this kind of project management, due to the required speed of development and deployment.

If you work fast and reliably enough, delivering commercially important software, nobody is asking about a JIRA ticket.


I know this is well after the conversation, but if you as a developer have to provide tips/tricks to your users so they can use your product, then fix the software to make the tips/tricks unnecessary. You don't need employees creating accounts on forums asking for "feedback" or "describe for me the steps needed to re-create the issue". You already know the issues, along with a list of workarounds. FIX THEM!!! This "hey, look at us asking the users" is just a sham when they aren't even fixing the most basic of things. Why would I believe they are going to fix something that needs help re-creating before it's even noticed by their devs?


If, in your opinion, the situation is so bad that it's a sham, why comment on this at all? What will that do to make the situation better, for Atlassian, for you, or for anyone else reading the thread?

My original point was that people should engage with each other in a way that's likely to create the most positive outcome. Creating throwaway accounts so one doesn't need to make the effort to be polite, or take the time to be explicit about what they mean in a low-bandwidth context like HN, is, in my opinion, highly unproductive and lowers the discourse on HN. If we're not trying to make things better, both in the particulars of the situation (say, figuring out how to make Atlassian products easier to use) and in our ability to solve situations in the future (maintaining HN as a community people want to participate in), I don't think we should comment at all.

The throwaway I responded to hasn't participated further, and everyone else seems to have focused on the "tips and tricks" phrasing I employed, so one last attempt to elaborate:

Hypothetical scenarios:

1. You have to use Atlassian products at work for reasons that are out of your control. Your options:

- figure out how to make using the products as easy as possible.

- refuse to use the Atlassian products

- figure out how get your organization to change to another product.

- find a new job.

I'd argue only the first one is in your control as something you can do today on your own.

Some options for "figuring out how to make using the products as easy as possible":

- Complain on a forum that Atlassian products suck. The venting may make you feel better, but won't really improve the situation.

- Engage with an Atlassian employee

If you don't believe Atlassian is going to actually do anything, you might as well not make the effort of engaging at all. If you think there's a possibility you might find some relief by doing so, set yourself up for success. Snark and aimless, general complaints are unlikely to lead to a successful outcome, and I think they actively increase your likelihood of failure.

2. You're an Atlassian employee.

Note: If you don't believe Atlassian employees are operating in good faith, you can skip this section--and, for that matter, everything here. Get your company to switch tools, or quit.

Atlassian developers--just like any other developer--want clear, reproducible bug reports so you can fix the bugs (including slow performance). You want to know (a) what they wanted to accomplish; (b) exactly what they did; (c) what they expected to happen; and (d) exactly what did happen. If you want support, supply this information even if they don't ask for it 'cause that's what they need.

Fixing bugs takes time. Adding features takes time. Improving documentation takes time. All of those things should definitely be done. If the fixes are trivial, of course prefer the fix over the tips and tricks. If "tips and tricks" can work as a stop-gap to while these other things are worked on, by all means Atlassian employees should offer them and those using Atlassian products should use them if they want some relief now.

Time is a finite resource, and you need to figure out ways to move forward the best you know how. Your customers are likely diverse; while they probably share some priorities, others are going to differ. Choosing to fix bugs A, B, and C while moving forward with features M, N, and O means that bugs D, E, F, G, H, and I and features P, Q, R, S, and T aren't going to be worked on, at least right now. Your customers who really want A, B, and C fixed, and your customers who want features M, N, and O, are going to be grateful, while your customers who are really bitten by bugs D and F are going to be out in the cold, as are customers who want features P and Q. But if you can give customers affected by D a workaround in the meantime, I think that's better. That's just how things are, at any shop, and not just in software development.

If your priorities as a customer don't line up with those of the company whose product your using, your options are to wait, find a workaround, convince to the company to reprioritize, or find another product.

Focus on what you can do rather than what others should do. If you rely on the actions of others, it's still the same: focus on what you can do to help them do what you want, in a way that maximizes the likelihood of success.

The short version of all of this is engage with each other in good faith. If you don't believe the people you're working with are doing that and you don't think you can change it, it's really not worth continuing to engage with them, positively or negatively.


Well, you certainly need "tips and tricks" to make Arch Linux fully usable. Every powerful tool needs to be adjusted to its use (Github is full of dotfiles and macOS bootstrap repos). Doing so is a sign of professionalism (craftsmanship).


I think that's saying more about Arch than anything else. While it's not wrong that people add dotfiles to track preferences, those aren't necessary for basic usage. Someone using macOS without customizing anything still has a good experience.

A key distinction is between domain complexity and what the product adds to that intrinsic complexity. If you use GitHub or GitLab issues with no customization you’ll have a better experience than a Jira user because they work well out of the box without requiring customization or adjusting your workflow just to accomplish core tasks.


You are right, Arch has a poor product experience.

On the other hand, I do not view those systems as products when it comes to professional use. A non-techie might see a Mac as a fancy laptop but for me it's a tool. Just like an HSS cutting tool in a lathe, you'd carefully maintain it (you don't want to use a dull cutting tool) and tune it. Just like you want to regrind a cutting tool depending on the part you are machining, I disable Intel Turbo Boost using http://tbswitcher.rugarciap.com when I run long perf evaluations of my programs for academic projects. If M1 based Macs will not allow me to disable their boost clock functionality, they will be unsuitable for my work as a tool. When that happens, you simply pick the most fitting tool. Not necessarily switching dev platforms, in this case a separate machine running Linux for the eval may be OK.

Regarding issue trackers, there are still some things I miss about Bugzilla after moving to GitHub issues, such as sorting issues by two fields in order (e.g. first by priority and then by milestone). I similarly like the complex queries that can be saved in YouTrack. I will admit that 5 years ago I thought Bugzilla was ugly (it still is) and not user-friendly (one of the worst), but now I simply see it as a professional tool that does not get in my way once I learn how to use it. On the other hand, most of the tools with proper UX have some "user journey" which gets in the way of almost every pro user (not all: airplane cockpits, most notably, have proper UX but still do not get in the way of a pilot doing their job, including manual overrides for all kinds of malfunctions).


As a product manager, are you allowed to discuss performance and benchmarks without violating your contract? Or is it just customers who are prohibited from this?


Come on, everything is slow and you know it.


We have certainly heard from some customers that agree that 'everything' is slow, but we've also heard from other customers saying they have no problems.

We would love to fix "everything", and we have some longer-term projects focused on this -> However, "everything" fixes tend to deliver more incremental boosts and also take longer to complete.

If you have any feedback about "specific" items that are the most frustrating, we'd love to hear about those -> targeted fixes for specific items can be much faster, deliver much greater gains, and usually offer a better return of user-experience gain on engineering time.

If not, I can only say that we are definitely working on making 'everything faster'

(edit: trying to reply but looks like HN is limiting my reply rate)

(edit: maybe I can post my replies here and hopefully they'll get read)

------ @rusticpenn - It's definitely possible that 'some users are just used to it'. But we also see a very wide variance in individual customers' performance numbers (i.e. some instances have consistently faster performance than others), and even within individual instances variance amongst users (some users have a consistently faster experience than other users on the same instance) -> we're trying what we can to narrow down the causes of this variance.

Hearing from "users with slow experiences" is simply one of the ways we're trying to track this down, but it helps if users are willing to provide more info.

--------

@ratww - thank you for the suggestion! We have some amount of data that helps us see what might be different between instances, but we haven't gone out of our way to 'interview a fast customer'; I'll bring this up with the team.

The two biggest factors I think we've seen: slow machines can contribute (but are not a necessity), and large pages (especially with large tables, or a large number of tables) can contribute.


> We have certainly heard from some customers that agree that 'everything' is slow, but we've also heard from other customers saying they have no problems.

What do your metrics show? I instrument my web sites so I know how long every operation – server responses, front-end JS changes, etc. – takes and can guide my development accordingly. You have a much larger budget and could be answering this question with hard data.

I’ll second the “everything” responses. Request Tracker on 1990s hardware was considerably faster than Jira is today - and better for serious use, too.


Hi acdha,

We have metrics, but of course as with many such things you always want more insights than the amount of data you're collecting (so we're always trying to grow this as appropriate).

This data is what led to the above (added edit trying to reply to @rusticpenn) saying we can see that "some instances are slower than others", and "some users are slower than others". I can't share those numbers of course though.

However, privacy considerations do prevent us from collecting too much data, so differentiating why individual users might have different experiences (even when other known factors are similar or identical) is difficult.

Also, I'd be happy to take any suggestions you have about what to look at back to my engineering team, if you're willing to share other ideas. I know we're tracking several of the ones you mention, but more options are always better.


I mean, it really is everything. If it were my project I’d make sure I have telemetry on all UI actions and would then set a threshold (say 200ms), triage everything which has a percentile consistently over that threshold to look for easy fixes, and then set a policy that each release only improves on those numbers. I can’t think of any user-visible changes to Jira or Confluence in the last 5 years which I wouldn’t trade in a heartbeat for good performance.
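For what it's worth, browsers will hand over most of this for free now. A sketch of flagging slow interactions with the Event Timing API (Chromium-only; the 200 ms budget is just the threshold suggested above):

    // Report any user interaction whose input-to-paint time blows the
    // budget. In production you'd ship these to a telemetry endpoint.
    const BUDGET_MS = 200;
    new PerformanceObserver((list) => {
      for (const entry of list.getEntries()) {
        if (entry.duration > BUDGET_MS) {
          console.log(`slow ${entry.name}: ${entry.duration.toFixed(0)} ms`);
        }
      }
    }).observe({ type: 'event', durationThreshold: BUDGET_MS, buffered: true });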


Hi acdha,

Thank you for the advice - how about on the page load side? Our biggest problem is probably the variance issue (mentioned in other subthreads) -> we can't easily tell what is the difference between a slow and a fast load in many cases.

Even if we compare things that are available from the metrics, like CPU, memory, and network speed, those are not very granular (for example, we can't tell that someone with 16 threads was actually at 95% memory usage during that page load), and they have little correlation at a wide level with page load speed.


I'm sorry, but as a JIRA user since 2008, I can tell you your software has always been slow. I used to like that I could run your software on-prem and configure issue fields etc., but now you have so many layers of crap and "pretty" that it's not surprising you can't tell what is fast and what isn't.

It is not your customers job to instrument your software. Your API gateway can provide precise and accurate figures on how long API calls take and there is nothing stopping you adding web page metrics that can provide client-side measurements as well.

Some examples are available on the publicly visible JIRA boards, like the one for hibernate. Just go click on all issues and then click on any issue in a private browser window and with the cache empty.

Every one of the fields takes seconds to load. That is not internet round-trip time; that is your backend. Even when the issue is ~80% loaded (according to your own page load bar), there are still JS scripts that load and reformat the page, causing the browser to reflow.

These are not cached, because loading another issue doesn't resolve the problem.

So there are fundamental front end problems that have nothing to do with the servers or backend, they are entirely a problem of the JS and the in-browser activities.

Fix them.


> but we've also heard from other customers saying they have no problems

I am sorry, but those are probably customers who are simply used to the tools. Maybe they don't pass the Mom Test. That point comes across as unnecessary defensiveness here.


> but we've also heard from other customers saying they have no problems

Can I suggest following up with those customers to see if and how they're using the product, what their computer configuration is, and whether there's anything special about them?


There's a bit of verbal sleight-of-hand going on - probably not intentionally.

"This is very slow"

"We have no problems"

These aren't addressing the same things, really (unless the OP was translating "we're happy with the speed of the entire system" as "we have no problems").

Are the people reporting "no problems" actual end users? People I know who've become acclimated to Jira would happily respond "we have no problems", while the people below them, who have to use Jira 10x more often (multiple times per hour vs. a daily look at progress, for example), would happily say "this is slow as molasses (and that's a problem)".


Jira obviously has systemic performance problems that a few users have fast enough computers or networks to push through. There will be no shortcut to "targeted fixes" for "greater gains".

Persistently asking for particular workflows, as you've been doing throughout this thread, shows a failure to understand the scope of the problem. In fact it makes me wonder if your paycheck depends on not understanding it. It sure seems like someone's does.


So the language highlighted here is:

(i) publicly disseminate information regarding the performance of the Cloud Products;

But can we take a minute to talk about the combination of these two :

(h) use the Cloud Products for competitive analysis or to build competitive products;

(j) encourage or assist any third party to do any of the foregoing.

Does this mean that, as a Jira user, I can't help build a Jira competitor for a client of mine (we are a tech agency)? If that's the case, I would really have a hard time using Jira and staying compliant. After all, who is the arbiter of what Jira is, exactly? And does this also mean people can't write reviews comparing the platform to other platforms? I'm a bit speechless here; it's a wtf sort of thing.


Imagine if products prior to the 90s, or outside software, had these sorts of "agreements". Every store in a mall would have them, and every piece of fruit in a grocery store would require you to agree to arbitration clauses, privacy policies, non-disclosure, and non-competition. Consumer Reports and class action suits would not exist, and nobody would really be allowed to talk about it because of the NDAs. Automated facial and voice recognition in smart home devices could sell data to companies to enforce it. The news would not be able to talk about it. It would be a good setup for a dystopian movie, no?


Not just Jira, but also all Atlassian products which includes HipChat, Trello, OpsGenie and a load of other products.


Also, they changed the terms in MIDFLIGHT. At the very least, apply this only to new customers, who still have a choice about whether to accept it.


Just so everyone is aware, this is Atlassian's stance on that language, taken from the internal guidance on the ToS. To be clear, I'm not defending this stance, as I think it is flawed. But I wanted you all to know what Atlassians are told about it:

------------------------------------------------------

Section 3.3: Benchmarking

Can you explain Atlassian's stance on benchmarking?

Like many other software companies, Atlassian has this language in its terms to protect users from flawed reviews and benchmarks. By requiring reviewers to obtain Atlassian’s consent before publicly disseminating their results, Atlassian can confirm their methodology (i.e. latest release, type of web browser, etc.) and make sure they deliver verifiable performance results. Atlassian is not trying to prevent any users from internally assessing the performance of our products.

The language related to the public distribution of performance information has been included in our customer agreement since 2012.

Customers can obtain Atlassian's consent by filing a Support ticket. The Support engineer will then need to bring in a PMM for approval of the data/report.

------------------------------------------------------


That explains the stance but is not sufficient to justify it. It gives Atlassian infinite power to stomp on any benchmark that shows poor performance under a claim that it is flawed. It is also irrelevant that it's been in your ToS since 2012: Precedent or longevity do not make consumer-unfriendly restrictions acceptable.

This is implicitly recognized by allowing internal assessment: That assessment would be just as vulnerable to flawed methodology and therefore flawed decision making on products. If you were that concerned over such issues, you could issue further restrictions on performance assessment that limited such activity to be conducted only under Atlassian's close review or using your own mandated methodology. One reason you probably don't do that is because potential buyers would balk at those restrictions and either pass on your product or responsibly engage in due diligence and perform their own assessments regardless.

Further, the resources to do extensive internal assessment may be lacking in many organizations, which means your provision to allow internal testing is meaningless to many customers. As a result, the prohibition against public disclosure thereby deprives them of any way of obtaining objective external analysis.

You could satisfy your concerns by requiring that public disclosure be reviewed by Atlassian prior to publication. You could require an option for Atlassian to comment on the results with embedded notations without restricting publication itself. That would still be heavy handed but at least allow a reasonable amount of independent review of your performance.


Those suggestions in your last paragraph look very reasonable to me. At GitLab we explicitly allow performance testing as our ninth and final stewardship promise (https://about.gitlab.com/company/stewardship/#promises). But I recognize there is a trade-off, and companies can reasonably put the balance at different points.


There is no tradeoff.

DeWitt clauses are corporate censorship and are 100% self-serving.

There is zero benefit to consumers having no benchmarks at all available for entire product categories.

There is an enormous benefit to corporations to be able to silence critics with the threat of bankruptcy via lawsuit.

This is big corporations using the law to bully journalists and citizens, nothing more.


Pretty much, yes. That's why I included an example of how they could allow transparent assessment while still addressing their stated concerns: the fact that a solution exists shows the stated concerns are just a BS smoke screen. (Not that my proposed scenario is the best case for consumers, just that there's a way they could allow public benchmarks without sacrificing their concerns.)


> Atlassian has this language in its terms to protect users from flawed reviews and benchmarks.

The solution to lies is not censorship, but transparency.

Atlassian has all the resources in the world to answer any external benchmark done by a third party.

If you can hire an army of lawyers, surely it's possible to have a full-time engineer running benchmarks.


It's pretty clear in context that they are discouraging detailed performance benchmarks as a proxy for reverse engineering. Not some kind of gag on complaining that "JIRA is too slow".


> protect users from flawed reviews

Perhaps California will address this problem by banning "performance benchmarking platforms" from listing evaluated products without an agreement from the vendor... [1]

[1] https://news.ycombinator.com/item?id=25601814


Both cloud and on-prem user here. Both systems are slow; the hardware doesn't scale. Simply browsing issues easily takes 2 to 5 seconds per click. Confluence is okay, I guess; it's just that the slowness is annoying. That, and trying to format text properly.

When you've worked with Azure DevOps or GitHub before, Atlassian tooling is really a blast from the past.


Yep. I was a company's official JIRA and Confluence on-prem JVM restarter, mail-queue unclogger and schema unfucker for 5 years. When they moved it to Cloud it was the best day of my life, because it wasn't my problem any more, even though the performance was even worse.


If you start using Linear you won't need a benchmark to notice the difference.

This is what happens when the "clueless" start "innovating". I've had several conversations over the years with members of Atlassian technical teams. They always wanted to work on performance, but were never allowed to (priorities).

They are in "good" company (Oracle, China, etc.). What's preventing anonymous performance benchmarks, though?


> I've had several conversations over the years with members of Atlassian technical teams. They always wanted to work on performance, but never allowed (priorities).

For what it's worth, it's a pretty significant company priority now. I recently had my project (dark mode) cancelled[1] so we could dedicate engineering effort to performance.

[1] https://jira.atlassian.com/browse/JRACLOUD-63150?focusedComm...


Linear works great on chrome for me, but trying to use it on Firefox, it's hopelessly broken. Clicking 80% of things does nothing.


what do they work on? how many other features can you bolt onto an already mediocre wiki? or is it just bugfixes, keeping up with browser quirks, and useless UI "redesigns"/reskins?


As awful as Confluence is, it's not really a mediocre wiki. It's the best I've used (out of Confluence, Notion, MediaWiki and some god-awful internal thing based on WordPress). Its interface sucks, the performance sucks, the editor sucks, search sucks, and it's still the best.


Point still stands, right?


I don't think so. If it's mediocre, then it's mediocre compared to something else, but in reality it's best-in-class. It just turns out that the opposition isn't exactly bringing its A game.


so what do they work on?


Every Atlassian product I've used has had scalability problems. Instead of trying to hide them, they should work on fixing them.


If only there was a product that helped size up work and let teams manage a backlog of features. :D

But I agree totally.


I have no idea if it applies in this case, but sometimes terms like this are there because competitors have them.

You have product X, and your competitor Y publishes whitepapers and ads comparing their performance to yours, showing yours is terrible. You think they rigged the tests, and want to publish your own X vs. Y comparison but Y's TOS prohibits it.

Once one major Y does this, many others follow suit as a defensive measure against Y.

I seem to recall seeing some where X's ToS would try to limit the prohibition to just that situation, such as saying that you could publish X-vs-Y performance comparisons only if there were no restrictions on others publishing Y performance comparisons, or if Y granted X an exception to its prohibition.


We've established Jira/Confluence/etc. are crap.

Now, let's talk alternatives.


Why is this not higher?

Every atlassian thread should include a sticky "Go here for an alternative" post


I have yet to use a JIRA or Confluence system that isn't almost insufferably slow while also suffering from a terrible UX. They seem to be the IBM of project management and documentation in the software world, though.

Also, for users who aren't acting as representatives of the employer that purchased or installed JIRA/Confluence, it is perfectly fine to discuss performance issues such as the above. The law isn't as dumb as some seem to think.


This is certainly a strange thing to have in there, as we've had public discussions about performance before and I assume no one's accused those customers of violating the ToS.

I'll see if I can find someone inside Atlassian to talk about this part of the ToS

(edit: Looks like other users have found similar clauses in other companies, so it seems like it might be standard legalese. Will still see if I can find out more)


How bad does your performance have to be before you explicitly prohibit people using your service from talking about it?

I'll wait until they discontinue the service for a customer because they did not respect this ToS section.


Jira was a disaster for our team, but the response we got as a major government contractor from Atlassian was so bad that we swore off all Atlassian products. When I would bring performance issues to the technical support team it was like trying to convince Apple there was a problem with the butterfly keyboard. Like talking to a brick wall.


I'm more concerned about 3.3(c) "[you will not] use the Cloud Products for the benefit of any third party".

Surely if I'm tracking bugs in Jira, that is to the benefit of my users?

What if I am using post-it notes to keep track of a client's requests and they demand I use Jira instead because I keep losing the notes? It's very much to their benefit that I use the cloud services...


I have not used Confluence recently myself, but having created an open-source project in the same space, I have seen a few users migrate from Confluence citing speed as their main frustration and reason for migrating away. That has been the case for a couple of years, so I'm surprised it has not been more of a focus for them.


Someone went to the trouble of making a semi-functional mockup to demonstrate to Atlassian that the product didn't need to be as big and slow: https://jira.ivorreic.com/


Just running atlassian.com through Google PageSpeed shows why they're doing that: 12 out of 100. That is really emblematic of other Atlassian products as well.

And it's really not hard to score over 95 on PageSpeed; just don't use JS fucking everywhere.
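
If anyone wants to check that score for themselves without the web UI, the public PageSpeed Insights v5 API makes it a tiny script (a sketch; no API key should be needed for occasional use):

    # Fetch a PageSpeed performance score via the public PSI v5 API.
    import requests

    resp = requests.get(
        "https://www.googleapis.com/pagespeedonline/v5/runPagespeed",
        params={"url": "https://www.atlassian.com", "category": "performance"},
    )
    resp.raise_for_status()
    score = resp.json()["lighthouseResult"]["categories"]["performance"]["score"]
    print(f"performance score: {score * 100:.0f}/100")  # score comes back on a 0-1 scale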


I’m enjoying the irony of willful TOS breaking here, but y’all probably shouldn’t be openly declaring yourselves as subject to enforcement actions under accounts with names attached to them, even if Atlassian’s own guidance (see elsethread) suggests they don’t care about your comments.


I thought Jira was slow; then our team started using ServiceNow. Holy, that is junk.


ServiceNow has an incredibly clunky UI in terms of actually finding what you want, but comparing our JIRA Data Center and ServiceNow setups (the latter is under a servicenow domain, so I assume some sort of cloud setup), with several hundred thousand issues in each, ServiceNow is actually pretty fast in the "you click an action and it does its intended purpose" sense, where JIRA falls down. It's slow only in the "you need 5 actions to get to the thing you want" sense.


Interestingly, ServiceNow is twice the market cap of Atlassian.


IMO their terrible performance is the #1 reason not to use their cloud services, and there's usually nothing you can do about it. The fewer resources they use for each customer, the higher their margins.


The obvious way to deal with this is to publish a GitHub repo that contains a performance test harness and test plugins that run against Atlassian servers. Make it trivial for any customer to download and use. (I'm going to guess that Atlassian has some license terms about API use that could make this tricky.)
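
A minimal sketch of what that harness could look like, assuming Jira Cloud's documented REST API with API-token auth (the site URL and credentials below are placeholders):

    # Time repeated Jira Cloud REST calls and report latency percentiles.
    import time
    import statistics
    import requests

    BASE_URL = "https://your-site.atlassian.net"   # placeholder
    AUTH = ("user@example.com", "your-api-token")  # placeholder API-token auth

    def timed_get(path, n=20):
        samples = []
        for _ in range(n):
            start = time.monotonic()
            resp = requests.get(BASE_URL + path, auth=AUTH)
            resp.raise_for_status()
            samples.append(time.monotonic() - start)
        return samples

    samples = timed_get("/rest/api/2/myself")  # a cheap authenticated endpoint
    print(f"median: {statistics.median(samples) * 1000:.0f} ms")
    print(f"p95:    {statistics.quantiles(samples, n=20)[18] * 1000:.0f} ms")

Of course API latency is only part of the story; the front-end slowness discussed elsewhere in the thread would need a headless-browser harness on top of this.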

Simultaneously, tweet the results from an anonymous Twitter account, post to Hacker News, get picked up in the trade press, and make it big enough that it's embarrassing for Atlassian when a bunch of customers measure latency and realize the product is crap.


I've had similar ideas for shedding some much-needed light on other performance pain points that large corporations are sweeping under the rug via legal means.


The most innocent explanation is preventing benchmarks against similar tools which could be unflattering. Though the only other company that resorts to such insecure measures is Oracle.


The comment right next to yours is a link to cloudflare's TOS with the exact same provision. It is not just Atlassian and Oracle.


I meant to include "that I know of" but was too lazy to edit. Anyway, it's still a bad sign IMO. Signaling desperation instead of concern for user experience.


Way more companies do that than just those two.


From Tableau's terms of service:

https://www.tableau.com/tlc-terms

3(g): "publicly disseminate information regarding the performance of the Tableau Beta Products."

3(f) is also identical, but they otherwise don't look that similar.

They're both > $50B companies, so you'd think they could get their lawyers to use different verbiage..


I don't want to sound mean, but before discussing performance issues, can we talk about how completely unusable all of Atlassian's products are? I mean this sincerely. I feel bad saying it because I know there are lots of people probably on this site that worked on them. But I literally often have no idea how to proceed when using these products. That's something that hasn't happened to me since before I was a teenager.

As 2 random examples, I've used both Confluence and Crucible in the last month. In Confluence, when I log in I see everything that anyone in my (1,000+ people) organization worked on most recently. People I've never even heard of show up as having edited some random document that doesn't concern me. Meanwhile, I can find no way to list all articles that I created. There's a small list of things I edited or looked at in the last 30 days, but no way to say, "just show me everything I created."

Meanwhile, in Crucible, I literally can't figure out how to do anything. I'm reading through the changes to some source code and adding comments, and after an hour of doing that, it still says I've completed 0% of my review. WTF? And when I start my own code review, every god damned time it tells me, "You're about to start a review with no reviewers." It then offers me 2 choices: abandon the review, or cancel. I get what "abandon" does. What does cancel do? Cancel the review? Cancel abandoning the review? Why is there no button right there to add reviewers? That's what I most want to do! (And there are no reviewers yet because it literally has not yet given me the option to add them earlier in the workflow. WTF?)

You can talk about performance all you want. I won't bother until the products actually perform some useful function. As of now, as far as I can tell, they don't.


As much as I don't want to defend confluence here, or anywhere...

The front page of your confluence instance's default space was configured by someone to show the all-activity feed.

You should be able to go into your profile and set your default space to a more specific and useful space, maybe your team's space.

While you're looking at your profile, you should also see a more tailored activity feed.

I can't help you with crucible.


Simple English Translation: We cannot trust any publicly posted claims about product performance, since they have effectively been cherry-picked by the marketing / legal team.

Bad claims can be taken down, thus the only remaining claims are good ones.

Cool - can anyone provide a quick list of alternatives?


As a user without a contract I’d like to point out that the performance is universally shitty :)


The specific part of the ToS being referred to:

(i) publicly disseminate information regarding the performance of the Cloud Products


Most, if not all, users of these applications aren't even in a position to review or accept the terms. When I was presented with the terms dialog, I asked our legal department what I should do, since obviously I am not an agent of the company. They said "just click accept". Unbelievable.


So much for SLAs?


We replaced our SLAs with NDAs!


And our DBAs with MBAs!


JIRA IS SLOW.

sue me.


I don't read section 3.3(i) as preventing criticism of performance; rather, it prohibits the release of performance benchmarks without permission from Atlassian.

Having been on the receiving end of competitors running 'benchmarks' on a service I worked on, and then trumpeting the very contrived and out-of-context figures, I can understand why Atlassian is trying to prevent that from happening to them.

Pity that it probably won't work.


If your competition is misrepresenting your product in benchmarks, can't you just sue for defamation?

There's also the option of publishing counter-articles.



