> In my case, I have a superior who administers Sonar and is, let’s say, completely committed to it. For any ‘exception granted’ we would have to book time with them days in advance then white-board the reason why Sonar is wrong, or produce a sample program - who has got time for that with tight deadlines?
This superior (sic) is what a negative productivity employee looks like.
We frequently hit the issue that, when a refactor moves code and its tests to a new file, Sonar takes away our previous "credit" for code coverage percentage, dropping our project below the threshold and failing the build.
The only workaround I've found is to create a new function, fill it with useless no-op lines, and write a test for that function, just to bump the percentage back up. This is often harder than it sounds, because the linter will block many types of useless no-op code. We then remove the code as part of another ticket.
Heh, at one place I wrote some Java code that used reflection to test the getters/setters in a POJO so that it wouldn't end up with 0% code coverage.
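The commenter's trick was in Java, but the same idea can be sketched in Python: enumerate matching getter/setter pairs via reflection and round-trip a sample value through each, purely so the accessors count as executed. This is an illustrative analogue, not the commenter's actual code; the names (`Person`, `exercise_accessors`) are made up.

```python
class Person:
    """Hypothetical POJO-style class with trivial accessors."""
    def __init__(self):
        self._name = None

    def get_name(self):
        return self._name

    def set_name(self, value):
        self._name = value


def exercise_accessors(obj, sample="x"):
    """Call every matching get_*/set_* pair via reflection, so the
    accessors count as covered without writing a test per property."""
    for attr in dir(obj):
        if not attr.startswith("set_"):
            continue
        prop = attr[len("set_"):]
        getter = getattr(obj, "get_" + prop, None)
        if getter is None:
            continue
        getattr(obj, attr)(sample)   # invoke the setter
        assert getter() == sample, f"round-trip failed for {prop}"


exercise_accessors(Person())
```

The whole point, of course, is that this "tests" nothing interesting; it only games the coverage percentage, which is exactly the pathology the thread is describing.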
Those suck when maintaining, investigating, and debugging code. They break IDE functionality too: you can't put breakpoints in them, and they don't display properly in views.
It trades practicality away for useless aesthetics.
IDE auto-generation is frozen in time. You have a bunch of useless boilerplate you still need to look at to make sure nobody did something unexpected in it.
When they're annotation-generated or reflection-based, only the really interesting ones remain in the source code to care about.
The root issue is the superior not having a clue. Sonar, in this case, is sadly enabling this type of superior to be even more harmful, not only to the developers, but to the actual business of the company.
And Sonar is far from alone in this; JIRA is the most glaring example I can think of. Growing companies implement cargo-culted tools without understanding their needs and requirements, then drift into templates or "best practices" that are neither relevant nor beneficial to their own operations. The result is a sum of frustrations whose impact on the work and the teams they acknowledge only far too late.
Consider the care you need to inject not only into your tools, but into how they are apprehended by both your customers and their primary users (who may have very different, if not opposed, perspectives on how and why to use them), from pricing, to documentation, to use cases...
This is especially complex when your tool answers a regulatory requirement, because it's very often received as a constraining/oppressing "solution" rather than an enabling one: it may be comfortable for you as a seller, and comfortable for your customer, but it may also be an anti-selling point to your (customer's) users that will affect future consideration when they become purchasing agents themselves.
Yes... but how often have you experienced a non-clueless linter?
Some tools bias people into doing bad things. It's not exactly the tool's fault, and they may even have good uses (like Bash linters), but tools guide people, and it's good to remind people not to follow them blindly.
>SonarQube (formerly Sonar) is an open-source platform developed by SonarSource for continuous inspection of code quality to perform automatic reviews with static analysis of code to detect bugs and code smells on 29 programming languages.
No matter the industry, it sucks to have someone managing by dashboard. In classic post modernism, the signifier replaces the signified. If anything, it’s a great signal to the employee that it’s time to start looking elsewhere.
A hyperreal workplace is one where representations of reality take precedence over reality itself, and the reality of a situation ceases to have meaning. i.e. One where people chase metrics for the sake of metrics, instead of understanding that issue count is supposed to reflect the underlying code quality, and code quality should always take priority over the representation.
The broader philosophical context is relevant because it shows the broader cultural problem instead of assuming the issue is limited to a single tool.
> I don't get why people hate on post-structuralism.
Because, rather than recognizing that overdoing abstractions is a thing and reminding people that there is always a reality out there that won't bend to your wishes, post-modernism (which is the term the GP used) tells people that there is no reality out there, that it's all just human-created abstractions, and that anyone who tries to push back is just insufficiently post-modernist.
That's typically _not_ what post-modernism is, that's at best a misunderstanding, but more likely a strawman.
The only point on which all post-modernists agree is a refutation of meta-narratives (and to be explicit: "there is no reality out there, it's all just human-created abstractions" is itself a meta-narrative).
A very, very charitable interpretation is that you are maybe conflating it with the Frankfurt School of critical theory, because that is also somewhat used alongside/based on psychoanalysis (sigh) (later postmodernists use psychoanalysis far less, and post-modernism isn't built on it, contrary to critical theory). Postmodernism is mostly post-Marxist, though, while critical theory is mostly neo-Marxist imho (also, I don't want to be too critical of the Frankfurt School; I think most of its bad rep is caused by bad vulgarization/pop-science, and most critics I read don't seem to understand why it's wrong either).
I do think the reason most people conflate the two is an idiotic Canadian psychoanalyst who can't read (or at least can't understand what he reads), who _clearly_ has no degree in literature or philosophy, and tries to appear smarter than he is. He invents citations from the books, and sometimes states that Derrida means something when Derrida himself wrote the opposite. An 8th grader would do better on their reading-comprehension assignment. He is wrong. Read and think for yourself.
It's not what post-modernists typically say post-modernism is. But I'm not relying on what they say it is. I've read enough post-modernism to form my own opinion.
> It's not what post-modernists typically say post-modernism is
Worse, it's the total opposite of one core tenet of postmodernism. It's very hard for anybody honest to argue that postmodernism has a metanarrative, when the only thing all its authors agree on is that metanarratives are to be recognized and refuted.
> I've read enough post-modernism to form my own opinion.
Who did you read? Deleuze? Derrida? Foucault? Baudrillard?
I will always advise people to read Baudrillard first; I think he is somehow misunderstood in the Anglo world, but it might be mistranslations. Then Deleuze, then Lyotard and Foucault, as Derrida is too dispersed imho, and way too complex: imho you have to read the articles where he explains his books, alongside his books [edit: and to be fair I still don't think I really get Derrida; he's very recognized but to me he is very obscure, probably the weakest imho. I also disagree with a lot of what I understand from him, except his method, so that might be my priors preventing me from really getting it].
If anyone would rather read a novel to try to grasp what postmodernism is about, I think "L'Amour" by Bégaudeau is the latest (100 pages, really short and sweet), and the one that is still in my mind when I think about postmodern materialism.
Where did you get the "there is no reality out there, it's all just human-created abstraction"? Which book exactly? Because I can't remember a single one claiming that. Maybe 'Simulacre et simulation' has a point that's close to this, but it's about consumption and signifiers, and certainly not about 'all'.
I don't think I remember anything from Lyotard about that; he talks mostly about legitimation and fragmentation. Weirdly, the only place I could see him talking about 'all just human-created abstraction' is in his criticism of metanarratives (and in that case, replace 'all' with 'metanarratives' and it's pretty close, but also very obvious).
Foucault isn't a postmodernist (he predates them), but he might have talked about that. I remember him talking about objective truth not existing, but it was in the context of words' meanings, and while a Canadian dumbshit took his words out of context, it's quite clear it's not about the physical world, but about meaning. Said Canadian proved him somewhat right by misquoting him on purpose. Also, it's clear he lacked Wittgenstein's insights on how we describe reality; I would have loved to read Foucault with analytical-philosophy insights.
Derrida, I don't know tbh, I don't think anybody but himself understands him, but I do not remember him writing about anything but art or philosophy.
> It's not what post-modernists typically say post-modernism is. But I'm not relying on what they say it is. I've read enough post-modernism to form my own opinion.
You are relying on what someone who earns money from making people angry about postmodernism says it is. Outrage culture and addiction.
Generally, when you want to know what postmodernism is, you should read what postmodernists say. And if you want to know what Nazism is, you should include readings of Nazis.
Which is exactly what I did, as I explicitly said in my post. I just don't rely on what they say postmodernism is, as authoritative about what postmodernism actually is. Reading what people say is not the same as accepting what they say at face value.
You need to learn to game it. People who care about this stuff are usually stupid. When I led a large support organization, I had an SVP who really cared about open incident duration.
His pattern for giving a fuck was predictable. My strategy was to hold certain tickets in an undead state (not impacting the metric), then reopen them and close them, demonstrating a metric improvement.
He got his improvement and big shot street cred, users weren’t impacted, and I didn’t have to ruin support to try to grind out small gains.
It's less stupidity and more "if we don't prove that we meet x control in y compliance framework, our auditors will deem us non-compliant and we will be fined into oblivion"
I think rather than calling it post-structuralism or hyperreality, we can just look at Goodhart's law[0] for the general understanding that metrics cease to be useful when they become the thing that leadership looks at. Like Spooky23 and others are saying in other comments, once you learn that it's all a game, you can use it to your advantage.
Whenever I want to feel completely stupid, I just open a Wikipedia page on some philosophical term/idea and go down a rabbit hole of links that it's a concept in/argument against/etc.
Just completely impenetrably baffling to me in a way that other fields like chemistry or microbiology or physics or whatever (despite also not being my own) aren't. Not that I understand them, but they're penetrable, I can read more and more and form some kind of understanding.
Is it just me? I don't know what it is, can it really be as simple as philosophy not being taught at school (compulsorily, or young) so I don't have that kind of rough overview of the landscape I do for other broad subjects? (I did take one course in 'contemporary philosophy' at university, which I enjoyed, but we covered only what we covered I suppose - I might be able to hold a (very) basic conversation about Sartre or Wittgenstein, but that page on post-structuralism.. no idea!)
It can be any number of things, but the lack of a high-school curriculum is unlikely to be one of them, in my opinion. Rest assured, you're far from the only one struggling; I'd say most people I've encountered in academic philosophy tend to struggle a tad more with it than with (their) other fields of study. It is what it is, really; it sort of comes with the territory.
Of course there are just some that are less accessible than others, due to writing style or size of their philosophical project (Hegel is an example of both of those qualities). A lot of French philosophy since the second world war, Baudrillard being no exception, is generally characterized as such as well amongst the anglo audience, although I don't think that this is entirely fair.
I'd say the best thing you can do is never attempt to understand it through Wikipedia, but pick up a full book instead and read it a second time if the argument doesn't make sense the first time. Of course there are some authors I would avoid as a beginner, but someone like Kant is fine for even your first philosopher, and is amongst the biggest names in modern philosophy. Prolegomena and Critique of Pure Reason are two books of his about the same thing written in two opposite ways, the former from easy to difficult, the latter vice versa, I always recommend those.
Sartre and Wittgenstein are both somewhat odd for a contemporary philosophy course. I'm curious why they chose that arrangement. Nevertheless, being able to hold a conversation about either of them is already quite solid, plus you get three philosophers for the price of two! :)
- https://philpapers.org/archive/KAMCYB.pdf which made me realize an intuition I'd had since reading the first book (I think the writing is better and clearer, but it might be because the author isn't USian/English and doesn't try too much)
More than likely it is "just you", the page on post-structuralism seems clear enough to me (as far as I can tell, the post-structuralists are criticising structuralism because they don't think the structures are sufficiently powerful). What that means in detail is unclear - understanding a position and thinking it is obviously silly is a completely valid stance when dealing with philosophers. Or just having no interest in the questions philosophers often ask (for example, if structuralism seems to be fundamentally invalid, then bothering to take a post-structuralist stance to criticise it requires a certain type of pedantic and argumentative mind). Or the easy explanation, which is misunderstanding [0].
I liked the Barthes example on the post-structuralist page - if a text's author doesn't necessarily have the authority to assert the meaning of a text, then the idea that the text is necessarily part of some identifiable structure is open to question. I assume that means the same text might fit into multiple contexts with different meanings, and trying to fix it with one static meaning based on its initial context is doomed, which suggests structuralist critique is either insufficient or overly reductive.
[0] Although arguably all of philosophy is people misunderstanding each other; otherwise it may as well be a settled field.
Ah, sorry. The intended parsing turned out badly. That is meant to be "More than likely it is "just you". What that means in detail is unclear." Ie, what it means for it to be "just him" is not clear.
In hindsight I like the irony that the statement was also unclear, for all that it doesn't do my comment any favours.
One is basically stochastic parroting, and the other is dealing with reality.
Philosophy has no end-game or practical applications. You can make anything up, and so long as enough souls latch onto it via pattern recognition, you have achieved memetic reproduction.
With hard sciences, you can talk all you want, but if your hypotheses are consistently disproven, only the untrained and deranged will latch onto your ideas.
There is nothing to penetrate in philosophy. It's not a reflection of reality, but a reflection of the people it captivates.
It's very little different than music, or any other sort of entertainment. Dare I call it an art. In that case, I would say its recent interpretations are lacking.
A personal aside: much of this era's approach to philosophy reminds me of Fabianism -- wretched, cowardly, and completely superfluous to living an integrated life.
I'm not sure about every industry but it becomes especially tricky with software. I worked at a manufacturing plant and they put dashboards on monitors everywhere so the whole plant was always aware of their performance. This was easy to measure because they made physical goods! How do you measure output and performance in software?
Sonar doesn't seem to really work in my limited experience. It adds a lot of time to builds, at least in the cases I've seen, while there are alternate linters or code-quality tools capable of doing the same at a fraction of the time. Build times and development speed matter!... They matter a lot. You need a quick feedback loop.
I use Sonar all the time, but not during build. It runs live while I'm editing a file. I've not noticed any slowdown at all, and it's certainly a quick feedback loop (it runs when I save the file).
I've found the majority of its suggestions helpful, and the ones that are not I simply ignore.
It adds about 2 minutes to our gitlab pipelines but the major issue with it is when organizations decide failures should prevent merging code to master or even deploying to a QA environment.
That's the real time sink - figuring out how to get past it. It's a lot more than 2 minutes, sometimes even days if it's something you can't work around and have to go through the red tape if your team isn't empowered to take charge of your own pipelines.
I should have mentioned that I was actually referring to the continuous-integration pipeline, not the build itself; not very well explained on my side. I've never used it locally myself. To be honest, I don't really know why the CI setups using Sonar that I've seen in the past were that slow.
This sounds truly hellish, like being controlled by a stupid robot straight out of Kafka's "The Trial." They should allow special one-off exceptions that are documented with a comment, similar to how you can disable Ruff warnings in Python code for a single line.
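For what it's worth, the in-source escape hatch the commenter wants does exist in both worlds; the sticking point in the OP's case is a policy that forbids using it. A minimal sketch of Ruff's per-line suppression (the rule codes `E401` and `BLE001` are real Ruff/flake8 codes; the function itself is made up for illustration):

```python
import os, sys  # noqa: E401  (two imports on one line, kept deliberately)

def lenient_int(value):
    """Parse an int, returning None on any failure."""
    try:
        return int(value)
    except Exception:  # noqa: BLE001  (broad catch is deliberate here)
        return None

print(lenient_int("3"), lenient_int("oops"))  # prints "3 None"
```

SonarQube has an analogous in-source marker: appending `// NOSONAR` to a line suppresses the issues raised on it, and the comment itself documents that the exception was intentional.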
I'm not sure how much value sonar adds where I work (dotnet). It enormously affects build times, and I've yet to experience a single true positive in 2 years (apart from the code coverage dashboard). The amount of MRR you can generate by vaguely being related to mitigating vulnerabilities is incredible.
I worked at a place that was full of junior contractors who had a large incentive to ship and no incentives for support. Sonar was good about finding bugs that they should have fixed but didn’t give a rip about (e.g. not closing database connections on all paths)
Any sort of static analysis/linting tool will occasionally make bad/unhelpful/stupid suggestions, some more so than others. At every place where I've had to use such tools, I've always had the ability to either tweak the settings or have a conversation with the people who decide them and make appropriate changes. In this case, it sounds like bad culture and/or bad colleagues are driving this person to despair, not Sonar per se.
Well, we built Trunk Check to address some of these issues. Maybe it'll suit your org better.
- We support hold-the-line: we only lint on diffs so you can refactor as you go. Gradual adoption.
- Use existing configs: use standard OSS tools you know. Trunk Check runs them with standardized rules and output format.
- Better config management: define config within each repo and still let you do shared configs across the org by defining your own plugin repos.
- Better ignores: You can define line and project level ignores in the repo
- Still have nightly reporting: We do let you run nightly on all changes and report them to track code base health and catch high-risk vulnerabilities and issues. There's a web app to view everything.
I sympathize with the OP. Having said that, I’ve rolled out SonarCloud to two different companies and I would not hesitate to roll it out to a third if given the opportunity.
Initially, people always come out of the woodwork insisting that the gate requirements must be hard blockers and that we can just hand wave away the issues OP listed by tweaking the project rules. I always fight them, insisting that teams should be the owners and to gain quick adoption it should just be considered as another tool for PR reviewers. Eventually, people back off and come to accept that Sonar can be really helpful, but at the end of the day the developers should be trusted to make the right call for the situation. It’s not like we aren’t still requiring code reviews. I feel for OP, but it’s not Sonar’s fault the tool is being used for evil instead of good.
The last time I implemented SonarCloud, I took an anonymous survey to get people's opinions. For the most part, people liked the feedback Sonar provided. The more junior and more senior engineers liked it the most; midlevel engineers, not so much. The juniors liked getting quick feedback prior to asking for code reviews. The more senior engineers - who spend a lot of their time doing PR reviews - liked that it handled more of the generic stuff so they could focus on the business logic or other aspects of the PR. It's just another tool in the toolbox.
There is no anonymous survey, given the metadata available to an organization as a matter of routine. Between your ops people and just local knowledge, unmasking can be a trivial affair.
I saw a case where Sonar analysis was being requested by a government agency where software was built by consultants. From the government agency's point of view it made some sense to ensure that the code delivered by the consultants wasn't full-on spaghetti.
However, I saw it causing similar turd polishing behaviour: Sensible code needing to be changed because it exceeded some obstinate metric, any kind of code movement causing existing issues to appear as "new", false positives due to incomplete language feature support, etc.
Isn't the actual issue here the superior managing the sonar being a controlling jerk? Turning off the rules on sonar is easy, technically. The issue is social.