I refer to these people as the "glue" between two departments.
At big companies, this is mostly a management issue.
Questions like these are common:
1.) Is this person part of the design team or the engineering team?
2.) Who does this person report to, and how is their sprint planned?
And, as a company grows in size, managing people becomes a more important issue than focusing on minor product details.
Organisation is always a compromise. Rather than seek the perfect structure, it’s better I think to address what makes structure get in the way of doing the right thing. Better structure may follow, but it’s a constant fight. Different challenges call for different people to work together.
We have built something along these lines. Crusher [1] (demo [2]) is an e2e testing framework built on top of Playwright; you can trigger tests with the click of a button.
We're thinking of adding a watch mode in the coming months, running tests continuously in real time. I'll send you an email to exchange ideas.
P.S. There might be a few typos; it's a work in progress.
A React dev with 4 years of experience here. I've spent a decent amount of time working with different styling approaches.
I'll try to answer a few common questions for everyone's context.
Why even use CSS in JS?
- SPA bundles usually load all CSS at once, so styles from different modules collide. You either need to be very disciplined about naming, or load CSS per module. Scoping is helpful here.
- Not having to jump from your component file to a CSS file saves some context switching and a few keystrokes.
CSS modules solve all of these problems (except the separate file), and also let you write vanilla CSS without having to hack around the limitations of css-in-js. Dynamic styles are easy with Emotion's `css` prop, or by simply passing in an additional class name.
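For context on why scoping fixes the collision problem: CSS Modules rewrites each local class name to a unique, file-scoped one at build time. Below is a rough sketch of that idea; the hash function and naming pattern are illustrative, not the real css-loader implementation.

```typescript
// Minimal sketch of what CSS Modules does under the hood: each local class
// name is rewritten to a unique, file-scoped name so styles can't collide.
import { createHash } from "crypto";

function scopeClassNames(
  filePath: string,
  css: string
): { css: string; mapping: Record<string, string> } {
  // Derive a short per-file hash so two files can both declare `.button`.
  const hash = createHash("md5").update(filePath).digest("hex").slice(0, 6);
  const mapping: Record<string, string> = {};
  // Rewrite every `.className` selector to `.className__<hash>`.
  const scoped = css.replace(/\.([a-zA-Z_][\w-]*)/g, (_m, name: string) => {
    mapping[name] = `${name}__${hash}`;
    return `.${mapping[name]}`;
  });
  return { css: scoped, mapping };
}

const { css, mapping } = scopeClassNames(
  "Button.module.css",
  ".button { color: red; } .button:hover { color: blue; }"
);
// A component would then use `mapping.button` as its className,
// so no two files' `.button` rules ever conflict.
```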
In the end, I generally prefer tailwind + emotion. My goal is usually to save time and make the system more consistent, rather than performance gains, which aren't significant on new systems.
Looks like a well designed library (and it is only 40 lines long - but then the original isn't much bigger). But it's hard for me to imagine code where this is a bottleneck.
It depends a lot on the context, and here that context is React. In retrospect, CSS-in-JS looks like a shim for React developers who couldn't rewrite their code into single-file components like in Vue and later Svelte. You don't have to be a genius to know how to name and structure DOM elements, though being good at it lets you reuse CSS and reduce overall app size better than any bundler ever will; but it's just easier to install Tailwind.
Whenever you talk to somebody about how ridiculous it is to need a preprocessor to do basic, sane things in CSS, you hear that "CSS isn't designed for that, it's better this way. Use a preprocessor!" Preprocessors are a workaround, not "the way it should be." There's no good reason a @media query can't take a variable for a min-width value; the restriction just makes things error-prone. Yet the CSS spec is so proud of itself that it has declared there will never be a CSS 4.
Not just that: you can correlate this data with activity on various social networks like Reddit, 4chan, or even Hacker News over time, and slowly narrow down the list of possible anonymous usernames someone may have, simply by removing all users who were active when the individual is known to be sleeping.
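The elimination step described above can be sketched as a simple filter: given activity timestamps per username and the hours the target is known to be asleep, discard any username that posted during those hours. All names and timestamps below are made up for illustration.

```typescript
// Hedged sketch of the timing-correlation idea: eliminate candidate
// usernames that were active during the target's known sleep hours.
type ActivityLog = Record<string, Date[]>; // username -> post timestamps

function narrowCandidates(
  logs: ActivityLog,
  sleepHoursUtc: Set<number> // e.g. 02:00-09:00 UTC
): string[] {
  return Object.entries(logs)
    .filter(([, posts]) =>
      // Keep a candidate only if NONE of their posts fall in sleep hours.
      posts.every((t) => !sleepHoursUtc.has(t.getUTCHours()))
    )
    .map(([user]) => user);
}

const logs: ActivityLog = {
  alice: [new Date("2021-01-01T03:00:00Z")], // active while target sleeps
  bob: [new Date("2021-01-01T14:00:00Z"), new Date("2021-01-01T18:30:00Z")],
};
const candidates = narrowCandidates(logs, new Set([2, 3, 4, 5, 6, 7, 8, 9]));
// "alice" is ruled out; only "bob" remains a plausible match.
```

Repeating this across many observation windows shrinks the candidate set quickly, which is why this class of correlation attack is effective.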
Thank you. I'm actively trying to improve latency. To answer your questions:
- These are WebP images (where available, JPEG otherwise), demand-streamed to the client over a binary WebSocket channel. The WebP frames are encoded server-side using cwebp from the original JPEGs.
- The servers in this demo are hosted in GCP us-west3 (Salt Lake City, Utah)
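To make the transport concrete, here is one way such a binary channel could be framed: a single header byte marking the image format, followed by the raw image bytes. The wire format below is an assumption for illustration; the actual protocol may differ.

```typescript
// Illustrative binary frame layout for a WebSocket image stream:
// [1 byte: format] [N bytes: encoded image].
const FORMAT_WEBP = 0;
const FORMAT_JPEG = 1;

function encodeFrame(format: number, image: Uint8Array): Uint8Array {
  const frame = new Uint8Array(1 + image.length);
  frame[0] = format; // header byte
  frame.set(image, 1); // payload
  return frame;
}

function decodeFrame(frame: Uint8Array): { mime: string; image: Uint8Array } {
  const mime = frame[0] === FORMAT_WEBP ? "image/webp" : "image/jpeg";
  return { mime, image: frame.subarray(1) };
}

// In the browser, the decoded bytes can be displayed directly:
//   ws.binaryType = "arraybuffer";
//   ws.onmessage = (e) => {
//     const { mime, image } = decodeFrame(new Uint8Array(e.data));
//     img.src = URL.createObjectURL(new Blob([image], { type: mime }));
//   };
```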
Also, some possible improvements to streaming I intend to look at are:
- stream frames as they become available, rather than when the client requests them (which it does at a short regular interval, or whenever the client performs some action)
- encode the raw PNG frames to h264 with ffmpeg
- use Chrome Extension desktopCapture API (similar to navigator.mediaDevices.getDisplayMedia) with xvfb (no headless) and send the resulting stream either through the server, or p2p using WebRTC
I initially didn't develop it to handle high framerates or high quality (it's more of a debug tool, and delivery system for a web scraping app), but people are requesting this.
MightApp has been in beta for a long time. Handling massive streaming and combining it with interactivity, and making the economics work, is not trivial.
Despite people's requests for better video quality, and my willingness to respond to them, I still have my heart firmly set on providing the best experience with the minimum amount of bandwidth and the lowest framerate and quality (as in, resolution) possible. I just think this is more efficient and will end up being more scalable, and it fits my initial web-scraping use case well.

My biggest fear about this feature is lag to the point of unusability, which happens whenever the required bandwidth exceeds the available capacity. You get a backlog of in-flight frames, the usability goes to shit because everything takes x seconds to happen, and then you're behind anyway. I'm reminded of that nightmarish video of people inside an Oculus delay chamber trying to pick up a feather from the floor. Glitch in the matrix.
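One common way to keep a backlog like that from forming is to never queue more than the newest frame: stale frames are dropped instead of rendered late. A minimal sketch of that idea (class and method names here are hypothetical, not from the actual codebase):

```typescript
// Keep only the most recent frame; superseded frames are counted and dropped
// rather than queued, so latency stays bounded at roughly one frame.
class LatestFrameBuffer<T> {
  private latest: T | null = null;
  public dropped = 0;

  push(frame: T): void {
    if (this.latest !== null) this.dropped++; // old frame superseded
    this.latest = frame;
  }

  // Hand the newest frame to the renderer and clear the slot.
  take(): T | null {
    const f = this.latest;
    this.latest = null;
    return f;
  }
}

const buf = new LatestFrameBuffer<string>();
buf.push("frame-1");
buf.push("frame-2");
buf.push("frame-3");
// take() now yields only "frame-3"; the two stale frames were dropped.
```

The trade-off is a lower effective framerate under congestion, but interactions stay responsive because the client never renders a frame that is seconds old.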
Love that you're using WA and that it's Apache 2.0. I get the general idea, but it's not clear who it's targeted at: businesses or consumers?
If it were for consumers and integrated with existing tools, I would love to try it. I have a pain point where even a big zip file takes quite a long time to generate. Not sure if FB or Google does that to introduce friction into the process.
On the dev front, why would companies like Google, Uber, or LinkedIn be willing to adopt this standard?
Today we work with engineering teams who have direct data interoperability and verifiability requirements, such as giving their users the ability to transfer their status on one platform to a partner's platform. Airlines and credit card companies already work together in this way, but these partnerships are currently expensive to set up and coordinate.
We found that the non-JavaScript tooling in the ecosystem was still nascent and wanted to do something about it that is friendlier to enterprise environments and security teams. We are using DIDKit as a base for adding this functionality to consumer-facing products, and hope others will find it convenient as well, so I look forward to sharing updates on direct consumer use cases soon.
As for adoption by large tech companies and enterprises, we believe that as companies consolidate their data into warehouses, they will want (or need) to start sharing it with partners, governments, and users in an auditable way. Some have compared Verifiable Credentials to the shipping container for verified data, and I don't think that's too far off the mark.
We think these standards could also prove to be very straightforward ways to comply with data interoperability requirements imposed by laws like GDPR and CCPA. There will probably be more requirements in this direction if the US and EU decide to further regulate large tech companies.
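For readers unfamiliar with the standard being discussed, a Verifiable Credential is essentially a signed JSON document following the W3C VC Data Model. Below is a rough, unsigned example of the "transfer status to a partner" scenario above; the issuer DID, subject DID, and claim values are all made up, and a real credential would carry a `proof` produced by a tool like DIDKit.

```typescript
// Rough shape of a W3C Verifiable Credential (unsigned; illustrative only).
interface VerifiableCredential {
  "@context": string[];
  type: string[];
  issuer: string; // DID of the issuing platform
  issuanceDate: string;
  credentialSubject: Record<string, unknown>;
}

const statusCredential: VerifiableCredential = {
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  type: ["VerifiableCredential"],
  issuer: "did:example:airline123", // hypothetical issuer DID
  issuanceDate: new Date().toISOString(),
  credentialSubject: {
    id: "did:example:traveler456", // hypothetical subject DID
    loyaltyTier: "gold", // the claim being shared with a partner
  },
};
// A partner platform verifies the issuer's signature (the `proof` field,
// omitted here) instead of integrating with the issuer's API directly,
// which is where the cheap-interoperability argument comes from.
```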
It teaches coding to kids as young as 6, charging up to $1,500 for 100 classes.
To sum up: first, this is a very high amount by any standard, and parents take out loans to fund it. Second, they use black-hat marketing tactics, claiming a 10-year-old student now earns millions of dollars.
I'm not sure their tactics are even legal. Platforms like FB and Google should fact-check their ads.
Seriously, how can you believe a kid is earning $10M a year and has no mention anywhere on the web?
The original post reads "he didn't study at WhiteHatJr". I think it was satire; I found it a pretty funny remark. I believe you are pointing your finger at the wrong person.
We're also building a SaaS product which is very similar [1].
Things we are supporting:
- No-code test creation. Both playwright-cli and QA Wolf support it.
- Much more control over elements. We're using a native Chrome extension to achieve this, with ChaiJS integrated on top.
- Automated screenshot capture, video, and all debug info (console, network, DOM) when a test fails.
We're looking for early beta users. If you're interested in trying it out (+ pizza sent your way), please fill out this form: https://bit.ly/2FU2Vc4.
P.S. We're planning to start beta testing in a couple of days.