Because the most obvious thing for OpenAI to do would be training the voice on Scarlett Johansson's available body of work: movies, audiobooks, podcasts, etc. Why bother hiring a voice actor?
EasyOCR looks like it's more focused on the mobile use case of extracting text from photos. That's a bit different from extracting text from scanned documents, where document structure is an important aspect, and Tesseract is the devil we know. In the commercial space ABBYY FineReader still dominates - https://en.wikipedia.org/wiki/ABBYY_FineReader
Despite the proliferation of cloud services, most large enterprises DO NOT want their sensitive documents entering the cloud. And in some cases, e.g. patient medical records, there are strict regulations about how those documents can be stored, which means on-premise is a requirement.
Good news for us, as that's what we specialise in, but also perplexing how trends in the software industry can completely ignore what customers actually want.
However, the pricing page with no actual numbers and the ambiguous ‘Contact Us’ is a huge turn-off.
I cannot stand the dance with business people who want to have a bunch of calls and meetings to know how big a company they’re dealing with is before they decide on a good rate to gouge them.
Pricing pages should be straightforward. Have tiers if you want to cover your rear, but only offer the ‘Contact Us’ option at the upper limit of usage.
I’m shopping around for a PDF solution and would’ve recommended this to my manager but I’m not willing to do more meetings to get quotes.
Same. About three years ago we introduced a company wide policy to not buy anything where the price is not known. So, so much time (money) being wasted on figuring out the actual costs, the offering would have to be really inexpensive to make up for this. And if that were the case, the price would be right there.
They usually do high usage volume pricing at high rates that are proportional to the size of the company and make you sign a yearly agreement so they can get a huge payment upfront.
How about building some trust? What if the service sucks? It will be hard to get your money back and you paid a year in advance.
They make you work to get a quote and the quote usually doesn’t work for your needs.
I too will not look at services with this pricing structure anymore unless word of mouth is favorable.
Large enterprises can afford to take things in-house and might even save money that way, not to mention the security gains. Medical offices have no choice. However, small companies often don't have anyone in IT (other than the CEO, who does everything and only rarely knows what he is doing outside the niche the company is in). These should be the prime market for tools like this - just pay us a little bit and we will worry about the details for you, and everything is backed up. However, one enterprise account is a lot more money than thousands of little accounts, so everyone focuses on enterprises anyway.
> Good news for us, as that's what we specialise in, but also perplexing how trends in the software industry can completely ignore what customers actually want.
I initially read this backwards and thought you were lamenting that people insist on on-prem stuff when cloud is clearly The Right Thing.
I certainly don't think the entire software industry is ignoring what customers actually want. Case in point, you. But also lots of other developers who thrive in covering the myriad use cases the myopic behemoths can't see. They just have very loud PR and marketing and pretend those cases don't exist, so you hear about them a lot.
You seem to think that users want everything in the cloud and that’s what’s causing the proliferation of cloud services. You are wrong. Users want _convenience_. They couldn’t care less about the cloud or technical details. If your website can do what they want without uploading their documents to your server, and it’s faster and cheaper, then that’s what they’ll prefer.
It's a fair point. Most of our customers work with C++, C# and Java in enterprise / back office contexts, which is why there's no PHP or Javascript right now - we've been tied up with other priorities. That said, we just added Python to our main SDK and PHP is coming.
I would think that JS/TS support would be relatively high up... my own bias speaking, but a lot of the development effort around cloud apps is JS/TS-centric.
I work in a FAANG on stuff that is definitely "enterprise software", a major part of what we develop is written in TypeScript.
I admit PHP will not be as good a candidate, but for smaller companies it is still extremely attractive, and it's probably easier to support since you can write a PHP extension in C.
For me it was my mum turning on the vacuum cleaner - it would cause a spike that could reboot the Spectrum 48K. The 128 model had a better power supply, as I remember. Also the ZX81 I had before that (I think I was 8 at the time) was extremely sensitive to the state of the current in the house...
To me Redis has always seemed like a Trojan Horse for developers. The first impression is that it's this simple key-value database, so easy to use. Oh wait... it's also a cache, nice! Let's cache all the things too! And look... all the cool kids are using it too, so it must be cool. Meanwhile the old Unix mantra is forgotten: make each program do one thing well; to do a new job, build afresh rather than complicate old programs by adding new features ( http://www.catb.org/~esr/writings/taoup/html/ch01s06.html ). Fast forward 10 years and you need to download its Enterprise Whitepaper ( https://redis.com/solutions/use-cases/caching/ ) to make the right caching decisions.
Where this is coming from is having worked on a project where Redis was being used as a database and a cache, on different ports. And of course most of the dev team hadn't read the manual because Redis "is simple and just works". And of course someone forgot to actually configure the Redis instance that was supposed to be a cache to actually _be_ a cache. And someone else thought the instance that was supposed to be a cache but wasn't was actually a database. And yet another had used TTL caching to solve all their performance issues. And pretty soon mystery bugs start showing up, but sadly no one can actually REASON about what the whole thing is doing any more, and there's no time to actually clean up the mess because it's a startup struggling to stay afloat.
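For what it's worth, the "actually _be_ a cache" part comes down to a couple of directives in redis.conf: without a memory cap and an eviction policy, a Redis instance is effectively a database that grows until it runs out of memory. A minimal sketch (the 2gb limit is just an illustrative value):

```conf
# Cap memory use and evict least-recently-used keys when full,
# i.e. behave like a cache rather than a persistent store.
maxmemory 2gb
maxmemory-policy allkeys-lru
```

The default policy (`noeviction`) instead makes writes fail once the limit is hit, which is the behavior you'd want for the database instance but not the cache.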
And I remember asking "why didn't you use memcached for caching?" and the response was "Dude! No one is using memcached any more". So the technical decision for Redis was based on "what's cool right now".
Anyway... I feel a bigger rant brewing so I'll stop here.
I think it's rather that features were added to Redis out of experience and craft, not just to "lure future users into a pit" - I doubt antirez would have had that in mind.
But I think you rightly described the social behaviors of certain common types of users.
Nothing wrong with Memcached, but at high loads weird issues will crop up with it too, and if you don't understand how slabs work in Memcached (I doubt your average dev does) you are going to have a hard time reasoning about it as well. Eventually someone will say "why didn't you just use Redis for caching?".
Character-level operations are difficult for LLMs. Because of tokenization they don't really "perceive" strings as a list of characters. There are LLMs that ingest bytes, but they are intended to process binary data.
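A toy illustration of the point above (this is not any real LLM's tokenizer - the vocabulary and greedy longest-match scheme are made up for demonstration): the model receives opaque token IDs, not characters, so a question like "how many r's are in strawberry?" asks about structure it never directly sees.

```python
# Hypothetical mini-vocabulary; real tokenizers have ~50k-200k entries.
VOCAB = {"straw": 0, "berry": 1, "str": 2, "aw": 3, "b": 4, "e": 5, "r": 6, "y": 7}

def tokenize(text, vocab):
    """Greedy longest-match tokenization over the toy vocabulary."""
    tokens = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest piece first
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            raise ValueError(f"no token covers {text[i]!r}")
    return tokens

pieces = tokenize("strawberry", VOCAB)
print(pieces)                      # ['straw', 'berry']
print([VOCAB[p] for p in pieces])  # [0, 1] -- all the model actually sees
print("strawberry".count("r"))     # 3 -- requires character access the model lacks
```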
Wow. This actually "just worked" for me as in followed the instructions and got a result. Meanwhile the words "jupyter notebook" I've come to associate with python dependency hell.
To be fair I work as a PM and I rarely get more than about 60 minutes to play around with anything involving code, which has blocked me on getting hands dirty with anything AI related.
as someone who just went through this, the process to getting mixtral running in python did "just work" (pip install the interface, download the model, run the sample)
The process to get it running on the gpu wasn't there yet.
One issue is that the way taxes work in Germany disincentivizes entering the middle rungs of a career ladder. Like if you’re a software engineer and want to become an engineering manager, you might end up earning less after tax as a manager than you did as an engineer, as your salary increase puts you in a new tax bracket.
Otherwise Germany’s economy is optimized to support medium to large German companies with export businesses. Little attention has been given to supporting new business ventures, so for young people becoming an entrepreneur makes little sense when you can earn more and have less stress with a normal job.
Full disclosure: this is based on personal experience and observation as a Brit who worked in Germany for a while and ended up in Switzerland.
> you might end up earning less after tax as a manager than you did as an engineer as your salary increase puts you in a new tax bracket
That's not how progressive taxation and tax brackets work. In reality you pay the higher tax rate only on the amount above the threshold. So if, say, the tax rate is 20% on income up to $10000 and 30% above it, then if you earn $30000 you pay 20% on the first $10000 and 30% only on the remaining $20000. You don't pay 30% on the whole $30000. Why is this misconception so common?
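The arithmetic is easy to spell out (the bracket thresholds and rates are the made-up ones from the example above, not Germany's actual schedule):

```python
def tax_owed(income, brackets):
    """Progressive tax: each rate applies only to income inside its bracket.
    brackets is a list of (upper_bound, rate) pairs in ascending order."""
    owed = 0.0
    lower = 0.0
    for upper, rate in brackets:
        if income <= lower:
            break
        owed += (min(income, upper) - lower) * rate
        lower = upper
    return owed

# 20% up to $10,000, 30% on everything above it.
brackets = [(10_000, 0.20), (float("inf"), 0.30)]
print(tax_owed(30_000, brackets))  # 2000 + 6000 = 8000.0, not 0.3 * 30000 = 9000
# A raise can never reduce take-home pay under this scheme:
assert 30_000 - tax_owed(30_000, brackets) > 29_000 - tax_owed(29_000, brackets)
```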
> you might end up earning less after tax as a manager than you did as an engineer as your salary increase puts you in a new tax bracket.
This is impossible, even with the very high German taxes :-)
You will keep less of each (EDIT: additional) Euro you earn, but the tax progression is not that broken that getting paid more results in less money for you. There are diminishing returns on hours worked, if that is what you meant, but otherwise it's untrue.
The situation with trains (and data) in Switzerland is complicated, as each Kanton has its own rail network. In 2016 the SBB _finally_ started making its train timetable officially available (some info on that here - https://www.itmagazine.ch/artikel/64746/Open-Data-Plattform_... ), which is I believe what this map uses, after shutting down people scraping the data.
That said, what has always irked me is that they gave the data to Google as far back as 2007, while refusing to make anything available for sites like local.ch and map.search.ch (whose map was partly the basis for the original Google Maps). "Refusing" may be a leading term, but certainly no help was given to local Swiss companies, while Google already had the train times in Maps.
This doesn’t match my experience. Google requires feeds to be in a specific format [1], but the feed is pushed from the agency. When I was intimate with the details, agencies were strongly encouraged to publish these feeds at a public URL. There were all kinds of things about the format itself that might not have worked for the good folks at local.ch, but access to the data is not likely to have been a problem.
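For anyone unfamiliar with the format being referenced (GTFS), it's just zipped plain-CSV files, so any consumer - Google or a local Swiss site - can parse a published feed with stock tooling. A minimal sketch of a `stop_times.txt` fragment (the trip and stop IDs below are invented for illustration, not real SBB data):

```python
import csv
import io

# Invented example rows in the stop_times.txt layout.
stop_times_txt = """trip_id,arrival_time,departure_time,stop_id,stop_sequence
IC1-001,08:00:00,08:02:00,STOP_A,1
IC1-001,09:56:00,09:58:00,STOP_B,2
"""

rows = list(csv.DictReader(io.StringIO(stop_times_txt)))
for row in rows:
    print(row["trip_id"], row["stop_id"], row["departure_time"])
```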