
There is an open source AV1 encoder implementation available: https://github.com/AOMediaCodec/SVT-AV1


We are referring to the Microsoft Azure Kinect DK depth sensor. https://azure.microsoft.com/en-us/services/kinect-dk/

No, our system doesn't measure object dimensions. It provides the ability to capture a real-world scene and stream it for remote playback in which the viewer has the ability to control navigation around the scene.


+1

We encourage you to shoot some content with a Kinect DK (or iPhone) and upload it to our platform to test it out yourself.


We went with HLS instead of DASH because of easier iPhone integration, but we can certainly offer DASH support if there is sufficient customer demand. Our technical approach works just as well for DASH as for HLS.

Because our approach is built on existing video streaming protocols/servers and video codecs, we think it is a straightforward step to add 3D video streaming to existing 2D video streaming services. As you say, 2D video is everywhere now. We envision a future where 3D video also becomes ubiquitous.

With our system, 180 or 360 cameras can be used. It is up to the creator to decide what range of volume to capture, what type of cameras, how many cameras, etc., which determines the range of motion supported for the viewer.

It is on our roadmap to allow augmentation of real-world 3D video with objects/meshes like in AR, except instead of augmenting your current local scene, you can augment a remote scene (or remote in time scene).

Spatial audio would also be a very useful feature. We are video experts, not audio experts, so would plan to work with a partner to offer support for spatial audio in the future.

Thanks for your comments. It's great to hear what features people are most interested in.


Perhaps a new feature to add to our roadmap? :-)


I agree that we need better demo content to illustrate the potential of 3D immersive video. We are working on that right now. I encourage anyone who has some interesting content, or can make some, to contact me.


All of my granted patents are owned by my previous employers, who also paid all expenses involved in submitting the patent applications, and collect any royalties. Some of my patents are essential for video codec standards, including H.264/AVC and HEVC.

It was a change of mindset to apply for my first patent for Vimmerse.

If you are in the US, the USPTO has a good overview here: https://www.uspto.gov/patents/basics/patent-process-overview

Application fees are lower for small entities or micro entities. https://www.uspto.gov/learning-and-resources/fees-and-paymen...

But the biggest cost is paying for a patent attorney to prepare the application. If you want to try to do it on your own, I found this book very helpful: https://www.amazon.com/Invention-Analysis-Claiming-Patent-La...

Good luck!


Thanks for the info about the open sourced UnityJS. I'll take a look.

We hadn't thought about making the 3D video player scriptable and extensible at runtime, and will give it some thought.

Being able to overlay 3D graphics (including titles) onto the 3D video is on our roadmap. Glad to hear confirmation that it will be a useful feature to add.


You're welcome, and I'm happy to discuss it further, and point you to the newer code that you can use or pick over for tips and tricks as you desire.

Once you can overlay 3D graphics on 3D video, you'll definitely want runtime scriptability!

Because of its name "UnityJS", people sometimes misinterpret it as something like Unity's old and now thankfully deprecated "UnityScript": a compiled (not runtime interpreted or JITted) ersatz JavaScript-ish language that was really just a thin wrapper around the CLR C# APIs, without most of the standard JavaScript APIs, plus its own weird syntax and object system to make it even more uncanny.

But UnityJS is a totally different approach for a much different purpose, and I wish I could think of a better, less confusing and less loaded name for it.

Each platform has its own APIs and quirks for efficiently integrating and exchanging JSON messages with its own browser and Unity, in Objective C for iOS, Java for Android, and JavaScript for WebGL.

UnityJS abstracts those platform differences like (and by using) a web browser, so you can write cross-platform JavaScript code and communicate with Unity via JSON messages; it uses JSON.net to convert back and forth between JSON and C#/Unity objects.
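To make the idea concrete, here is a minimal sketch of that kind of JSON messaging bridge. The function names and message shape are illustrative assumptions, not UnityJS's actual wire format; the point is that the JavaScript side only ever builds a plain object and serializes it, and the platform-specific transport (Objective C, Java, or WebGL glue) is hidden behind a single function:

```javascript
// Build a JSON message for the native side to deserialize (with
// JSON.net on the Unity side) and dispatch. Shape is hypothetical.
function makeMessage(id, method, params) {
  return JSON.stringify({ id: id, method: method, params: params });
}

// The transport is whatever the platform provides: a WkWebView
// message handler on iOS, a Java bridge on Android, a direct
// function call on WebGL. Here it's just injected as a callback.
function sendToUnity(transport, message) {
  transport(message);
}

// Example: ask Unity to move an object, capturing the message
// in an array instead of a real platform bridge.
const sent = [];
sendToUnity(m => sent.push(m),
            makeMessage("cube1", "SetPosition", { x: 1, y: 2, z: 3 }));
```

Because everything crossing the bridge is a JSON string, the same application code runs unchanged on every platform; only the one-line transport differs.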

It's better to rely on the built-in JavaScript engine in each platform's standard web browser than to try to roll your own scripting language from scratch, bundle your own copy of Chrome, or use a less ubiquitous language than JavaScript (as much as I love Lua and Python).

What's great about that approach is that it lets you use standard development tools: you can live code and debug WkWebView based iOS apps with the desktop Safari developer tools, and Android Chrome component based apps with the desktop Chrome developer tools.

And it works seamlessly with the entire ecosystem of browser based JavaScript libraries. (Which is a relief if you've ever tried to get a Windows C# library to work with Unity's version of C#, let alone trying to port any JavaScript library to UnityScript, which was practically futile).

On iOS, using the standard WkWebView browser as a scripting engine also avoids the infuriating non-technical Dumb Apple App Store Rules Problem: Apple prohibits apps that dynamically download, update, and interpret any kind of dynamic code, UNLESS you use their browser and JavaScript engine.

Consequently, WkWebView's JavaScript is the only runtime scripting language you're allowed to use on iOS (it's much better than the old UIWebView because it has a great JIT compiler). Fortunately it works quite well! So be it.


Let's follow up offline.


Why not use javascriptcore?


Interesting question! That's what I was using originally via UIWebView, before I migrated from UIWebView to WkWebView a while ago.

JavaScriptCore runs in-process, and consequently the JIT is disabled, because Apple doesn't allow non-Apple processes to write to executable code memory.

UIWebView's JavaScript engine is JavaScriptCore, so it runs in-process (in YOUR process, not Apple's), but it's deprecated in favor of WkWebView, which runs in a separate sandboxed process (Safari's, blessed by Apple) and communicates with your app by sending fewer JSON messages instead of making many direct two-way procedure calls.

Since WkWebView runs in a separate process fully controlled and trusted by Apple, Apple marks it as being allowed to write to instruction memory, which is necessary in order for the JIT to work.

There is one advantage UIWebView/JavaScriptCore has that WkWebView doesn't: you can use Apple's lovely Objective C / JavaScript bridge to directly and efficiently extend the JavaScript interpreter with your own code, and call back and forth directly between JavaScript and your own code without performing remote procedure calls by sending messages.

That is what NativeScript does, so you can call native iOS APIs directly from JavaScript. (Or at least it did last I checked; it's been years since I looked closely at how it works, but that's pretty fundamental to its design.) But that means NativeScript's JavaScript code is never going to run as blazingly fast as it would if the JIT were enabled.

https://nativescript.org/

While NativeScript is designed to expose and call native platform APIs like Cocoa directly from JavaScript, UnityJS is designed around asynchronous JSON messaging, with techniques to reduce the number and size of messages, using the same interface across all platforms instead of exposing native APIs.

UnityJS is based on sending fewer, larger JSON messages instead of lots of small procedure calls. For example, with NativeScript you would make a whole bunch of fine-grained calls between JavaScript and native code just to make a window, configure it, put some buttons in it, and set each of their colors, positions, constraints, and callbacks. UnityJS would instead send one big blob of JSON with multiple nested higher-level messages to create and configure many objects at once (Unity prefabs and C# objects instead of Cocoa objects), describing the whole tree of objects you want to create and their callbacks, amortizing the cost of crossing the JavaScript/native barrier.
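A toy sketch of the batching idea (the command names and blob structure here are made up for illustration, not UnityJS's actual schema): one nested JSON blob describes a whole tree of objects, and a single bridge crossing replaces what would otherwise be one crossing per object and per property:

```javascript
// One JSON blob describing a panel with two buttons: hypothetical
// shape, but representative of "describe the whole tree at once".
const batch = {
  messages: [
    { create: "Panel", id: "menu", position: { x: 0, y: 1, z: 2 },
      children: [
        { create: "Button", id: "play", color: "green" },
        { create: "Button", id: "quit", color: "red" }
      ] }
  ]
};

// Count the objects in the tree: with fine-grained calls, each
// object (plus each property set) would be a separate crossing.
function countObjects(spec) {
  return spec.reduce(
    (n, obj) => n + 1 + countObjects(obj.children || []), 0);
}

const crossings = countObjects(batch.messages); // objects described
const payloadJson = JSON.stringify(batch);      // ...in one message
```

The serialization cost per byte is roughly constant, but the fixed per-message cost of crossing the JavaScript/native boundary is paid once for the whole tree instead of once per call.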

UnityJS also has some clever techniques for reducing the number and size of callback messages: when JavaScript expresses interest in an event or callback from Unity, you can declare (with a JSON query template with accessor path traversal expressions, kind of like XPath for JSON+C#+Unity objects) exactly which parameters you want sent back with the event when it happens, using "path expressions" that can drill down and pick out just the parameters you need, which get automatically converted to JSON and sent back in the message. So the callback messages you receive contain exactly the parameters you're interested in, and no more. For example, why send the screen coordinates and timestamp and other random properties of a touch event, when you're only interested in the 3D coordinates of a ray projected into the scene and the object id of whatever it hits? Then you don't have to ping-pong back and forth to derive the world coordinates and target object id from the screen coordinates, which would be painfully slow!
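Here's a tiny toy version of that path-expression idea. The "a/b/c" syntax and the function names are invented for illustration (UnityJS's real query templates are richer), but the mechanism is the same: the subscriber declares which fields it wants, and the callback payload carries exactly those and nothing else:

```javascript
// Drill into a nested object along a slash-separated path,
// returning undefined if any intermediate step is missing.
function pluck(obj, path) {
  return path.split("/").reduce(
    (o, key) => (o == null ? undefined : o[key]), obj);
}

// Given a full event object and a query template mapping output
// names to paths, build the minimal callback payload.
function buildCallbackPayload(event, query) {
  const result = {};
  for (const [name, path] of Object.entries(query)) {
    result[name] = pluck(event, path);
  }
  return result;
}

// A touch event as the engine might see it, with many properties...
const touchEvent = {
  screen: { x: 120, y: 340 },
  timestamp: 1234567,
  ray: { point: { x: 0.5, y: 1.0, z: -2.0 }, targetId: "cube1" }
};

// ...but the subscriber only asked for the 3D hit point and target id.
const callbackPayload = buildCallbackPayload(touchEvent, {
  hit: "ray/point",
  target: "ray/targetId"
});
```

The screen coordinates and timestamp never cross the bridge, and no follow-up round trip is needed to turn screen coordinates into a world hit point.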

These documents describe the issues and APIs and alternatives in more detail:

https://github.com/SimHacker/UnityJS/blob/master/doc/Archite...

https://github.com/SimHacker/UnityJS/blob/master/notes/talk....

https://github.com/SimHacker/UnityJS/blob/master/notes/unity...

Even when it was using JavaScriptCore, UnityJS was just sending JSON messages back and forth anyway, not making lots of individual calls to native iOS APIs, so it wasn't benefiting from being able to call lots of little different native functions directly, the way NativeScript is designed to be used.

While I do like how NativeScript enables you to extend JavaScriptCore with your own native code and call back and forth between JavaScript and your own code quickly (although there is still a cost to the Objective C / JavaScript bridging, which is a complex piece of plumbing), that kind of bridging isn't well supported on other platforms, and UnityJS aims to be cross-platform and based on JSON messaging instead of calling native APIs. NativeScript is all about calling native APIs, and has its own completely different bridge to integrate the Android JavaScript interpreter with native code.

I just took another look at NativeScript now, to see what they're doing these days, and it looks like they've now got a V8 based JavaScript engine for iOS, but I don't know how they're getting around the "no writable instruction memory" limitation of iOS. Anybody have any insights into that? I didn't think you could ship an iOS app with its own interpreter.

But I don't think shipping a copy of V8 on top of the Unity runtime is a good approach for UnityJS, and it's a huge amount of development work to implement and maintain, so I'd much rather UnityJS just use WkWebView's out-of-process JavaScript engine via JSON messaging, which is plenty fast, and very well supported by Apple.

Oh, it looks like they're running V8 in "JIT-less mode" on iOS, which is unfortunate if you want to lean into the JavaScript interpreter and write lots of your application in JavaScript, and use off-the-shelf web browser based libraries. WkWebView's isolated JavaScript engine is very stable, well supported by Apple, and efficient, compared to building your own V8 and running it in process without the JIT enabled.

https://blog.nativescript.org/the-new-ios-runtime-powered-by...

>We chose to go with V8 for both platforms for several reasons:

>The version of JavaScriptCore we are using has significantly diverged from the main repository in order to be able to support everything that the framework needs. This makes every upgrade not only challenging as the changes are very often not trivial, but also time-consuming - it can take up to 3 person-months to complete it.

>V8 is embedding friendly and now it supports JIT-less mode which makes it possible to be used on iOS.

>This allows us to provide Bitcode support down the road, which is a blocker for other exciting features like writing Apple Watch applications in NativeScript.

https://blog.nativescript.org/new-v8-based-ios-runtime-is-no...

>The following features are not yet implemented and are not supposed to work:

>Safari Inspector is not supported

>No armv7 architecture

>Multithreading is not working as expected

Obviously they're not going to support the Safari inspector with V8, but I don't know if they plan to support the Chrome remote JavaScript debugging tools. (That would be no easy feat!)

Being able to easily and remotely debug the running mobile JavaScript engine with desktop development tools is an important advantage of using WkWebView instead of V8, and something I wouldn't want to give up for UnityJS. One of its main goals is to greatly accelerate Unity app development and debugging by making it easy to live code and debug interactively with standard, well supported tools (i.e. not just the terrible Mono debugger), without having to recompile the Unity app all the time (which can be glacially slow for complex Unity apps).


No-JIT only applies to script loaded from outside the app/bundle, though. And as you said yourself, you're only sending stuff back and forth; are you actually loading (remote) script at runtime (where you won't get JIT, but can still execute, like V8 on iOS now)?

Javascriptcore-on-device is debuggable too!

I personally use native JavaScriptCore on iOS & Mac (which lets me get my engine+app down to under 10 MB), and JavaScriptCore (the open source one) for Linux (and at one point Windows, but I use Chakra there). At one point I used V8 on Windows & Mac, but it's such a PITA to build into the form (DLL/static) I want, I just had to give up in favour of native runtimes.

There are some annoyances with the differences between Apple JSCore & open source JSCore, but all fixable! (not sure it took me 3 months :)

https://github.com/NewChromantics/PopEngine

Hacker News isn't great for discussion, but if you ever want to chat more, just find my username on Twitter/Instagram/etc. (I don't get to talk about this low level JS stuff much; I think the number of people using it outside Chromium is under 2 figures :)


Oh, right, hello Graham! Now I recognize your name, which sounded familiar. When I was originally researching these topics, I found your great stuff and took lots of notes!

https://github.com/SimHacker/UnityJS/blob/master/notes/Oculu...

https://github.com/SimHacker/UnityJS/blob/master/notes/PInvo...

https://github.com/SimHacker/UnityJS/blob/master/notes/Skybo...

https://github.com/SimHacker/UnityJS/blob/master/notes/Unity...

https://github.com/SimHacker/UnityJS/blob/master/notes/Surfa...

https://github.com/SimHacker/UnityJS/blob/master/notes/ZeroC...

Now I recall that we even had an email discussion about how to render an Android WebView into a Unity texture and efficiently get GPU textures into Unity, relating to your UnityAndroidVideoTexture and popmovie plugin, back in December 2016 -- thanks for the great advice, and for open sourcing and sharing that code! That was some very tricky stuff to have to figure out on my own.

And of course every platform has its own weird tricky plumbing like that (which changes over time). So UnityJS is intended to abstract that away from the casual programmer, who only needs to deal with good old JavaScript, without mixing in C++, Java, C, and Objective C.

Another great source for learning the best way of doing tricky low level platform dependent plumbing like that is the Chrome and Firefox browser source code. It's much better than reading through old out-of-date blog postings that don't even apply any more. Those huge projects have extremely talented people from the respective companies working directly on them, who deeply know (or even implemented) all the latest poorly documented or undocumented APIs.

Reading that code is like drinking from a firehose, and it can be overwhelming finding the actual four lines of magic code that does the tricky stuff, but you know it has to be in there somewhere, so don't give up!

A good way of figuring out where to start, and finding a pointer into the right part of the monolithic source code, is to look for related bugs, issues, and PRs that discuss and point into the code, and to search the bug database for the names of important and unique API functions and symbols.

For example, here are some notes I took on Apple's "IOSurface", which is kind of like Apple's equivalent of Android's GL_TEXTURE_EXTERNAL_OES textures:

https://github.com/SimHacker/UnityJS/blob/master/notes/IOSur...


https://www.linkedin.com/in/jillboyce/

There is lots of documentation available on the public Joint Video Experts Team (a joint group between MPEG and ITU-T) website https://jvet-experts.org/ about the activities I led in 360 video.

Here is an example: https://jvet-experts.org/doc_end_user/current_document.php?i...


No relation to vim. Yes, Vimmerse = V + immerse. V is for video or volumetric or virtual...

