As someone with only 1 eye, this is quite suited to me! I am actually sad that the previous crop of "low cost, slim, not earth shattering" AR glasses all died out. A camera built into one stem and some simple text displayed for one eye would suffice for a ton of use cases.
A killer app for me would be floating name tags above people's heads, but I imagine that would require better HW than what is available here.
I'd love to see what this looks like actually installed on glasses, in different configurations.
Some simple AR use cases that I'd like to see:
* Turn by turn directions
* Incoming high priority notifications
* Weather alerts
* Front door notifications
* Upcoming meetings
* Interior building navigation
* Shopping Lists (hard to do the interactions on this)
The issue here is none of these are a killer feature, so I wouldn't be willing to pay more than a small premium to have them added on top of what I normally pay for frames, but since I already am paying $300-$500 for frames (before insurance), if someone offers the above for another $100 or $150, sure why not.
> Don't use the camera or microphone to cross-reference and immediately present personal information identifying anyone other than the user, including use cases such as facial recognition and voice print. Glassware that do this will not be approved at this time.
> > A killer app for me would be floating name tags above people's heads, but I imagine that would require better HW than what is available here.
Which is too bad, since I see AR as potentially massively useful for those with memory issues, dementia, or who generally need some help with things like this.
I'd have given anything for that 35 years ago. First thing on any contract job, I'd make a map of desks & names - but it always tripped me up when people slipped over to work at a different desk!
The Google Glass backlash (which may well have happened to any AR glasses) has set back practical use of AR in private life so much. It's frustrating, especially because of the splash damage it's had.
What's particularly odd to me is that many people seem to be sensitive about, specifically, cameras in glasses. I sometimes literally wear a bodycam, not subtle at all, and it's mostly ignored, while with no-camera HUD glasses people get nervous that there's a camera there.
I hope that we can eventually get away with stuff like real-time facial ID as an adaptive technology feature - though the only way that will work is if we can get it pretty much invisible. Or we finally get used to all being recorded all the time anyway and stop freaking out about it.
The difference is that if the camera is obvious, then you at least know you're being recorded. Playing a guessing game 24/7 is exhausting. Being recorded once in a while is not such a big deal; having to watch every word and every action all the time? Not nice.
This is a case where people's expectations really need to catch up. We're being recorded in so many ways in so many situations that "helping someone remember your name" is really the least of your concerns.
Unfortunately, that's also why "auto-name-tagging" is never going to happen. It doesn't help the gigantocorps who are already tracking us through wifi and IR and seismic patterns in public places. They've had data that could make the individual experience immeasurably better for nearly 15 years now, but they conveniently use "privacy concerns" to avoid having to acknowledge it in public. As long as there is a modicum of suspension of disbelief, they can continue to collect all the data they want for their own purposes and to hell with anyone who wants to use their own data for their own purposes.
Like, I could conceivably take photos of people I meet, store those photos along with notes in the iOS notes app (or even a contact app) then create a panel of those photos so that I can tap the right person and bring up those notes when I see them again. AR would simply make this easier and faster. Unless the AR camera was hidden it would not make it different from a privacy perspective.
To make things more confusing, both the iOS photos app and Google Photos automatically find faces in your photos and the other photos containing them and let you easily tag each face with the correct name. The technology to do this already exists.
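For illustration, here's a minimal sketch of that kind of local, manually tagged matching, using the open-source face_recognition Python library. The names and file paths are made up, and the point is that everything stays on the user's own machine:

```python
# Sketch of local, user-controlled face tagging along the lines described
# above, using the open-source face_recognition library. Names and paths
# are illustrative; nothing leaves the user's machine.
import face_recognition

# People the user has previously met, photographed, and tagged by hand.
known = {
    "Alice (works in accounting)": face_recognition.face_encodings(
        face_recognition.load_image_file("tags/alice.jpg"))[0],
    "Bob (met at PyCon)": face_recognition.face_encodings(
        face_recognition.load_image_file("tags/bob.jpg"))[0],
}

def identify(frame_path: str) -> list[str]:
    """Return the stored tag for each known face found in a camera frame."""
    frame = face_recognition.load_image_file(frame_path)
    labels = []
    for encoding in face_recognition.face_encodings(frame):
        matches = face_recognition.compare_faces(list(known.values()), encoding)
        labels += [name for name, hit in zip(known, matches) if hit]
    return labels

print(identify("camera_frame.jpg"))
```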
Exactly what I was thinking. Have the feature work offline, though that would require storage, I suppose. The tagging in photo apps was another aspect I thought of: it's OK to tag in one instance but not in another.
That's the thing. The privacy implications are different at scale.
A world of FOSS tools, where centralized ID databases are heavily restricted or banned, and people are using facial recognition technology on people they encounter in their lives and tag, that would be fairly harmless, perhaps even doing much more good than harm.
But it's not going to play out like that. How many of us run speech transcription software on our own machines, and how many of us use a remote service?
> Google Glass explicitly bans this type of feature:
But that's, like, 90% of what I'd want something like that for. Not for everyone in my field of view, of course, but when I'm out in the field I meet a lot of people in a short time and I can never remember everyone's names. Work shirts with embroidered names are an absolute godsend.
I'd love something like that too, but google glass wasn't designed for you or your needs. It was designed to plaster ads over everything you see while continuously collecting data on everything you point your face at. That's the problem with technology today. However useful it could be for you, it isn't really created for your benefit. Instead every new technology today is used against you, even when it does allow you to do something you want with it.
Self driving cars will track everywhere you go and when and ultimately decide where you'll be allowed to go and when, augmented reality will make it so you can't see anything without an ad playing, robots in your home will record you and catalog everything you own, the phone in your pocket already tracks you and everyone you care about, etc. It's hard to think of a single popular tech product that isn't collecting data and working for someone other than the "owner" who paid to bring it into their home.
Even this open source AR project is guilty of it. Their privacy policy says they'll collect your personal data to use for marketing. Any tool or tech that requires an account and/or requires connections to someone else's servers is likely going to be used against you, but I hope that someday we get a fully self-hosted, open source product that runs on trustworthy hardware so we can do things like keep track of people's names, record our experiences, and shape the way we see the world without putting ourselves and those around us at risk.
> This is a societal issue, more than a HW/SW issue. Google Glass explicitly bans this type of feature:
Seems like one of those things that can't be stopped. It can be slowed, as Google has done, but eventually OSS will allow for it. If it is massively outlawed then it can be reduced, but even then it'll still be on the level of hidden cameras.
> I wouldn't expect any "official" facial recognition on any of these upcoming AR devices any time soon.
I would, just not for the general public. This would be very helpful for police to immediately do a check on people around them and spot people with active warrants.
By that logic, why not just put cameras everywhere, use image recognition tech to do all the identification and enforcement, from traffic violations to active warrants, China-style?
Pretty wild for congress to raise privacy concerns, considering the ungodly amount of data they harvest every day on both their own citizens and those of other countries across a variety of mediums, agencies, and subsidiaries...
Gotta point out that explicitly banning putting names over people explicitly excludes a very useful accessibility accommodation for folks with any degree of faceblindness.
It can 100% be done in a non-sketchy way by simply limiting the names to the names you've manually entered for people you've seen and tagged before.
Folks need to stop ignoring disabled people and/or going "yeah, well, who cares about disabled people's needs".
Maybe open source will help with that. Maybe some nobody will make those features and you can sideload them, and it'll be quiet enough that no one will know for a while.
As the father of a kid with 1.5 eyes -- my daughter's left eye stopped developing at a young age, and we missed the window of opportunity to have it corrected in time, so one eye does most of the heavy lifting for her and the other can only see a blur -- your comment made me think of possible use cases of the monocle for her. Maybe notifications using colors and shapes that could be perceived by the underdeveloped eye. I believe the signals from the monocle would not interfere in any way with the good eye, so from the brain's perspective, both things could coexist independently. And, as you suggested, it could well be a "clip-on" addition to the frame itself. That would be nice.
I think these are actually kind of lame use cases. They can almost entirely be replaced with a smart watch.
Where this device has a unique advantage is overlay AR. Turn by turn is somewhat there but I think a better way to look at this is passive information. You could walk by a business and see hours on a virtual sign. You could make an app to leave virtual art you could stumble on. I'd like to see more ideas like that. Just a new way to show notifications doesn't cut it for me.
> You could walk by a business and see hours on a virtual sign.
Most stores have those posted already. :-D
> They can almost entirely be replaced with a smart watch.
Agreed, unfortunately current smart watches are tasked with doing a lot, and as a result they are not very good at blending into the background of life.
I'd also love a life logger, being able to see snapshots of my day and quickly create journal entries. An automatic record of everyone I talked to, summaries of important conversations.
Human potential could be greatly expanded with current day technology but sadly we are limiting ourselves.
Or, hear me out.. it could be _targeted advertisements._ /s
Seriously though, I'd love to have a wearable adblock. Detect anything that resembles an ad, blank it out. Obviously not while you're doing anything safety critical, but most of the time - good trade.
Fix practically anything that instructions have been made for: unscrew here, unclip that, pull this out, replace this part, then do it all backwards to put it back together. With visual overlays you're almost acting as a robot (that's good and bad).
Also, anything the AR can see you put down/drop can be remembered; anything you're looking for can be highlighted (in theory).
Yes, DIY is such a killer use case for this stuff, and what an awesome way to learn. Imagine working on your car (or anything, really) with an expert mechanic who can see exactly what you're seeing, highlight where to turn your wrench, tell you your technique is incorrect, or diagnose that weird sound you're hearing.
>Shopping Lists (hard to do the interactions on this)
With a companion app, you could scan the barcodes of items as you place them in the cart, and then remove each item from the visible list in the AR HUD. Shit, tie that into the actual store's checkout system so you don't have to use their stupid POS self-checkout lines: just bag it up and be on your way. Other than products sold by weight, this seems like something we're missing out on as a species.
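The companion-app bookkeeping that idea implies is simple. A sketch in Python; the barcodes are made up, and the HUD render call is a placeholder for whatever the glasses' real display API turns out to be:

```python
# Sketch of the companion-app side: a shopping list keyed by barcode,
# with items dropped from the HUD list as they're scanned.
shopping_list = {
    "012345678905": "Milk, 2L",
    "036000291452": "Coffee beans",
    "049000006346": "Dish soap",
}

def render_hud(items: list[str]) -> None:
    # Stand-in for the glasses' actual display API.
    print("HUD:", " / ".join(items) or "All done!")

def on_barcode_scanned(code: str) -> None:
    item = shopping_list.pop(code, None)  # remove if it was on the list
    if item:
        print(f"Got it: {item}")
    render_hud(list(shopping_list.values()))  # push remaining items to the HUD

on_barcode_scanned("036000291452")
```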
Hell, there are like two dozen fully autonomous Żabka convenience stores in Poland that do this. Unlocks the door for entry, identifies the items you pick up by CV, pays for them automatically without interaction, and unlocks the door for exit. No app or phone necessary, no cashier or staff, no checkout. You just pick the thing up and leave.
I'm blind in one eye also, and was thinking the same. But I would certainly prefer the lens to be a little smaller. For a first version, though, it seems to have some potential.
> Turn by turn directions
> Interior building navigation
Disclosure: I worked at one of the companies spending big on AR and worked on these exact use cases. Sadly, the company was not excited by these cases in the short term, so the projects were effectively cancelled. I fully expect them to be revived at some point, but not any time soon.
I wouldn't touch any kind of AR use case while driving. Seems ripe for lawsuits when you're obscuring a driver's view (especially with an eyepiece compared to putting a HUD on the windshield)
> I wouldn't touch any kind of AR use case while driving. Seems ripe for lawsuits when you're obscuring a driver's view (especially with an eyepiece compared to putting a HUD on the windshield)
Cities, walking. :)
Walking around a city pulling out my phone to check directions is actively stupid; a HUD with arrows telling me which intersection to turn down would improve safety on multiple fronts (theft and lost situational awareness).
Yes, teenagers would obviously use it responsibly and not overload the view with distractions. /s
Also, there is an ongoing trend to ban mobiles for pedestrians crossing the streets (Honolulu, followed by Jiaxing, Lithuania, Poland, ...). It would be great if AR was regulated from the start to disappear in such semi-risky situations.
You might be right that it is ripe for lawsuits, but a properly designed low distraction overlay seems much better than what most people currently do, which is alternate between looking at their phone and the road.
The easiest way for that is to have the user set that position to account for the car model, driver height, and seat position, but you're not likely to find a way to pick a good safe spot to show it without asking the user.
But if you say this is safe to use while driving, you need to handle situations where "this AR app had a bug which drew a solid color over the entire view, obstructing my view, and caused me to miss the pedestrian", while a projected HUD has a limited area it can draw on.
If you can't/won't say it's safe to use while driving, then why develop the feature for driving?
This being an eye piece which has software control over your entire field of view has potential for danger.
I've looked around their website and the links people have commented here, but I've yet to see a video or image of the device actually working. I know it's hard to demo AR devices, but a simple video of a phone camera pointing through the device to show that there is actually content displaying would add much more credibility. Especially for $350 I need to see some sort of product demonstration.
It's a 5MP camera for still photos, limited to 720p for video (although the data sheet says it can do 1080p as well). Perhaps that's bottlenecked elsewhere in the system.
10x reduction from raw to jpg is not what worries me (provided decent-ish computing power) as much as that this is a theoretical maximum under lab conditions. When has any wireless technology ever delivered?
When 802.11n's 300 Mbps was the hot thing that made LAN obsolete, you'd be happy to get 60 Mbps (which is still plenty for most purposes, don't get me wrong, but it isn't 300). Bluetooth is more resilient thanks to frequency hopping, which I wish newer WiFi generations had adopted: no more channel madness where you either have to coordinate with your 3D neighbors or, on auto, might have good and bad days; everything would just use the whole spectrum as efficiently as possible. But I still doubt you'll get a stable one fps.
The 720p claim may be specific to the video recording features, while stills capture could be at the higher resolution. Or they could just be oversampling to help reduce noise by downscaling the larger resolution to the smaller.
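If they are oversampling, the idea is straightforward: averaging neighboring sensor pixels cuts random noise by the square root of the number of pixels averaged. A quick numpy sketch with a synthetic frame (the frame size here is illustrative, not the actual sensor's):

```python
# Averaging 2x2 blocks of a noisy frame ("software binning") roughly halves
# the noise standard deviation: mean of 4 samples -> noise / sqrt(4).
import numpy as np

rng = np.random.default_rng(0)
frame = 128 + rng.normal(0, 10, size=(1944, 2592))  # synthetic noisy 5MP-ish frame

# Average each 2x2 block.
binned = frame.reshape(972, 2, 1296, 2).mean(axis=(1, 3))

print(frame.std(), binned.std())  # binned noise should be roughly half
```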
It seems to me that unlike HoloLens and such, it's not trying to be seamless. It's almost entirely obstructing one eye with a small screen. They use a translucent case so you can sorta see around the screen, but not very well. Kinda butts up against what the actual promise of AR was supposed to be.
It looks like the website does show a "through the lens" still photo as part of a chat/endorsement at the bottom left-hand side of the front page: "yeah super excited that our devkit has launched. here's me testing replay."
Does not work with Hacker's Keyboard (https://github.com/klausw/hackerskeyboard). It closes itself after a few deciseconds, whereas usually the permanent notification feature can be tapped to open and use the keyboard anywhere. Or maybe I just hadn't tried it on the new Android 11 yet and yet another of my favorite hacks broke... Now that I try it elsewhere, it seems this app got broken by Google breaking userspace again. Never mind, looks like I'll have to go and fetch a USB keyboard for this in the future :/
Is there a picture of this wearable being worn by someone, preferably with a resolution greater than 480x320 pixels at a distance where the torso is in view as well?
I'm not uninterested, but at this price point, selling a ~400 euro device that I didn't know existed until 30 seconds ago takes a bit more convincing, especially with a gif showing 1.5x the original size and calling it 8x zoom.
Seriously, the only two images I could find are 500x333 (the monocle comes in at 61x66 pixels if I'm being generous with the edges) where the person literally has to hold it up https://uploads-ssl.webflow.com/623cc6cc56889b045032bfc1/62a... and 319x306 (crops to 20x20 pixels(!)) https://uploads-ssl.webflow.com/623cc6cc56889b045032bfc1/637... where there's again a hand hovering. It looks neither stable nor comfortable to me, but I don't know; it's not like there's a video with someone casually wearing it while talking to the camera or doing something.
Open source being listed as something you can toggle off is also a bit ironic to this FOSS advocate :-) but the spirit is there and it's definitely a selling point that it says the platform is under an MIT license! Let's get that app on f-droid and I'm curious to see it actually in any user's proverbial hands!
Yep. Price tag too high for basically a gimmick. I'm not seeing how a 70 mAh battery can do meaningful activities with so many power-draining modules around. Might last 30 minutes, 60 at most, without needing to recharge again.
I'd prefer a clip-on module for existing glasses with extra battery life, and perhaps a wider vision angle, since the monocle seems to cut off 30% of the visual field.
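For what it's worth, the 30-minute guess is easy to sanity-check. The per-module current draws below are pure assumptions, not measurements:

```python
# Back-of-envelope runtime for the 70 mAh cell under assumed (made-up)
# current draws for the always-on parts. Real draws will differ.
capacity_mah = 70
draw_ma = {"display": 40, "camera": 60, "mcu+fpga": 30, "bluetooth": 10}

total_ma = sum(draw_ma.values())                       # 140 mA assumed
print(f"~{60 * capacity_mah / total_ma:.0f} minutes")  # -> ~30 minutes
```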
To be fair, I feel like the 16x frame looks more like 2x than 1.5x, but yeah someone ought to do the math there and be honest about what you're going to be seeing.
/me realizes something
Wait, the camera is advertised as being 720p. No mention of optical zoom (and smartphones (which are bigger and heavier!) don't have that either). I'm allowed to drive, so I'd say my eyesight ought to be better than 720p (at least after the brain's post-processing, as eyeballs are allegedly pretty terrible). In that case, "zooming" cannot help to see more. If this crops 720 pixels by a factor of 16, you get 45 pixels blown up to the whole screen. Plus motion blur. I can understand they didn't show what 16x would look like!
What a scam of a feature, neither being 16x nor being useful at all nor being zoom by any of merriam webster's definitions. Either that or I am majorly missing something here.
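The crop arithmetic, spelled out (a 1280x720 source frame is assumed here):

```python
# Digital "zoom" on a fixed-resolution frame just crops and upscales,
# so the real detail shrinks linearly with the zoom factor.
def source_pixels(width: int, height: int, zoom: float) -> tuple[int, int]:
    return round(width / zoom), round(height / zoom)

for z in (1.5, 2, 8, 16):
    w, h = source_pixels(1280, 720, z)
    print(f"{z:>4}x zoom -> {w}x{h} real pixels stretched to full screen")
# 16x -> 80x45: forty-five rows of actual image data.
```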
This thing doesn't have nearly enough power nor battery on board to run even the smallest stable diffusion model at all, not even talking about the nightmare that would be doing it in realtime.
If you want, you could do it offboard but there's a reason wireless VR headsets don't work, and it's latency.
I want an open source monochrome vector display delivered by laser to my retinas directly. I would slambuy such a thing and dedicate the rest of my life to making the best pocketable chorded keyboard.
I, like many of you, am very good at telling a computer what to do with letters and symbols. I spend a lot of time in voice calls and I often write little scripts while I'm talking to answer questions, ballpark numbers etc. That's not to mention the power of tools like Wolfram Alpha and Google.
I would like that to be everpresent in my life. I know we'll get some semblance of this with conversational AI at some point soon, but for me I crave the determinism of real programming. I want to be able to summon a quake-style REPL from the sky at will, and while looking someone face to face google facts or compute probabilities. I want to be able to sketch algorithms while walking around in nature.
(Intel was working on something like this at one point but sadly the project was abandoned)
I remember that Intel project: Vaunt. They sold the patents to North, which was acquired by Google. According to this reddit thread, most of the team went to Magic Leap afterwards.
I knew some people at a university lab which had developed exactly this — back in the 90s or 00s. I think the problems it had are mostly solved since then (lightweight compute/fast digital radio/etc).
Presumably the university eventually sold the patents off and they're sitting in some dusty corporate vault now, but maybe they'll expire soon.
I have been seriously considering building a keyboard along these lines for in person meetings, so I can essentially have it on the table in front of me and take notes without a display.
You might enjoy Ben Vallack's videos on the topic (18-key keyboard).
Lasers are regular light, just all pointed in the same direction. The laser's output power is what makes it "high energy", and the number of photons hitting your retina is directly responsible for any warming of the tissue. A laser with just a bit of power, enough to outshine regular daylight by a little, would be more than enough in most cases, without concern about damaging your eyes.
Laser just means the light is tightly focused in a beam. It says nothing about intensity or wavelength. Those little pen-size laser pointers run on AAA batteries.
Welcome back to the cyborg wearable kids at MIT (and elsewhere) in the 90s. An earlier version of this was the keyboard of choice: https://twiddler.tekgear.com/
Dumb question maybe, but why does it need to be transparent or else projected directly into your eye? What doesn't work (or wouldn't work for you) about a flip-down hard backed display?
A lot of distance approximation in humans is done with parallax. (Not everyone's: I worked with a guy whose brain wouldn't do distance approximation via parallax, so when he went to 3D movies with friends he wore an eyepatch.)
If you completely obscure one eye's vision that no longer works and you've lost your depth perception.
Even having one eye blurry is sufficient for parallax to work. (I have very different prescriptions for each eye and if I take my glasses off I can still do distance vision even though one eye can't read any text further away than about 2 feet)
So try it: put an eye patch on for a day and see how many door frames you walk into, or the headache you may get from focusing each eye at a different distance (and that's fairly tiring for the lens muscle to be doing all day if it isn't properly exercised).
That makes sense. I wonder how these transparent displays handle the focus issues, though. If you're not looking at your Monocle/Glasses, maybe 1 inch in front of your eye, but instead at something hundreds or thousands of times that distance behind it, how does the semitransparent, information-dense overlay appear at all, let alone in sharp focus?
The "Open source platform" part of the slideshow shows object detection at various distances. If it can box a car 20 feet away, a building 400 yards away, near objects on my breakfast plate, etc., then the plane of projection has to vary over a huge range as you look around (?). In general many of the demos I see seem to have remarkable things going on, in terms of what parts of the UI and world are in focus simultaneously.
Small, high resolution sensors often have a very large depth of field. As an example, if I take a 50mm lens on a full frame 35mm camera at f/8 and I focus at 50 feet away, everything from 25 feet to infinity is in acceptable focus.
While I'm making up some numbers... the sensor for that camera is 3.6mm x 2.7mm. A 5mm lens (that's a made up number) at f/3.5 (that's a guess) would have everything from 5 feet away to infinity be in acceptable focus. If it's a 3mm lens (again, made up number), then everything from 2 feet to infinity is in acceptable focus.
So having the stuff on your plate and the building in the distance both be in focus - yea, that's something reasonable.
But I am still interested in the overlay and how distracting that actually becomes.
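For anyone who wants to check those admittedly made-up numbers, the standard hyperfocal-distance formulas are quick to run. The 0.03 mm circle of confusion below is the usual full-frame assumption; units are millimetres:

```python
# Depth-of-field arithmetic behind the example above, using the standard
# hyperfocal formulas.
def dof(focal: float, f_number: float, subject: float, coc: float = 0.03):
    hyper = focal ** 2 / (f_number * coc) + focal
    near = hyper * subject / (hyper + (subject - focal))
    far = (hyper * subject / (hyper - (subject - focal))
           if subject < hyper else float("inf"))
    return near, far

# 50mm at f/8 focused at 50 ft (15240 mm):
near, far = dof(50, 8, 15240)
print(near / 304.8, far / 304.8)  # ~20 ft to infinity with these inputs,
                                  # the same ballpark as the 25 ft quoted
```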
What you described is how a camera sees the world, and how we see it in pictures. Part of what seems unconvincing to me about these concept ideas is that the demo UIs are superimposed over basically isometric, wide depth of field pictures and video.
So by plane of projection I mean the apparent distance of the virtual image of the UI. If your eyes are focused "through" the display on something a few yards away vs. relatively close, the light from the UI needs to come into your eye (or possibly glasses etc.) as though it is tracing rays in parallel from roughly the same distance away.
Quake style consoles and other HUDs work in video games because in reality the entire scene is coming from the plane of the display, some inches in front of you. If you tried to really focus on a game object 20 yards away, instead of on the screen in front of you, the HUD wouldn't be visible anymore.
In VR optics I believe the virtual screen is something like ~6 feet out in front of you. It is a compromise and still causes eye strain, but is workable perceptually. The issues for transparent AR seem much more complex.
Many of the far-out concepts and ideas that are mocked up for AR seem actually very achievable right now, or yesterday, if the display works like a phone or laptop or VR goggle's does, by re-projecting camera input from a plane a few (possibly virtual) inches or yards away from your eye. The value added though is pretty niche, because people have mobile phones anyway. But if the iPhone hadn't happened, a little display in front of the eye, a whole visor, or a pop-up wrist computer might have been possible to sell. It sounds kind of silly now, but that's what the expectation was in the 80s-90s. The idea of putting a computer on your head still seemed cool then.
I want to keep situational awareness. I want to be able to interact with an ever present computer as second nature. I want it to be a seamless part of my life. Bicycle for the mind and all that.
Obviously a full color display that could overlay reality at will with a wide FoV would be better, but I think the monochrome vector laser thing is achievable today and for me personally would be invaluable.
I'm confused why there is no YouTube link in the social media section on the site. A demo video is what I need most to make a decision about buying a product.
Tilt Five is a pretty impressive demo of how good (and cheap!) retroreflection can be, but tbh I think the format is just kinda unusable outside of demonstrations. It'll always be a "small" surface, or it'll take up way too much room that you can't use for literally anything else, as you risk damaging the surface. And when it's big, you lose a ton of usable space because you're seeing the far stuff at a sharp angle, so you can't render "big" things over there.
Maybe a roll-down giant screen like for a projector display would work better, to give you a big virtual "hole" in a wall, but I've only seen it on those smallish tables. Possibly due to projection power/angle limits?
I like all these "cool" devices and love the idea; I even got myself an Nreal Air. But these are overpriced gadgets, not worth the money you have to pay for them, because they do nothing well: not good AR, not a good screen, not a convenient device, and expensive.
Same. They're neat tech demos, but aside from turn-by-turn directions I don't really see any of them being particularly helpful aside from being a novelty. I really don't get the demand for weird social overlays on top of people, that seems distracting more than anything.
(Even demand for the turn-by-turn I think is largely driven by people allowing turn-by-turn to erode their natural sense of direction and ability to recall directions, but that's a separate point).
There are some useful cases I can imagine, but they would require so much collaboration by other contributors who aren't traditionally inclined towards good UX and software, and would be so niche that I don't see them driving a successful consumer product. In those cases the hardware isn't really the hard question it's "how do we deliver great experiences that can accomplish this without a hitch and without getting in the way?"
For instance, presenting overlays is very useful for something like an inspector being able to easily cross-reference maps, blueprints, and schematics. Uploading an instruction manual for flat-pack furniture and having it literally tell you what to do. Meal prep services overlaying recipes. A tool for guided tours at museums (in which case you'd rent the AR device instead of owning your own). Maybe as a bike or running computer to overlay your time, speed, splits, or whatever.
But these are all such niche use cases I can't imagine a company like Apple, that generally aims to have product lines that sell in the hundreds of millions of units, would ever be in that market.
Many of those are viable "enterprise" use cases.. which usually means expensive hardware, but we're already seeing several of those deployed. Microsoft seems to have made a very good decision with HoloLens to ignore the consumer market and aim entirely at enabling high-end stuff. Overlays for workers isn't common yet, but it is in use. And the Army isn't giving up on IVAS despite continued teething problems.
And why do they have a patent on it if it's open source? Do they mean just the code? (It looks like only the code is under the MIT license, so presumably just that.)
Patented products are strictly worse. Imagine Monocle engineers come up with a slightly better design, but they realize it's not covered by their patent. What will they do?
As soon as one has a patent, then they are incentivized to sell their patented thing, not to switch to the thing that is better for their customers.
Patents are a way for lawyers to inject themselves where they don't belong to extract rent, making us all worse off. Patents are a negative sum game.
Maybe. That has a non-zero cost, and if the patent office is doing its job, the new thing may not be patentable even though it is better, because there is prior art, etc.
The original concern is valid w.r.t. patents, in addition to the many, many other concerns about how they are used nowadays, and indeed how much BS there has always been, ever since Alexander Graham Bell stole the telephone idea from someone else at the patent office and then used patent enforcement to good effect. It is extremely widely believed around here that things are much worse with respect to patents now.
"Patents encourage innovation" seems to have become an idea that marks one out as unbelievably naive. Or a lawyer.
It'd be cool if something like this could be used to selectively overlay thermal/ir/uv, night vision, or provide a colorblind accessibility tool. I think I would have a use for it then.
It already can be and has been done. Look at Mann's Thermocam EyeTap, or half of his other papers. He revisits mediating human vision with extrasensory overlays often.
I'll never forget seeing Steve Mann roaming the halls and conferences at ACM '97 in his full gear [0], antenna rising above the crowd. I remember thinking, this guy might be on to something here...
I have some AR glasses that plug into a phone, and I've been experimenting with running the camera feed and using it as a third eye (looking up on shelves, eye in the back of the head, etc). It's less disorienting than I expected, and in an integrated product I'm convinced it would be useful. The camera in the Monocle is unfortunately very old and cheap, but a modern phone main sensor could do wonders for accessibility. Besides what you've listed, written text magnification and highlighting has huge potential.
Would glasses like yours work for passive monitoring of some basic tui dashboards while hiking?
The ideal time in my personal daily schedule for a 45-minute hike/walk is 8am-9am. However, at exactly that time, some important automated business processes (markets-related) occur that it's best I keep an eye on.
If I could wear some glasses that display the required information safely, I probably could lose 15 lbs easily, as I'd be able to exercise regularly by hiking each morning, weather permitting. I also carry a MicroPC with LTE connectivity, and a phone, so if I saw a serious issue on the AR glasses I could just stop somewhere and get access.
Maybe. They'd work for monitoring the dashboards. I use Samsung DeX to get a full desktop environment, and if you set its desktop to black (transparent, in the glasses) you can set floating windows in your vision for any apps. I think newer versions of Android support a native desktop mode too. The hike part is the question: they resemble dark sunglasses with extra-thick frames so they do obscure the view a bit. Walking around a neighborhood's probably fine but I wouldn't do a dirt trail. The glasses are Nreal Airs if you'd like to look them up.
Have you considered putting the dashboards on a smart watch?
A smart watch approach is a good suggestion. I should look into learning how to code such a thing. I have an Apple Watch, but I'm guessing a bespoke watch-face dashboard might be easier to do on another smart watch platform more friendly to a Linux developer.
The Nreal Airs are definitely interesting, thank you. Could work for the while-doing-dishes example mentioned in another comment.
Yeah, I'm curious about that: is an IMU enough to work well? The iPad/iPhone with lidar was rock solid when I tried it.
I don't know if you can offload processing onto your phone with this; the lag would either be bad or it's just not possible. The phone's IMU wouldn't make sense (it's in your pocket), but yeah. Not sure how you'd do the VIO without the I.
Are there any relatively low cost 6DOF standalone tracking systems in the wild?
The closest thing I know of are the Quest Pro controllers, but I don't understand why nobody is building such a thing as a plug-and-play solution that you can stick onto something and have immediate inside-out positioning.
It's not exactly what I was aiming for. Mocopi, SlimeVR, Haritora X, ROKOKO etc. are all IMU based body trackers. So they try to infer a skeleton from the individual trackers but they don't provide absolute positioning.
What I am hoping for is a single tracker that can absolutely position itself in the environment.
Basically, a self-contained SLAM-in-a-box.
Looks like you can program it yourself so that's great.
Get more familiar with FPGAs.
I'm thinking about it.
Well... I've supported SVR and Pine64, why not this.
If nothing else, I can use it to record videos of my stuff working at the workbench (not a lot of onboard storage, though).
Ordered one, will see how it goes. It definitely blocks a significant part of your vision but I figure I can toy around with it while I'm cooking food. 3D print some glasses for it.
I would be curious, if you bought two, whether they could talk to each other... but the hardware kind of blocks your vision and does not seem ideal. Still, it's something to tinker with.
Uhh... this is supposedly my non-doxxable account, but I will post a video on it on YT once I have it. I've done other devices, e.g. Pinephones/SimulaOS/Remarkable 2, so whenever I get this I will post it.
Looks like once bought they ship from NY, so I should have it in a week or so.
Cameras are quickly becoming as ubiquitous and boring as light bulbs. I’m actually more interested in how a light bulb works than some of these projects that are targeted at software developers for learning purposes.
There are tons of these dev hardware projects you can buy, which usually are centered around having a camera. "Did you ever want to see in slow motion? Now you can." "Never miss a moment." Go ahead and miss the moment. Bigfoot is fake. There is not much point to this anymore.
I see a lot of projects that hunt out developers as customers, but I can't get excited for them most of the time. I think Playdate is one of the highest-quality products of this type. Behind that are some of the holographic 3D displays they are making now that use 3D pixels. They can go for $200. Pretty cheap.
Augmented reality is very lame to me. I think most of the hope for useful AR applications was snuffed out long ago. It's the same question asked a different way: "Why can't you just google the answer? Why do you need to do x, y, z (go to college, call a plumber, whatever)?" The answer is, invariably, nobody has the data! A lot of good data is being held hostage or destroyed by greedy corporations. It's a winner-take-all world. Companies are reluctant to have a brand new game.
Neat, but (still) pricey. Not sold on the 640x480 display; it requires work to focus your eye at such a short distance, and due to presbyopia that becomes increasingly impossible as you age. In my 20s I could focus on things a couple of centimeters (~1") away with the naked eye; in my 50s it's about 15cm (5-6"). You can still perceive color, movement, etc., so a ring of LEDs would convey information well, but trying to read anything like a map or scrolling messages limits your market to the youth, and probably the myopic youth at that.
This is interesting, but I'm not sure what the value add is over a smartphone camera. Normally AR lets you project into stereoscopic vision and create 3D shapes in the user's view, but the monocle sacrifices this. Perhaps portability, but it looks pretty chunky to wear. Smartphones are actually great for a lot of AR tasks; apps like the one that puts a chevron over mountain peaks when hiking continue to impress me. But it'd be cool if it finds a niche; I don't want to sound too critical.
Why does this product's website hide the product pictures, videos, detailed explanations of how it works, etc.? It might get 500+ votes on HN, but if you remove the votes, I'd think this was a scam product from China. How can you even trust and upvote a strange site with not even a proper footer or team info? They're asking $349 for this thing, too.
Probably a lot of the same use cases as google glass when you pair it with a smart phone. I don't think it's going to be "augmented reality", you're not going to have very much luck live-overlaying map data with the base unit. The built in FPGA means you can probably do some real time stuff if you really want.
The FPGA+micropython means that there's a lot you could do on device if you put in enough development time and converted your code to run directly on the FPGA, but it probably works best if paired with a smart phone. Someone like google could probably get this doing real time text translation off that FPGA, but I sure can't.
When paired with a phone, you could use this to do real-time subtitles for the hard of hearing, display bike directions; there are all kinds of uses. Really, all it has is Bluetooth, the camera, some touch controls along the rim, and the display.
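As a sketch of that pairing, the device side could be as simple as rendering whatever lines arrive over Bluetooth. To be clear, the `display` and `bluetooth` modules and every call signature here are guesses about what an on-device MicroPython API might look like, not the real SDK; check the actual docs:

```python
# Hedged sketch of the monocle side of a phone pairing: render whatever
# text arrives over BLE. All module names and signatures below are
# assumptions; consult the real SDK.
import display      # hypothetical on-device display module
import bluetooth    # hypothetical on-device BLE module

def on_receive(data: bytes) -> None:
    display.clear()                                  # hypothetical call
    display.text(data.decode("utf-8"), x=10, y=10)   # hypothetical signature
    display.show()                                   # hypothetical call

bluetooth.receive_callback(on_receive)  # hypothetical registration call
```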
I recently built an AR indoor navigation system for people with cognitive impairments. Is there any way to include an inertial measurement unit (IMU) in this form factor so we can use VIO?
How about an app for travelers that translates what people around you say in real time, then displays speech bubbles in your native language next to them?
Looks like it has a clip to attach to the front of a pair of glasses.
Obviously suboptimal, really needs to be integrated into some old-timey welding goggles for the steampunk vibe.
At $349 it’s almost in the range where I’d buy it to play around with but I suspect it’d end up in a drawer like a bunch of other things that looked cool but I never got around to hacking on. Once the technology gets good enough where they can fit it into a regular pair of glasses then we can talk — though looking at this Bluetooth Aftershockz headset I use that day probably isn’t too far off.
I suspect that fitting something like this into what looks like standard prescription glasses will require some advances in manufacturing optical materials. I think it's already possible to make the appropriate lenses to sufficient precision, but the cost is a bit prohibitive.
The other issue is that the projection display is a bit low-res for what I'd really like to see, but 1 pixel works out to ~0.5mm at 1m, so it might be ok. That might be precise enough for (for instance) identifying particular through-hole connections on a circuit board at a rework station. It's certainly adequate for identifying components on an engine.
Okay, to correct myself, looking through their documentation it seems at least the electronics schematics are OHL-P, but they only provide PDFs, and no actual source files.
There doesn't seem to be much on the physical hardware though.
What I'm gonna try is text reading. Like, send in snippets of text from whatever news/twitter... and read it while doing dishes or waiting for food to cook (assuming a Bluetooth connection from the phone).
Yeah, I can do Python; I have not used MicroPython, and I have not worked with FPGAs before, so that's not what I'm trying to do now. The MicroPython sounds more tangible to me.
So I'm going to see about somehow feeding text into the monocle via phone/BT, like an OBS situation. Then you could use your phone's screen as a mouse/input method. I know it seems pointless (just use your phone), but you know... it's cool.
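The phone/laptop side of that is doable today with the cross-platform bleak BLE library. The device address and characteristic UUID below are assumptions: the UUID is the standard Nordic UART Service RX characteristic, which this particular device may or may not expose.

```python
# Sketch of the sender side: push text to the device over BLE with bleak.
# Address and UUID are assumptions; verify against the device's docs.
import asyncio
from bleak import BleakClient

NUS_RX = "6e400002-b5a3-f393-e0a9-e50e24dcca9e"  # Nordic UART RX (assumed)

async def send_text(address: str, text: str) -> None:
    async with BleakClient(address) as client:
        # BLE writes are small; chunk to a conservative 20-byte payload.
        data = text.encode("utf-8")
        for i in range(0, len(data), 20):
            await client.write_gatt_char(NUS_RX, data[i:i + 20])

asyncio.run(send_text("AA:BB:CC:DD:EE:FF", "headline: markets open mixed"))
```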
I saw those a while back; they looked interesting. On the both-eyes front, I already pre-ordered a Simula VR headset. I know they're not the same thing, but yeah.
I’ve been looking more at the Nreals since I replied and they are potentially an out of the box solution to the text information while doing dishes.
I’m thinking a custom textual python terminal app on transparent terminal emulator on black background with tiling window manager. I have a GPD microPC so it seems if I add the Nreal adapter to connect hdmi. Then all it’s really missing is some type of input mechanism maybe via bluetooth.
The Nreals looks potentially good enough to be literally walking around safely if the textual TUI is designed in a way to not be too distracting.
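For the TUI itself, a minimal Textual sketch of the black-equals-transparent trick; the feed lines are placeholders for whatever the real market data source would be:

```python
# Minimal Textual dashboard: black background (rendered transparent by the
# glasses), sparse green text. Feed contents are placeholders.
from textual.app import App, ComposeResult
from textual.widgets import Static

class HikeDashboard(App):
    CSS = """
    Screen { background: black; }
    Static { color: green; background: black; }
    """

    def compose(self) -> ComposeResult:
        yield Static("SPX  +0.3%   open")  # placeholder feed line
        yield Static("alerts: none")       # placeholder feed line

if __name__ == "__main__":
    HikeDashboard().run()
```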