I'm thrilled to see apps like these, because I believe AI's greatest potential is to help people become smarter and better educated, with better judgment. Whether it improves critical thinking by asking challenging questions, helps you think through complicated scenarios (even just things in your own life), teaches subject matter adaptively so concepts are easy to grasp, or just makes learning come to life (your own personal professor).
It's sad that none of the leading voices in AI talk about this potential. Have they ever? The ratio of talk about apocalypse or singularity to talk about things like this probably tells you something.
They need to add a way to tell the system it has a bad question/answer combination (in case any of the devs are on here).
Example:
Question 2: What is the branch of science that studies the composition, properties, reactions and the structure of organic compounds?
I responded with Organic Chemistry, but it only accepts the answer Chemistry. While that's technically true, it's misleading for someone trying to learn: the question specifically defines the organic branch of chemistry, not chemistry as a whole.
Totally understood. When you add a quiz, you can enter multiple accepted answers per question, so quiz quality depends on the person who entered it. The intention is not for us to come up with curriculum for everything, but for people to create their own quizzes, which they can distribute to a class.
However, that particular quiz is one of our "launch" quizzes, so I went ahead and added that answer. Both chemistry and organic chemistry should work now.
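For what it's worth, the multiple-accepted-answers mechanic described above can be sketched as a simple set-membership check. This is a hypothetical illustration in Python (the backend's language), not the app's actual matching code; the function names are invented:

```python
# Minimal sketch of per-question accepted-answer matching (hypothetical;
# the real backend's logic isn't public). Each question keeps a list of
# accepted answers, and the check normalizes case and whitespace.

def normalize(answer: str) -> str:
    """Lowercase and collapse whitespace so minor typing differences pass."""
    return " ".join(answer.lower().split())

def is_correct(response: str, accepted_answers: list[str]) -> bool:
    """True if the response matches any accepted answer after normalization."""
    return normalize(response) in {normalize(a) for a in accepted_answers}

# After "organic chemistry" is added to the accepted list, both forms pass:
accepted = ["chemistry", "organic chemistry"]
print(is_correct("Organic Chemistry", accepted))  # True
print(is_correct("  chemistry ", accepted))       # True
```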
I'd expect quiz creators would find embedded feedback from users valuable: a "Report a problem" or "Send comment to author" button. Even with play testing, quiz bugs turn up in use.
> Totally understood.
At least for me, learning from user-testing feedback is like consulting: confidence that I totally understand what's going on is a big red warning flag with sparkles. :) In part because getting from what users say to what they need is such a performance art.
In the web-based text version, if I type out my answer too fast, it seems to truncate some of the characters. E.g., I typed "forbade" and hit enter, but it parsed only "forba".
I'm guessing you started in voice mode and then switched to text, so your mic is overwriting your text. If you stop the mic or use the text button, it shouldn't do that.
User experience: Came to N.A from HN to explore. Saw a long page of "Latest quizzes". Looked for, but didn't find, a "Recommended quizzes" section. My experience is that most quizzes are poor, so this was discouraging.

I chose a topic of interest, but one which is often taught badly: "The Periodic Table of Elements Part One". The quiz page gave me a title and the number of questions (57), and clicking "Start quiz with text" gave me the first question. Being on Firefox, I expected voice not to work. I considered switching to Chromium, but didn't. I thought "this will be a long quiz". I looked for a way to see a list of the quiz's questions, but didn't find one.

Rather than beginning to take the quiz, I left the site: insufficiently engaged for a long quiz while unable to visualize what success would look like. Additional context: I've worked with speech in education, so that had reduced novelty pull for me. This report has been somewhat simplified. FWIW.
Definitely, lots more planned. This is just the launch product. I've been thinking of integrating other platforms like Amazon Alexa. Embeddables and APIs would also be possible. Let me know your use case. You could actually hack the API now, since it's a SPA architecture with a GraphQL API.
As for the stack: the frontend is VueJS and Vue Material. The backend is Python, Django, Postgres, Elasticsearch, and some Redis sprinkled in for caching.
It's running on pretty limited resources right now, but I'll scale it up when I get some traction. So if it's a little slow, sorry not sorry, but it seems to be holding up pretty well for using mostly free resources.
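Since it's a standard GraphQL-over-HTTP setup, hitting the API from outside the SPA would look something like the sketch below. The endpoint path, query shape, and field names here are invented for illustration; in practice you'd copy the real ones from the browser's network tab:

```python
# Hypothetical sketch of calling a GraphQL API from outside the SPA.
# The query, variables, and endpoint below are made up; only the JSON
# body shape ({"query": ..., "variables": ...}) is standard GraphQL.
import json
import urllib.request

def graphql_payload(query: str, variables: dict) -> bytes:
    """Build the standard GraphQL-over-HTTP JSON request body."""
    return json.dumps({"query": query, "variables": variables}).encode("utf-8")

# Invented query; real field names would come from inspecting the SPA's traffic.
QUERY = """
query LatestQuizzes($limit: Int!) {
  quizzes(limit: $limit) { id title questionCount }
}
"""

def fetch_quizzes(endpoint: str, limit: int = 10) -> dict:
    """POST the query to the (hypothetical) endpoint and return parsed JSON."""
    req = urllib.request.Request(
        endpoint,
        data=graphql_payload(QUERY, {"limit": limit}),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```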
In addition to the order of a list being unnecessarily important, as mentioned by other users, in the "Periodic Table" quizzes any element symbols that form a word get interpreted as that word, so your system thinks that "He" (rhyming with "she") is the correct answer for Helium, rather than the spelled-out letters H-E.
I didn't personally test them out, but I imagine Astatine and Beryllium would exhibit the same behaviour, and there are probably more.
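One way a fix for this could work is to normalize both the response and the expected symbol before comparing, so "He", "H E", and "H.E." all match. A minimal sketch (hypothetical, not the app's actual matching code):

```python
# Sketch of accepting an element symbol whether it's spoken/typed as a
# word ("He"), spelled with spaces ("H E"), or with periods ("H.E.").
# Hypothetical illustration only.

def normalize_symbol(answer: str) -> str:
    """Keep only letters and lowercase them, so separators don't matter."""
    return "".join(ch for ch in answer.lower() if ch.isalpha())

def symbol_matches(response: str, symbol: str) -> bool:
    """True if the response reduces to the same letters as the symbol."""
    return normalize_symbol(response) == normalize_symbol(symbol)

print(symbol_matches("H E", "He"))   # True
print(symbol_matches("H.E.", "He"))  # True
print(symbol_matches("Na", "He"))    # False
```

The same normalization would cover At (Astatine) and Be (Beryllium) being heard as the words "at" and "be".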
It's kind of frustrating that the reply to an incorrect answer is a repeated "incorrect, try again." After a couple times, I just want to know the answer.
I love this! I'd like to integrate it with a deduplicated, taxonomized, and crowdsourced database of facts I'm working on: branches-app.com/theplan. Would love to get in contact with you.