Glossary

Editor's Note: Some of the more detailed definitions will be moved to other pages; they are collected here for now to keep track of the information.

The Project

The OpenVoiceOS Project (OVOS)

All the repositories under OpenVoiceOS organization

The OpenVoiceOS Team

The team behind OVOS

Terms

Confirmations

Confirmation approaches can also be described in terms of Statements or Prompts, but when we talk about them in the context of confirmations we call them Implicit and Explicit.

Implicit Confirmation

This type of confirmation is also a statement. The idea is to parrot the information back to the user to confirm that it was correct, without requiring additional input from the user. Implicit confirmation can be used in the majority of situations.

Explicit Confirmation

This type of confirmation requires input from the user to verify that everything is correct.
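
A minimal sketch of an explicit confirmation inside a skill handler, assuming the ask_yesno() helper from the OVOS/Mycroft skill base class; the skill, intent and dialog file names here are invented for illustration.

```python
from ovos_workshop.decorators import intent_handler
from ovos_workshop.skills import OVOSSkill


class TimerSkill(OVOSSkill):  # hypothetical skill, for illustration only
    @intent_handler("cancel.all.timers.intent")  # hypothetical intent file
    def handle_cancel_all(self, message):
        # Explicit confirmation: require a yes/no answer before acting
        answer = self.ask_yesno("confirm.cancel.all")  # hypothetical dialog file
        if answer == "yes":
            self.speak_dialog("timers.cancelled")
        else:
            self.speak_dialog("cancel.aborted")
```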

Conversations

Any time the user needs to input a lot of information or sort through a variety of options, a conversation will be needed. Users may be accustomed to systems that require them to separate input into different chunks.

Context

Allows for natural conversation by having skills set a "context" that can be used by subsequent handlers. Context could be anything from person to location. Context can also create "bubbles" of available intent handlers, to make sure certain Intents can't be triggered unless some previous stage in a conversation has occurred.

You can find an example Tea Skill using conversational context on GitHub.

As you can see, Conversational Context lends itself well to implementing a dialog tree or conversation tree.
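
A condensed sketch of the same idea, assuming the Adapt IntentBuilder and the set_context()/remove_context() helpers from the skill base class; the keyword and context names are invented for illustration.

```python
from adapt.intent import IntentBuilder
from ovos_workshop.decorators import intent_handler
from ovos_workshop.skills import OVOSSkill


class TeaSkill(OVOSSkill):  # hypothetical skill, loosely modelled on the Tea Skill
    @intent_handler(IntentBuilder("TeaIntent").require("TeaKeyword"))
    def handle_tea(self, message):
        # Setting context opens a "bubble": MilkIntent below can only
        # trigger after this handler has run
        self.set_context("MilkContext")
        self.speak("Would you like milk with that?", expect_response=True)

    @intent_handler(IntentBuilder("MilkIntent")
                    .require("MilkKeyword").require("MilkContext"))
    def handle_milk(self, message):
        self.remove_context("MilkContext")  # close the bubble again
        self.speak("Adding milk.")
```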

Grapheme

A letter or combination of letters that represents a single phoneme.

Home Screen

The OpenVoiceOS home screen is the central place for all your tasks. It is the first thing you will see after completing the onboarding process. It supports a variety of pre-defined widgets which provide a quick overview of information you need, such as the current date, time, and weather. The home screen contains various features and integrations which you can learn more about in the following sections.

Intent

The classification of an utterance into an action and its entities (e.g. 'turn on the kitchen lights' -> skill: Home Assistant, action: turn on/off, entity: kitchen lights).
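
As a hedged sketch, this is roughly how a skill receives the parsed intent for an utterance like 'turn on the kitchen lights'; the intent file, entity and dialog names are invented for illustration.

```python
from ovos_workshop.decorators import intent_handler
from ovos_workshop.skills import OVOSSkill


class LightsSkill(OVOSSkill):  # hypothetical skill
    @intent_handler("turn.on.lights.intent")  # hypothetical Padatious intent file
    def handle_turn_on(self, message):
        # Entities extracted from the utterance arrive in message.data
        room = message.data.get("room", "the room")  # e.g. "kitchen"
        self.speak_dialog("lights.on", {"room": room})
```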

MPRIS

(Media Player Remote Interfacing Specification) is a standard D-Bus interface which aims to provide a common programmatic API for controlling media players. More Information

mycroft.conf

Primary configuration file for the voice assistant. Possible locations:

  • /home/ovos/.local/lib/python3.9/site-packages/mycroft/configuration/mycroft.conf
  • /etc/mycroft/mycroft.conf
  • /home/ovos/.config/mycroft/mycroft.conf
  • /etc/xdg/mycroft/mycroft.conf
  • /home/ovos/.mycroft/mycroft.conf

More Information
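
A minimal sketch of reading the merged configuration from Python, assuming the ovos-config package that ships alongside ovos-core; the keys shown are common ones, but your configuration may differ.

```python
from ovos_config import Configuration

config = Configuration()  # merges the mycroft.conf files found on the system
print(config.get("lang"))                   # e.g. "en-us"
print(config.get("tts", {}).get("module"))  # name of the active TTS plugin
```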

OCP

OCP stands for OpenVoiceOS Common Play; it is a full-fledged media player.

OCP is an OVOSAbstractApplication, which means it is a standalone but native OVOS application with full voice integration.

OCP differs from mycroft-core in several aspects:

  • Can run standalone, only needs a bus connection
  • OCP provides its own intents as if it was a skill
  • OCP provides its own GUI as if it was a skill
  • mycroft-core CommonPlay skill framework is disabled when OCP loads
  • OCP skills have a dedicated MycroftSkill class and decorators in ovos-workshop
  • OCP skills act as media providers; they do not (usually) handle playback (see the sketch after this list)
  • mycroft-core CommonPlay skills have an imperfect compatibility layer and are given lower priority than OCP skills
  • OCP handles several kinds of playback, including video
  • OCP has a sub-intent parser for matching requested media types
  • AudioService becomes a subsystem for OCP
  • OCP also has an AudioService plugin component that provides a compatibility layer for skills using the old-style AudioService API
  • OCP integrates with MPRIS, so it can be controlled from external apps, e.g. KDE Connect on your phone
  • OCP manages external MPRIS-enabled players, so you can voice control third-party apps via OCP without writing a skill for them
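
As referenced above, OCP skills act as media providers. A hedged sketch of such a skill follows, assuming a recent ovos-workshop and ovos-utils; the exact import paths have moved between OVOS releases, and the skill name and URI are invented for illustration.

```python
from ovos_utils.ocp import MediaType, PlaybackType
from ovos_workshop.decorators import ocp_search
from ovos_workshop.skills.common_play import OVOSCommonPlaybackSkill


class MyMusicSkill(OVOSCommonPlaybackSkill):  # hypothetical media provider
    @ocp_search()
    def search_my_catalog(self, phrase, media_type):
        # Yield candidate results; OCP scores them and handles playback itself
        if media_type in (MediaType.MUSIC, MediaType.GENERIC):
            yield {
                "title": phrase.title(),
                "uri": "https://example.com/stream.mp3",  # hypothetical stream
                "media_type": MediaType.MUSIC,
                "playback": PlaybackType.AUDIO,
                "match_confidence": 50,
            }
```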

ovos-core

The central repository where the voice assistant "brain" is developed

OPM

OPM is the OVOS Plugin Manager; this base package provides support for arbitrary plugins across the OVOS ecosystem.

OPM plugins import their base classes from OPM, making them portable and independent of core; plugins can be used in your standalone projects.

By using OPM you can ensure a standard interface to plugins and easily make them configurable in your project. Plugin code and example configurations are mapped to a string via Python entry points in setup.py.
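
A sketch of the setup.py side of this mapping, following the entry-point pattern used by OVOS TTS plugins; the package, module and class names are invented for illustration.

```python
from setuptools import setup

# The entry point maps a plugin name (the string users put in mycroft.conf)
# to the module and class that OPM should load
PLUGIN_ENTRY_POINT = "my-example-tts = my_example_tts:MyExampleTTSPlugin"

setup(
    name="my-example-tts",  # hypothetical package
    version="0.1.0",
    packages=["my_example_tts"],
    entry_points={"mycroft.plugin.tts": PLUGIN_ENTRY_POINT},
)
```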

Some projects using OPM are ovos-core, hivemind-voice-sat, ovos-personal-backend, ovos-stt-server and ovos-tts-server.

OVOS-shell

The GUI service in ovos-core exposes a websocket to the GUI client following the protocol outlined here.

The GUI library which implements the protocol lives in the mycroft-gui repository, which also hosts a development client for skill developers wanting to develop on the desktop.

OVOS-shell is the OpenVoiceOS client implementation of the mycroft-gui library used in our embedded device images; other distributions may offer alternative implementations such as plasma-bigscreen or the Mycroft Mark 2.

OVOS-shell is tightly coupled to PHAL; the following companion plugins should be installed if you are using ovos-shell.

PHAL

PHAL is our Platform/Hardware Abstraction Layer; it completely replaces the concept of a hardcoded "enclosure" from mycroft-core.

Any number of plugins providing functionality can be loaded and validated at runtime; plugins can be system integrations to handle things like reboot and shutdown, or hardware drivers such as the Mycroft Mark 2 plugin.

PHAL plugins can perform actions such as hardware detection before loading; for example, the Mark 2 plugin will not load if it does not detect the SJ201 HAT. This makes plugins safe to install and bundle by default in our base images.
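
A heavily hedged sketch of a system-integration PHAL plugin, assuming the PHALPlugin base class from ovos-plugin-manager; the plugin name and the bus message it listens for are illustrative, not the real system plugin.

```python
from ovos_plugin_manager.templates.phal import PHALPlugin
from ovos_utils.log import LOG


class ExampleShutdownPlugin(PHALPlugin):  # hypothetical plugin
    def __init__(self, bus=None, config=None):
        super().__init__(bus=bus, name="ovos-phal-plugin-example", config=config)
        # React to a messagebus event with a privileged system action
        self.bus.on("system.shutdown", self.handle_shutdown)

    def handle_shutdown(self, message):
        # A real plugin would call into the OS here (systemd, D-Bus, ...)
        LOG.info("Shutdown requested over the messagebus")
```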

Phoneme

The smallest phonetic unit in a language that is capable of conveying a distinction in meaning, as the m of mat and the b of bat in English.

Service

Snapcast

Snapcast is a multiroom client-server audio player, where all clients are time synchronized with the server to play perfectly synced audio. It's not a standalone player, but an extension that turns your existing audio player into a Sonos-like multiroom solution. More Information

Prompts and Statements

You can think of Prompts as questions and Statements as providing information to the user that does not need a follow-up response.
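
A minimal sketch showing both in one skill handler, using the speak_dialog() and get_response() helpers from the skill base class; the dialog and intent file names are invented for illustration.

```python
from ovos_workshop.decorators import intent_handler
from ovos_workshop.skills import OVOSSkill


class WeatherSkill(OVOSSkill):  # hypothetical skill
    @intent_handler("weather.intent")  # hypothetical intent file
    def handle_weather(self, message):
        # Statement: give information, no reply expected
        self.speak_dialog("current.weather", {"temperature": 21})

        # Prompt: ask a question and wait for the user's answer
        city = self.get_response("which.city")
        if city:
            self.speak_dialog("forecast.for.city", {"city": city})
```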

QML

Qt Modeling Language (QML), the declarative language for Qt Quick UIs. More Information

The Mycroft GUI Framework uses QML.

STT

Speech To Text. Also known as ASR (automatic speech recognition), the process of converting audio into words.

TTS

Text To Speech. The process of generating spoken audio for the responses.

Utterance

Command, question, or query from a user (e.g. 'turn on the kitchen lights').

Wake Word

A specific word or phrase the assistant is trained to listen for, used to activate the STT (e.g. 'hey mycroft').

XDG

XDG stands for "Cross-Desktop Group"; it is a set of freedesktop.org standards that help with compatibility between systems. More Information