Libraries like GLFW inform the programmer about user input only through callbacks of the form "this is where the mouse moved to: (x, y)".
A "TLDR what is your meta-question?" ahead of time: Is there any tutorial, guideline, handbook or any other resource on what the implementors of software or frameworks like, Java Swing, JavaFX, QT, SWT, GTK, etc. thought, deliberated and decided on when it came to implementing their GUI toolkit / library /framework? Where can I find people that think about this stuff?
Going from here: after getting the (x, y) coordinates of the mouse cursor and calculating the difference from the previous call to obtain a (dx, dy) value, I now need to inform the UI Elements in my UI of what has happened so they can be updated accordingly.
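For example, with GLFW this boils down to a cursor-position callback that turns absolute coordinates into deltas. A minimal sketch, where prev_x/prev_y stand in for whatever application state holds the last known position:

#include <GLFW/glfw3.h>

// Last known cursor position; assumed to live in some application state.
static double prev_x = 0.0, prev_y = 0.0;

static void cursor_pos_callback(GLFWwindow *window, double x, double y) {
    (void)window;             // unused in this sketch
    double dx = x - prev_x;   // difference to the previous callback invocation
    double dy = y - prev_y;
    prev_x = x;
    prev_y = y;
    // hand (x, y, dx, dy) on to the UI manager here
}

// registered once after window creation:
// glfwSetCursorPosCallback(window, cursor_pos_callback);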
The first question is: what kinds of special UI Elements can exist in a generalized UI?
As one wants to organize a UI in a (possibly deep) hierarchy, the UI managing system only knows of a few top-level UI Elements. Because there are many different ways to represent and order UI Elements internally, the generalized UI Element type supports answering only one question: "Which UI Element lies below this (x, y) coordinate, from your point of view?" (from now on called get_hotspot(x, y)). A UI Element that is not at this position answers "nothing", a UI Element that is itself directly below the mouse cursor returns itself, and a UI Element containing sub-elements that are themselves below the mouse cursor returns the sub-element beneath the cursor if there is one, or itself if not.
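A minimal sketch of this contract, assuming a concrete element knows a bounds_contain(x, y) test and a children container (both hypothetical implementation details of a concrete element type):

UI::Element *UI::Element::get_hotspot(int x, int y) {
    if (!bounds_contain(x, y))
        return nullptr;                   // "nothing": the point is outside this element
    for (UI::Element *child : children) { // ask sub-elements first
        if (UI::Element *hit = child->get_hotspot(x, y))
            return hit;                   // a sub-element is beneath the cursor
    }
    return this;                          // no child claims the point: this element is the hotspot
}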
In terms of which "special" UI Elements need to be tracked separately, I can think of the following:
- The Element directly below the mouse cursor: The HOTSPOT
- The Element the user started a mouse click on (but has not released yet): one DRAGGED Element for each mouse button. Should this rather be called a GESTURE SOURCE Element?
- The Element that receives Keyboard or Gamepad Input, the FOCUS
Organizing things like this would allow me to direct all non-motion mouse events to the HOTSPOT, if one exists, or discard them if there is no element below the mouse at all.
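As a sketch of that dispatch policy, using the notify_press entry point from the interface shown further below and a hypothetical current_hotspot() helper that returns the top of the hotspot stack (or nullptr if the stack is empty):

void UI::UserInterface::notify_press(int x, int y, int button, input::Mouse &from) {
    (void)from; // unused in this sketch
    if (UI::Element *target = current_hotspot())
        target->notify_press(x, y, button, *this);
    // no element under the cursor: the event is simply discarded
}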
The only special cases are the elements on which a gesture (with any mouse button) was started: they need to be informed of mouse motion outside their boundaries, as this information might be interesting to them, and they also need to be informed if, when, and where a gesture that started on them ended.
The most complicated function is therefore the callback for the mouse-motion "event" (there is no event wrapping yet; this is bare bones): before anything else happens, all the DRAGGED elements need to be informed of the motion.
After this I'd need to manage the current HOTSPOT, if it exists:
- The mouse could have simply moved within its boundaries.
- The mouse could have moved outside of its boundaries.
- The mouse could have moved into a UI Element inside the boundaries of the current HOTSPOT.
The last case is confounding for me because of all the different cases that can happen here and need to be considered:
- The mouse did not LEAVE the previous HOTSPOT; it just entered a new Element INSIDE of the previous one, so informing the previous HOTSPOT of the mouse leaving would be erroneous here.
- If the mouse enters an element and at the same time enters one of its sub-elements (because the borders of the two elements coincide, for example), then both the containing and the contained element need to be informed of the mouse entering them. This is difficult if get_hotspot(x, y) returns only the hotspot directly beneath the queried position, because the overlying UI manager would never know that two whole elements were just entered by the mouse. Using a STACK of HOTSPOTS (explained below) would leave the stack in an erroneous state if this happens.
Similar confounding issues arise when the mouse leaves an element:
- Did the mouse leave the element into its parent? If so, the parent does not need to be informed that the mouse entered it, because it never left.
- Did the mouse leave the element AND its parent at the same time? If so, the parent does need to be informed of the mouse leaving, too.
This second set of confounding cases leads me to believe that a STACK of HOTSPOTS is a way to deal with these questions. If an event needs to be processed by a child entity, I'd inform it of the mouse having left, pop it from the stack, and then recursively call the notification routine again, so the next element on the stack is informed, until nothing else changes.
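A minimal sketch of that idea, with two assumptions that are not part of the interface below: UI::Element exposes a parent() accessor, and the stack is stored as a std::vector (here called hotspot_chain), since std::stack cannot be inspected below its top. Comparing the old chain against a freshly rebuilt one yields exactly the enter/leave notifications discussed above, including the "two elements entered or left at once" cases:

#include <algorithm>
#include <cstddef>
#include <vector>

void UI::UserInterface::update_hotspot_chain(int x, int y, UI::Element *deepest) {
    // Rebuild the chain root -> deepest for the new mouse position.
    std::vector<UI::Element *> fresh;
    for (UI::Element *e = deepest; e != nullptr; e = e->parent())
        fresh.push_back(e);
    std::reverse(fresh.begin(), fresh.end());

    // Find where the old and the new chain diverge.
    std::size_t common = 0;
    while (common < hotspot_chain.size() && common < fresh.size()
           && hotspot_chain[common] == fresh[common])
        ++common;

    // Elements no longer under the mouse are left, deepest first.
    for (std::size_t i = hotspot_chain.size(); i > common; --i)
        hotspot_chain[i - 1]->notify_leave(x, y, *this);

    // Elements freshly under the mouse are entered, outermost first.
    for (std::size_t i = common; i < fresh.size(); ++i)
        fresh[i]->notify_enter(x, y, *this);

    hotspot_chain = std::move(fresh);
}

A parent that was never left stays in the common prefix and receives no notification at all, which resolves the two leave questions above.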
Other questions arise like:
- Should an "enter"-notification be immediatly followed by a "move"-notification onto the very element the mouse just entered or should the motion be dispatched to it only upon the very next time the mouse is moved WITHIN the new HOTSPOT?
With this PREAMBLE complete, I can provide some code showing how I had devised an INTERFACE for the UI manager (here in C++):
#include <set>
#include <stack>

// MouseButton, input::Mouse, and input::Keyboard are defined elsewhere in the project.
namespace UI {

class UserInterface {
public:
    // Focus management for keyboard focus
    void request_focus(UI::Element &focus);
    void release_focus();

    // Drag and drop
    void attach_element(MouseButton button, UI::Element &drag);
    void release_element(MouseButton button, UI::Element &drag);

    // Callback functions fed from GLFW
    virtual void notify_movement(int x, int y, int dx, int dy, input::Mouse &from);
    virtual void notify_press(int x, int y, int button, input::Mouse &from);
    virtual void notify_release(int x, int y, int button, input::Mouse &from);
    virtual void notify_scroll(int x, int y, int dx, int dy, input::Mouse &from);
    virtual void notify_key_press(int key, int scancode, int mods, input::Keyboard &from);
    virtual void notify_key_release(int key, int scancode, int mods, input::Keyboard &from);
    virtual void notify_key_repeat(int key, int scancode, int mods, input::Keyboard &from);
    virtual void notify_typed(int codepoint, input::Keyboard &from);

    // API to be used by the scene manager
    void add_element(UI::Element &element);
    void remove_element(UI::Element &element);

private:
    // Keeping track of:
    std::set<UI::Element *> elements;        // all top-level elements
    std::stack<UI::Element *> hotspot_stack;
    UI::Element *current_focus;
    UI::Element *current_dragged;            // for the left mouse button; repeat for all other supported mouse buttons ...
};

} // namespace UI
The UI Elements then have to support the following API:
namespace UI {

class Element {
public:
    virtual void notify_motion(int x, int y, int dx, int dy, UI::UserInterface &from);
    virtual void notify_drag(int x, int y, int dx, int dy, int button, UI::UserInterface &from);
    virtual void notify_press(int x, int y, int button, UI::UserInterface &from);
    virtual void notify_release(int x, int y, int button, UI::UserInterface &from);
    virtual void notify_drop(int x, int y, int button, Element *dragged, UI::UserInterface &from);
    virtual void notify_scroll(int x, int y, int dx, int dy, UI::UserInterface &from);
    virtual void notify_key_press(int key, int scancode, int mods, UI::UserInterface &from);
    virtual void notify_key_release(int key, int scancode, int mods, UI::UserInterface &from);
    virtual void notify_key_repeat(int key, int scancode, int mods, UI::UserInterface &from);
    virtual void notify_typed(int codepoint, UI::UserInterface &from);
    virtual void notify_enter(int x, int y, UI::UserInterface &from);
    virtual void notify_leave(int x, int y, UI::UserInterface &from);
    virtual void notify_focus_gained(UI::UserInterface &from);
    virtual void notify_focus_lost(UI::UserInterface &from);

    // Looking deeper into the code of the JavaFX framework mentioned above, one can also support:
    virtual void notify_dragged_over([...]);
};

} // namespace UI
Why was all this thought put into it? Well, if you have a UI button that the user presses and holds with the left mouse button, then drags the mouse away from the button, does NOT release the mouse button, re-enters the button within the same gesture, and THEN releases the left mouse button, the button needs to be considered clicked, while the button changes states from "IDLE" to "FOCUSED" to "DEPRESSED" to "IDLE" (upon leaving) and back to "DEPRESSED" (upon re-entering in the same motion)! (You can try this out on reddit, for example, by performing the motion described above on the "fold down" buttons next to a post's metadata.)
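As a sketch of how a concrete button could implement exactly those state changes on top of the Element API above (ButtonState, gesture_active, and on_click() are hypothetical names):

enum class ButtonState { IDLE, FOCUSED, DEPRESSED };

class Button : public UI::Element {
    ButtonState state = ButtonState::IDLE;
    bool gesture_active = false; // a press started here and has not been released yet

public:
    void notify_enter(int x, int y, UI::UserInterface &from) override {
        // re-entering during the button's own gesture goes straight back to DEPRESSED
        state = gesture_active ? ButtonState::DEPRESSED : ButtonState::FOCUSED;
    }
    void notify_leave(int x, int y, UI::UserInterface &from) override {
        state = ButtonState::IDLE; // visually released while the cursor is away
    }
    void notify_press(int x, int y, int button, UI::UserInterface &from) override {
        gesture_active = true;
        state = ButtonState::DEPRESSED;
    }
    void notify_release(int x, int y, int button, UI::UserInterface &from) override {
        if (gesture_active && state == ButtonState::DEPRESSED)
            on_click(); // released over the button within its own gesture: a click
        state = (state == ButtonState::DEPRESSED) ? ButtonState::FOCUSED
                                                  : ButtonState::IDLE;
        gesture_active = false;
    }
    virtual void on_click() {} // hypothetical hook for the actual click action
};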
And now for the actual question related to the title of this question:
Did I miss something? Am I thinking about something horribly wrong? Is this approach feasible at all, or is there some generally accepted way to do it that has existed since the times of Windows 3.1 and I just can't find where this information is located?
How do I organize a UI hierarchy and inform elements of ALL events happening CORRECTLY?