One thing that I’ve noticed about game development interfaces is that we have two distinct types: text and GUI. I’ve seen interfaces built with both, and they work quite well. Why do we not build more systems with blended interfaces?
I was watching a talk by Lilli Thompson on creating a WebGL game editing platform. The talk itself is worth watching, but that’s not my point. While watching it, I got the distinct feeling of designers and artists on my shoulder telling me the interface is too programmer-centric.
The main concern I have is the editing of the shader code near the bottom.
The concern is that as a designer or artist you will tend to have a Wacom stylus in your dominant hand and your secondary hand centered on the keyboard: two-handed editing. With this stance, switching to the keyboard is an unwanted break in flow, forcing you to drop your stylus and move both hands over the keyboard.
It may be possible to teach yourself to touch type with one hand, and many people do, but there are other ways to improve on this problem.
What we could do is run a semantic analysis of the fragment shader currently being shown and provide quick editing of its variables with widgets. Here is my cheap mock-up.
Adding these simple widgets when we find features that could be easily edited with a stylus is not such a crazy idea. We have similar tools in Visual Studio and other IDEs; IDE widgets are just oriented toward pulling items from a list rather than updating scalar values within a range.
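To make the idea concrete, here is a minimal sketch of what the analysis step might look like. It does a cheap textual scan of a GLSL fragment shader for `uniform float` declarations (a real editor would use a proper parser) and emits slider descriptors a GUI layer could render for stylus editing. The `// range` comment convention and all the names here are my own assumptions, not anything from the talk.

```typescript
// Hypothetical sketch: scan GLSL source for scalar uniforms and
// describe a slider widget for each one.
interface SliderSpec {
  name: string;
  min: number;
  max: number;
  value: number;
}

// Matches e.g. `uniform float glowStrength; // range 0.0 2.0`
// (the range comment is an assumed convention; without it we
// fall back to a default 0..1 range).
const UNIFORM_RE =
  /uniform\s+float\s+(\w+)\s*;(?:\s*\/\/\s*range\s+([\d.]+)\s+([\d.]+))?/g;

function extractSliders(glsl: string): SliderSpec[] {
  const specs: SliderSpec[] = [];
  for (const m of glsl.matchAll(UNIFORM_RE)) {
    const min = m[2] !== undefined ? parseFloat(m[2]) : 0.0;
    const max = m[3] !== undefined ? parseFloat(m[3]) : 1.0;
    specs.push({ name: m[1], min, max, value: (min + max) / 2 });
  }
  return specs;
}

const shader = `
uniform float glowStrength; // range 0.0 2.0
uniform float opacity;
void main() { gl_FragColor = vec4(opacity); }
`;

console.log(extractSliders(shader));
```

From here, each `SliderSpec` could be bound to a draggable widget in the margin of the code view, writing the new value back into the shader source (or straight to the GPU via `uniform1f`) without the artist ever touching the keyboard.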