If the content size changes (due to rotation for example) while the
window is maximized or fullscreen, the resize must be applied once
fullscreen and maximized are disabled.
The previous strategy was to store the windowed size, compute the
target size on rotation, and apply it on window restoration. But
tracking the windowed size (while ignoring the non-windowed size) was
tricky, due to the unspecified order of SDL events (e.g. size changes
may be notified before "maximized" events), race conditions when
reading window flags, and different behaviors across platforms.
To simplify the whole resize management, store the old content size (the
frame size, possibly rotated) when it changes while the window is
maximized or fullscreen, so that the new optimal size can be computed on
window restoration.
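A minimal sketch of that strategy, assuming hypothetical struct and
function names (the real fields in screen.c may differ):

    #include <stdbool.h>
    #include <SDL2/SDL.h>

    struct size {
        int width;
        int height;
    };

    struct screen {
        SDL_Window *window;
        struct size content_size;      // current frame size (possibly rotated)
        struct size old_content_size;  // content size before the change, or 0
    };

    // Called when the content size changes (e.g. on device rotation).
    static void
    screen_on_content_changed(struct screen *screen, struct size new_size) {
        uint32_t flags = SDL_GetWindowFlags(screen->window);
        bool windowed =
            !(flags & (SDL_WINDOW_FULLSCREEN | SDL_WINDOW_MAXIMIZED));
        if (windowed) {
            // safe to resize immediately
            SDL_SetWindowSize(screen->window, new_size.width, new_size.height);
        } else if (!screen->old_content_size.width) {
            // store the old content size, so that the optimal size can be
            // computed on window restoration
            screen->old_content_size = screen->content_size;
        }
        screen->content_size = new_size;
    }

    // Called when fullscreen/maximized is disabled
    // (SDL_WINDOWEVENT_RESTORED).
    static void
    screen_on_restored(struct screen *screen) {
        if (screen->old_content_size.width) {
            int w, h;
            SDL_GetWindowSize(screen->window, &w, &h);
            // simplified: keep the restored width and adapt the height to
            // the new content aspect ratio (the real computation may be
            // smarter and also use old_content_size)
            h = w * screen->content_size.height / screen->content_size.width;
            SDL_SetWindowSize(screen->window, w, h);
            screen->old_content_size = (struct size) {0, 0};
        }
    }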
The window dimensions are integers, so resizing to fit the content may
not be exact.
When computing the optimal size, this rounding could alternately
reduce the width and the height by a few pixels, making the "optimal
size" unstable.
To avoid this problem, check whether the current size is already
optimal, either by keeping the width or by keeping the height.
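For example, such a check could look like this (a sketch; the struct
name is an assumption):

    #include <stdbool.h>
    #include <stdint.h>

    struct size {
        uint16_t width;
        uint16_t height;
    };

    // The size is already optimal if one dimension can be recomputed from
    // the other using the content aspect ratio, so that a resize (which
    // could oscillate by a few pixels due to integer rounding) is not
    // necessary.
    static bool
    is_optimal_size(struct size current, struct size content) {
        return current.height
                   == (uint32_t) current.width * content.height / content.width
            || current.width
                   == (uint32_t) current.height * content.width / content.height;
    }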
Move the window-to-frame coordinates conversion from the input manager
to the screen.
This will allow applying more screen-related transformations without
impacting the input manager.
Add Ctrl+Left and Ctrl+Right shortcuts to rotate the display (the
content of the scrcpy window).
Unlike --lock-video-orientation, this rotation has no impact on
recording, and can be changed dynamically (and immediately).
Fixes #218 <https://github.com/Genymobile/scrcpy/issues/218>
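In the SDL event loop, the shortcuts could be handled roughly as below
(a sketch; the rotation helpers are hypothetical):

    #include <stdbool.h>
    #include <SDL2/SDL.h>

    // Hypothetical helpers rotating the displayed content by 90 degrees.
    void screen_rotate_client_left(void);
    void screen_rotate_client_right(void);

    // Handle Ctrl+Left and Ctrl+Right in the SDL keyboard event handler.
    static void
    handle_key_down(const SDL_KeyboardEvent *event) {
        bool ctrl = event->keysym.mod & KMOD_CTRL;
        if (!ctrl) {
            return;
        }
        switch (event->keysym.sym) {
            case SDLK_LEFT:
                screen_rotate_client_left();
                break;
            case SDLK_RIGHT:
                screen_rotate_client_right();
                break;
        }
    }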
Now, get_window_size() returns the current window size (fullscreen or
not), while get_windowed_window_size() always returns the windowed
size (the size when fullscreen is disabled).
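A hedged sketch of the distinction, assuming the screen saves its
windowed size in a hypothetical field when entering fullscreen:

    #include <stdbool.h>
    #include <stdint.h>
    #include <SDL2/SDL.h>

    struct size {
        uint16_t width;
        uint16_t height;
    };

    struct screen {
        SDL_Window *window;
        bool fullscreen;
        struct size windowed_window_size;  // saved when entering fullscreen
    };

    // Return the current window size, fullscreen or not.
    static struct size
    get_window_size(const struct screen *screen) {
        int width;
        int height;
        SDL_GetWindowSize(screen->window, &width, &height);
        struct size size = {width, height};
        return size;
    }

    // Return the size the window has (or will have) once fullscreen is
    // disabled.
    static struct size
    get_windowed_window_size(const struct screen *screen) {
        if (screen->fullscreen) {
            return screen->windowed_window_size;
        }
        return get_window_size(screen);
    }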
The SDL video subsystem is not necessary if we don't display the video.
Move the sdl_init_and_configure() function from screen.c to scrcpy.c,
because it is not only related to the screen display.
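For illustration, the initialization might only request the video
subsystem when a display is actually wanted (a sketch; the display
parameter is an assumption):

    #include <stdbool.h>
    #include <stdlib.h>
    #include <SDL2/SDL.h>

    // Initialize and configure SDL; the video subsystem is only requested
    // if the frames are actually displayed.
    static bool
    sdl_init_and_configure(bool display) {
        uint32_t flags = display ? SDL_INIT_VIDEO : SDL_INIT_EVENTS;
        if (SDL_Init(flags)) {
            SDL_LogCritical(SDL_LOG_CATEGORY_SYSTEM,
                            "Could not initialize SDL: %s", SDL_GetError());
            return false;
        }
        atexit(SDL_Quit);
        return true;
    }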
Limit source code to 80 characters, and declare function return types
and modifiers on a separate line.
This avoids very long lines, and all function names are aligned.
(We do this on VLC, and I like it.)
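For example, with this convention a function looks like:

    // Return type and modifiers on a separate line: all function names
    // start in the first column, and lines easily stay under 80
    // characters.
    static const char *
    get_device_name(void) {
        return "device";  // illustrative body only
    }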
It is very convenient when I play mobile games and watch videos at the
same time.
Tested on Linux Mint Cinnamon as well as Windows 10.
Signed-off-by: Yu-Chen Lin <npes87184@gmail.com>
SDL_CreateTexture() is called both during initialization and on frame
size change.
To avoid inconsistencies between the argument values at the two call
sites, factorize them into a single function, create_texture().
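A sketch of the factorized helper (the pixel format shown is the usual
one for YUV video frames, but consider it an assumption here):

    #include <SDL2/SDL.h>

    // Create the streaming YUV texture for the given frame size; called
    // both at initialization and whenever the frame size changes, so the
    // arguments cannot diverge between the two call sites.
    static SDL_Texture *
    create_texture(SDL_Renderer *renderer, int width, int height) {
        return SDL_CreateTexture(renderer, SDL_PIXELFORMAT_YV12,
                                 SDL_TEXTUREACCESS_STREAMING,
                                 width, height);
    }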
High DPI support is enabled by default, so that the renderer uses the
full resolution of High DPI screens.
However, there are still mouse coordinate problems on some macOS
machines with High DPI support (but not all), so expose a way to
disable it.
Use high DPI if available.
Note that on Mac OS X, setting this flag is not sufficient:
> On Apple's OS X you must set the NSHighResolutionCapable Info.plist
> property to YES, otherwise you will not receive a High DPI OpenGL
> display.
<https://wiki.libsdl.org/SDL_CreateWindow#flags>
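The flag itself could be requested conditionally, along these lines (a
sketch; the allow_highdpi option is hypothetical):

    #include <stdbool.h>
    #include <SDL2/SDL.h>

    // Request High DPI support unless the user explicitly disabled it
    // (the allow_highdpi option is hypothetical).
    static SDL_Window *
    create_window(const char *title, int width, int height,
                  bool allow_highdpi) {
        uint32_t flags = SDL_WINDOW_HIDDEN | SDL_WINDOW_RESIZABLE;
        if (allow_highdpi) {
            flags |= SDL_WINDOW_ALLOW_HIGHDPI;
        }
        return SDL_CreateWindow(title,
                                SDL_WINDOWPOS_UNDEFINED,
                                SDL_WINDOWPOS_UNDEFINED,
                                width, height, flags);
    }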
screen_render() should not be called on initialization:
1. it is useless, since the window is hidden until the first frame;
2. it writes an empty texture (probably green) to the renderer.
The SDL cleanup does not crash anymore on exit, probably since the
memory corruption caused by calling SDLNet_TCP_Close() too early has
been resolved.
On startup, the client has to:
1. listen on a port
2. push and start the server to the device
3. wait for the server to connect (accept)
4. read device name and size
5. initialize SDL
6. initialize the window and renderer
7. show the window
From the execution of the app_process command to start the server on the
device, to the execution of the java main method, it takes ~800ms. As a
consequence, step 3 also takes ~800ms on the client.
Once the connection is accepted, the client initializes SDL, which
takes ~500ms.
These two expensive actions are executed sequentially:
HOST DEVICE
listen on port | |
push/start the server |----------------->|| app_process loads the jar
accept the connection . ^ ||
. | ||
. | WASTE ||
. | OF ||
. | TIME ||
. | ||
. | ||
. v X execution of our java main
connection accepted |<-----------------| connect to the host
init SDL || |
|| ,----------------| send frames
|| |,---------------|
|| ||,--------------|
|| |||,-------------|
|| ||||,------------|
init window/renderer | |||||,-----------|
display frames |<++++++-----------|
(many frames skipped)
The rationale for step 3 occurring before step 5 is that initializing
SDL replaces the SIGTERM handler so that the signal is received in the
event loop; if SDL were initialized first, pressing Ctrl+C during the
blocking accept would not work (since the accept blocks outside the
event loop).
But this is not so important; let's parallelize the SDL initialization
with the app_process execution (we'll just add a timeout to the
connection):
HOST DEVICE
listen on port | |
push/start the server |----------------->||app_process loads the jar
init SDL || ||
|| ||
|| ||
|| ||
|| ||
|| ||
accept the connection . ||
. X execution of our java main
connection accepted |<-----------------| connect to the host
init window/renderer | |
display frames |<-----------------| send frames
|<-----------------|
In addition, show the window only once the first frame is available to
avoid flickering (opening a black window for 100~200ms).
Note: the window and renderer are initialized after the connection is
accepted because they use the device information received from the
device.
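In practice, the new startup order could be sketched as follows (the
helper names are hypothetical, and the timeout value is arbitrary):

    #include <stdbool.h>
    #include <stdint.h>

    // Hypothetical helpers wrapping the real server and SDL code.
    bool server_start(const char *serial, uint16_t local_port);
    bool server_accept_connection(uint32_t timeout_ms);
    bool sdl_init_and_configure(bool display);

    static bool
    startup(const char *serial, uint16_t local_port) {
        // listen on a port, then push and start the server on the device
        if (!server_start(serial, local_port)) {
            return false;
        }
        // initialize SDL (~500ms) while app_process loads the jar on the
        // device (~800ms), in parallel
        if (!sdl_init_and_configure(true)) {
            return false;
        }
        // only now block until the server connects back; use a timeout,
        // since SDL has replaced the SIGTERM handler and Ctrl+C can no
        // longer interrupt a blocking accept
        return server_accept_connection(2000);
    }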
Replace screen_update() by a higher-level screen_update_frame()
handling the whole frame update, so that scrcpy.c just calls it
without managing implementation details.
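Roughly, with hypothetical types and helpers:

    #include <stdbool.h>

    struct screen;        // the display state, defined in screen.h
    struct video_buffer;  // holds the decoded frames

    // Lower-level helpers (implementation details of the real screen.c).
    bool screen_update(struct screen *screen, struct video_buffer *vb);
    void screen_render(struct screen *screen);

    // High-level entry point: scrcpy.c just calls this for each new frame,
    // without knowing anything about textures or rendering.
    bool
    screen_update_frame(struct screen *screen, struct video_buffer *vb) {
        if (!screen_update(screen, vb)) {
            return false;
        }
        screen_render(screen);
        return true;
    }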
The video screen size on the client may differ from the real device
screen size (e.g. the video stream may be scaled down). As a
consequence, mouse events must be scaled to match the real device
coordinates.
For this purpose, make the client send the video screen size along
with the absolute pointer location, and have the server scale the
location to match the real device size before injecting mouse events.
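For example, the server-side scaling boils down to something like this
(written in C for consistency with the other sketches, although the
server is implemented in Java; all names are illustrative):

    #include <stdint.h>

    struct size {
        uint16_t width;
        uint16_t height;
    };

    struct point {
        int32_t x;
        int32_t y;
    };

    // Scale a pointer position from the video screen coordinates (as sent
    // by the client) to the real device screen coordinates, before
    // injecting the mouse event.
    static struct point
    scale_position(struct point position, struct size screen,
                   struct size device) {
        struct point result = {
            .x = (int64_t) position.x * device.width / screen.width,
            .y = (int64_t) position.y * device.height / screen.height,
        };
        return result;
    }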
To control the device from the computer:
- retrieve mouse and keyboard SDL events;
- convert them to Android events;
- serialize them;
- send them on the same socket used by the video stream (but in the
opposite direction);
- deserialize the events on the Android side;
- inject them using the InputManager.
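A minimal sketch of the client-side serialization of one event type;
the exact binary layout here is an assumption, not the real protocol:

    #include <stddef.h>
    #include <stdint.h>

    // Hypothetical wire format for a mouse event; the real protocol may
    // differ.
    enum control_event_type {
        CONTROL_EVENT_TYPE_KEYCODE,
        CONTROL_EVENT_TYPE_TEXT,
        CONTROL_EVENT_TYPE_MOUSE,
        CONTROL_EVENT_TYPE_SCROLL,
    };

    struct mouse_event {
        uint8_t action;         // AMOTION_EVENT_ACTION_*
        uint32_t buttons;       // AMOTION_EVENT_BUTTON_*
        uint16_t x;
        uint16_t y;
        uint16_t screen_width;  // video screen size, for server-side scaling
        uint16_t screen_height;
    };

    static void
    write16(uint8_t *buf, uint16_t value) {
        buf[0] = value >> 8;
        buf[1] = value;
    }

    static void
    write32(uint8_t *buf, uint32_t value) {
        buf[0] = value >> 24;
        buf[1] = value >> 16;
        buf[2] = value >> 8;
        buf[3] = value;
    }

    // Serialize the event in big-endian (as read by a Java DataInputStream
    // on the device); return the number of bytes written.
    static size_t
    mouse_event_serialize(const struct mouse_event *event, uint8_t *buf) {
        buf[0] = CONTROL_EVENT_TYPE_MOUSE;
        buf[1] = event->action;
        write32(&buf[2], event->buttons);
        write16(&buf[6], event->x);
        write16(&buf[8], event->y);
        write16(&buf[10], event->screen_width);
        write16(&buf[12], event->screen_height);
        return 14;
    }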