Saturday, 30 December 2023

How to convert absolute touch input to middle mouse button click and drags?

I bought StaffPad, but unfortunately I don't have a Microsoft device to write on, so I can't use the software to its full potential, and writing with a mouse on a PC isn't a comfortable experience. I tried using spacedesk on my phone so I could write with my capacitive stylus, but it didn't work: when I tried writing, the software interpreted it as a drag input. However, I noticed that I can write in the software using my mouse's scroll wheel (middle) button. So I'm trying to figure out a way to convert spacedesk's absolute touch input into middle mouse button (scroll wheel) clicks/drags so I can write in StaffPad.

I tried approaching it this way:

# touch_to_middle_click_and_drag.py

import pyautogui
from pynput import mouse

# Variables to store the previous touch position
prev_x, prev_y = None, None

# Flag to track whether the middle mouse button is currently pressed
middle_button_pressed = False

def on_touch(x, y):
    global prev_x, prev_y

    if middle_button_pressed and prev_x is not None:
        # Calculate the movement since the previous position
        dx, dy = x - prev_x, y - prev_y
        pyautogui.moveRel(dx, dy)

    # Update the previous position
    prev_x, prev_y = x, y

def on_click(x, y, button, pressed):
    global middle_button_pressed

    if button == mouse.Button.middle:
        if pressed:
            # Simulate a middle mouse button press
            middle_button_pressed = True
            pyautogui.mouseDown(button='middle')
        else:
            # Simulate a middle mouse button release
            middle_button_pressed = False
            pyautogui.mouseUp(button='middle')

# Start listening for move and click events
with mouse.Listener(on_move=on_touch, on_click=on_click) as listener:
    listener.join()

I expected it to work as desired, i.e. take the absolute touch input and convert it to scroll wheel button clicks and drags, thus enabling me to write in StaffPad. But it still registers as a drag input when I try writing on my phone with spacedesk.
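To make sure the conversion logic itself is right, I also sketched it on its own, with no input libraries involved (the class and event names here are my own invention, just for illustration): a touch-down maps to a middle-button press, subsequent absolute positions map to relative deltas, and a touch-up maps to a release.

```python
class TouchToMiddleDrag:
    """Convert absolute touch positions into abstract middle-button
    press/move/release events. Pure logic only -- feed it positions,
    get events back; wiring to pyautogui would happen elsewhere."""

    def __init__(self):
        self.prev = None  # last absolute position while the touch is down

    def touch_down(self, x, y):
        # Contact begins: remember the position and press the middle button.
        self.prev = (x, y)
        return ("middle_down",)

    def touch_move(self, x, y):
        if self.prev is None:
            return None  # ignore moves when no touch is active
        # Convert the new absolute position into a relative delta.
        dx, dy = x - self.prev[0], y - self.prev[1]
        self.prev = (x, y)
        return ("move_rel", dx, dy)

    def touch_up(self):
        # Contact ends: release the middle button.
        self.prev = None
        return ("middle_up",)


conv = TouchToMiddleDrag()
print(conv.touch_down(100, 100))  # ('middle_down',)
print(conv.touch_move(103, 101))  # ('move_rel', 3, 1)
print(conv.touch_up())            # ('middle_up',)
```

This part behaves as I expect, so the problem seems to be in how spacedesk's touch input reaches the listener, not in the delta arithmetic.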



from How to convert absolute touch input to middle mouse button click and drags?

Friday, 29 December 2023

MLflow artifacts stored on FTP server but not showing in UI

I use MLflow to store some parameters and metrics during training on a remote tracking server. Now I also want to add a .png file as an artifact, but since the MLflow server runs remotely, I store the file on an FTP server. I passed the FTP server address and path to MLflow via:

mlflow server --backend-store-uri sqlite:///mlflow.sqlite --default-artifact-root ftp://user:password@1.2.3.4/artifacts/ --host 0.0.0.0 &

Now I train a network and store the artifact by running:

mlflow.set_tracking_uri(remote_server_uri)
mlflow.set_experiment("default")
mlflow.pytorch.autolog()

with mlflow.start_run():
    mlflow.log_params(flow_params)
    trainer.fit(model)
    trainer.test()
    mlflow.log_artifact("confusion_matrix.png")
mlflow.end_run()

I save the .png file locally and then log it with mlflow.log_artifact("confusion_matrix.png"), which uploads it to the FTP server into the right folder for the experiment. Everything works so far, except that the artifact does not show up in the MLflow UI. The logged parameters and metrics show up normally. The artifact panel stays empty and only shows

No Artifacts Recorded
Use the log artifact APIs to store file outputs from MLflow runs.

I found similar threads, but only from users having the same problem with local MLflow storage. Unfortunately, I could not apply those fixes to my problem. Does anybody have an idea how to fix this?
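One thing I tried to rule out myself (just an assumption, the credentials below are made up): special characters like @ or : in the FTP password can break the artifact URI that the server hands out to clients. The stdlib urllib.parse makes it easy to check how the URI actually gets split:

```python
from urllib.parse import urlparse, quote

# Hypothetical credentials for illustration -- replace with your own.
user, password, host = "user", "p@ss:word", "1.2.3.4"

# Percent-encode the userinfo so '@' and ':' inside the password
# can't be mistaken for URI delimiters.
artifact_root = f"ftp://{quote(user)}:{quote(password, safe='')}@{host}/artifacts/"

parsed = urlparse(artifact_root)
print(parsed.hostname)  # the FTP host, not a fragment of the password
print(parsed.username)
print(parsed.path)
```

In my case the URI parses cleanly, so I don't think that is the cause here, but it seemed worth mentioning.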



from MLFLOW Artifacts stored on ftp server but not showing in ui

What are the advantages of using Depends in FastAPI over just calling a dependent function/class?

FastAPI provides a way to manage dependencies, like a DB connection, via its own dependency resolution mechanism.

It resembles the pytest fixture system. In a nutshell, you declare what you need in a function signature, and FastAPI will call the functions (or classes) you mentioned and inject the correct results when the handler is called.

Yes, it does caching (during a single handler run), but can't we achieve the same thing using just the @lru_cache decorator and simply calling those dependencies on each run? Am I missing something?
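For context, here is the difference in caching scope as I understand it, in a stdlib-only sketch with no FastAPI involved: @lru_cache caches for the lifetime of the process, so every handler call shares one result, whereas the Depends cache is reset for each request.

```python
from functools import lru_cache

@lru_cache
def get_db_session():
    # Imagine this opens a fresh DB session; here it's just a new object.
    return object()

# Two separate "requests" still share one cached session:
first = get_db_session()
second = get_db_session()
print(first is second)  # True -- lru_cache is process-wide

# FastAPI's Depends cache, by contrast, is scoped to a single request,
# so a new request gets a fresh dependency. A crude equivalent would be
# clearing the cache between requests:
get_db_session.cache_clear()
third = get_db_session()
print(first is third)  # False -- new "request", new session
```

So plain @lru_cache would leak state across requests unless you manually manage its lifetime, which is part of what I'm trying to weigh against just using Depends.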



from What are the advantages of using Depends in FastAPI over just calling a dependent function/class?