Saturday 30 December 2023

How to convert absolute touch input to middle mouse button click and drags?

I bought StaffPad, but unfortunately I don't have a Microsoft device to write on, so I can't get the full benefit of the software; writing with a mouse on a PC isn't a comfortable experience. I tried using spacedesk on my phone so I could write with my capacitive stylus, but it didn't work: when I tried writing, the software interpreted it as a drag input. However, I noticed that I can write in the software using my mouse's scroll-wheel (middle) button. So I'm trying to figure out a way to convert spacedesk's absolute touch input into middle mouse button (scroll wheel) clicks/drags so I can write in StaffPad.

I tried the following approach:

# touch_to_middle_click_and_drag.py

import pyautogui
from pynput import mouse

# Variables to store the previous touch position
prev_x, prev_y = None, None

# Flag to track whether the middle mouse button is currently pressed
middle_button_pressed = False

def on_touch(x, y):
    global prev_x, prev_y

    if middle_button_pressed and prev_x is not None:
        # Calculate the movement since the previous position
        dx, dy = x - prev_x, y - prev_y
        pyautogui.moveRel(dx, dy)

    # Update the previous position
    prev_x, prev_y = x, y

def on_click(x, y, button, pressed):
    # pynput delivers both presses and releases through the single
    # on_click callback, so one handler covers both cases.
    global middle_button_pressed

    if button == mouse.Button.middle:
        middle_button_pressed = pressed
        if pressed:
            # Simulate a middle mouse button press
            pyautogui.mouseDown(button='middle')
        else:
            # Simulate a middle mouse button release
            pyautogui.mouseUp(button='middle')

# Start listening for mouse events
with mouse.Listener(on_move=on_touch, on_click=on_click) as listener:
    listener.join()

I expected it to work as desired, i.e. take the absolute touch input and convert it into scroll-wheel button clicks/drags, enabling me to write in StaffPad. But the input is still treated as a drag when I try writing on my phone with spacedesk.
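Whatever library ends up injecting the events, the absolute-to-relative bookkeeping the script depends on can be isolated into a small, GUI-free helper and tested on its own (the class name here is mine, not from the question):

```python
# Converts absolute (x, y) samples into relative deltas -- the same
# bookkeeping the on_touch handler above does with prev_x / prev_y.
class AbsToRel:
    def __init__(self):
        self.prev = None  # no previous sample yet

    def feed(self, x, y):
        if self.prev is None:
            self.prev = (x, y)
            return (0, 0)  # first sample: no movement to report
        dx, dy = x - self.prev[0], y - self.prev[1]
        self.prev = (x, y)
        return (dx, dy)
```

Keeping this logic separate from the event-injection code makes it easy to verify that the deltas are right before debugging how StaffPad interprets the synthesized events.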



from How to convert absolute touch input to middle mouse button click and drags?

Friday 29 December 2023

MLFLOW Artifacts stored on ftp server but not showing in ui

I use MLflow to store some parameters and metrics during training on a remote tracking server. Now I am also trying to add a .png file as an artifact, but since the MLflow server runs remotely, I store the file on an FTP server. I gave the FTP server address and path to MLflow with:

mlflow server --backend-store-uri sqlite:///mlflow.sqlite --default-artifact-root ftp://user:password@1.2.3.4/artifacts/ --host 0.0.0.0 &

Now I train a network and store the artifact by running:

mlflow.set_tracking_uri(remote_server_uri)
mlflow.set_experiment("default")
mlflow.pytorch.autolog()

with mlflow.start_run():
    mlflow.log_params(flow_params)
    trainer.fit(model)
    trainer.test()
    mlflow.log_artifact("confusion_matrix.png")
mlflow.end_run()

I save the .png file locally and then log it with mlflow.log_artifact("confusion_matrix.png") to the FTP server, in the folder corresponding to the experiment. Everything works so far, except that the artifact does not show up in the MLflow UI. The logged parameters and metrics show up normally. The artifact panel stays empty and only shows:

No Artifacts Recorded
Use the log artifact APIs to store file outputs from MLflow runs.

I found similar threads, but only from users having the same problem with local MLflow storage. Unfortunately, I could not apply those fixes to my problem. Does anybody have an idea how to fix this?
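One way to narrow the problem down is to confirm the file actually landed on the FTP server, independently of MLflow. A stdlib-only sketch (the function name is mine, and the host, credentials and path shown in the comment are placeholders, not values from the question):

```python
from ftplib import FTP


def artifact_on_server(host, user, password, remote_path):
    """Return True if the file at remote_path exists on the FTP server."""
    directory, _, name = remote_path.rpartition("/")
    with FTP(host) as ftp:
        ftp.login(user, password)
        # nlst lists the directory; the artifact should appear in it
        return name in ftp.nlst(directory or ".")


# Example (placeholder values):
# artifact_on_server("1.2.3.4", "user", "password",
#                    "artifacts/<run_id>/confusion_matrix.png")
```

If the file is there, the next thing to check is whether the process serving the MLflow UI can itself reach that FTP location, since the artifact URI recorded for the run is what the UI resolves when rendering the artifact panel.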



from MLFLOW Artifacts stored on ftp server but not showing in ui

What are the advantages of using Depends in FastAPI over just calling a dependent function/class?

FastAPI provides a way to manage dependencies, like DB connection, via its own dependency resolution mechanism.

It resembles the pytest fixture system. In a nutshell, you declare what you need in a function signature, and FastAPI will call the functions (or classes) you mentioned and inject the correct results when the handler is called.

Yes, it does caching (during a single handler run), but couldn't we achieve the same thing using just the @lru_cache decorator and simply calling those dependencies on each run? Am I missing something?
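A stdlib-only sketch (all names here are mine, not FastAPI's) of one concrete difference: @lru_cache memoizes for the lifetime of the process, whereas Depends re-evaluates the dependency for every new request and only caches within a single request:

```python
from functools import lru_cache

calls = []


@lru_cache(maxsize=None)
def get_db():
    calls.append("opened")  # stands in for opening a DB session
    return "session"


def handle_request():
    # Two places in the same "request" both ask for the dependency.
    return get_db(), get_db()


handle_request()  # request 1
handle_request()  # request 2: lru_cache hands back the first session again
print(len(calls))  # the dependency ran only once for the whole process
```

With Depends, each request gets a fresh evaluation (so per-request state such as a DB session can be yielded and cleaned up per request), and FastAPI additionally lets you swap dependencies out in tests via app.dependency_overrides, which a plain decorated function call cannot do.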



from What are the advantages of using Depends in FastAPI over just calling a dependent function/class?

Thursday 28 December 2023

What's the correct way to use user local python environment under PEP668?

I have tried to install some Python packages on Ubuntu 24.04, but found I cannot do it the way I did on 22.04.

PEP 668 says this is to avoid package conflicts between system-wide and user-installed packages.

example:

$ pip install setuptools --user
error: externally-managed-environment

× This environment is externally managed
╰─> To install Python packages system-wide, try apt install
    python3-xyz, where xyz is the package you are trying to
    install.
    
    If you wish to install a non-Debian-packaged Python package,
    create a virtual environment using python3 -m venv path/to/venv.
    Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make
    sure you have python3-full installed.
    
    If you wish to install a non-Debian packaged Python application,
    it may be easiest to use pipx install xyz, which will manage a
    virtual environment for you. Make sure you have pipx installed.
    
    See /usr/share/doc/python3.11/README.venv for more information.

note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
hint: See PEP 668 for the detailed specification.

But if I do that with pipx:

$ pipx install setuptools 

No apps associated with package pip or its dependencies. If you are attempting to install a library, pipx should not be used. Consider using pip or a similar tool instead.

I am really confused by the current rules and cannot install any package into my user-local environment.

How can I manage my user-local environment now? And how can I use the latest pip (not the Linux distro version) and other packages by default for the current user?

My Environment (docker):

FROM ubuntu:24.04

# add python
RUN apt update && apt install -y python3-pip python3-venv python-is-python3 pipx

USER ubuntu
WORKDIR /app

I know I can use an environment-management tool (such as pyenv) to do that, but is there any built-in method to bring my user-local environment back?
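As a sketch of the venv-based workflow the error message itself recommends (the paths below are illustrative; in practice you would pick a stable location such as ~/.venvs/dev rather than a temp directory):

```python
import tempfile
import venv
from pathlib import Path

# Create an isolated environment; its own pip is exempt from PEP 668,
# because the environment is no longer "externally managed".
target = Path(tempfile.mkdtemp()) / "venv"
venv.EnvBuilder(with_pip=True).create(target)

pip = target / "bin" / "pip"  # lives under "Scripts" on Windows
print(pip.exists())
```

Running that venv's bin/pip (or activating the environment) then installs packages into the venv instead of the system site-packages, which is the supported substitute for pip install --user under PEP 668.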



from What's the correct way to use user local python environment under PEP668?

Homebrew installed python on mac returns weird response including "line 1: //: is a directory" & "line 7: syntax error near unexpected token `('"

When I run python3 I get the following:

 % python3
/opt/homebrew/bin/python3: line 1: //: is a directory
/opt/homebrew/bin/python3: line 3: //: is a directory
/opt/homebrew/bin/python3: line 4: //: is a directory
/opt/homebrew/bin/python3: line 5: //: is a directory
/opt/homebrew/bin/python3: line 7: syntax error near unexpected token `('
/opt/homebrew/bin/python3: line 7: `����
                                        0� H__PAGEZERO�__TEXT@@__text__TEXT;�__stubs__TEX>�
           __cstring__TEXT�>��>__unwind_info__TEXT�?X�?�__DATA_CONST@@@@__got__DATA_CONST@�@�__DATA�@__bss__DATA�H__LINKEDIT����M4���3���0���0
                                                              PP�% 
                                                                   /usr/lib/dyldY*(�;��g�g�2 
     x

      /opt/homebrew/Cellar/python@3.11/3.11.6_1/Frameworks/Python.framework/Versions/3.11/Python
                8d'/usr/lib/libSystem.B.dylib&��)�� ��H����_��W��O��{������ �'X�C���`5�� �
          @������������������������T��J�_8� �_�qA��T�   ��*@8_���i � @�� ��<���R�� " ��3����5�! �����R�����0 Ձ` ը���` ��p ��R�R�� �BT�^ յ �����@99� Ձ] ��������9�\ ����R�Ri� �Tc������� �0 աZ �"�R{�t��
                                               ��C�� @��C���������_���!0 � �RN�����O��{������W���=��45�� �r�����#���!�RN�1T�@���T��RI���)��35�{B��OA�����_��
                                                                            � X@�p �A�R"�R(� �R#���{����
��@���'�{�����   �@�R    � �R��{����a

Trying to check the version with python3 --version returns the same thing.

The error appears to happen randomly: it was not happening for a while (after installing python3.10 and then later python3.11 again), and then one day it randomly started happening again.


% od -tx1 /opt/homebrew/bin/python3 | head -n 5
0000000    2f  2f  20  2d  2d  2d  2d  2d  2d  2d  2d  2d  2d  2d  2d  2d
0000020    2d  2d  2d  2d  2d  2d  2d  2d  2d  2d  2d  2d  2d  2d  2d  2d
*
0000100    2d  2d  2d  2d  2d  2d  2d  2d  2d  2d  2d  2d  2d  2d  2d  0a
0000120    0a  2f  2f  20  50  4c  45  41  53  45  20  44  4f  20  4e  4f

% file /opt/homebrew/bin/python3
/opt/homebrew/bin/python3: data


from Homebrew installed python on mac returns weird response including "line 1: //: is a directory" & "line 7: syntax error near unexpected token `('"

Wednesday 27 December 2023

How to have the dialog for choosing download location appeared in the frontend, before the file gets downloaded, using FastAPI?

I have a GET endpoint that should return a huge file (500 MB). I am using FileResponse to do that (code is simplified for clarity):

async def get_file():
    headers = {"Content-Disposition": f"attachment; filename=(unknown)"}
    return FileResponse(file_path, headers=headers)

The problem is that, on the frontend, I have to wait until the file is completely downloaded before I am shown this dialog:

And then this file is saved instantly.

So for example, with a 500 MB file, when I click download in the UI I have to wait a minute or so until the save dialog is displayed; when I then click "Save", the file is saved instantly. Obviously the frontend was waiting for the file to be downloaded. What I need is for the frontend to show the save dialog instantly and for the download to then proceed in the background.

What I need is this: Dialog is shown instantly and then the user waits for the download to finish after he clicks 'Save'.

So how can I achieve that?
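For the underlying behaviour, here is a stdlib-only sketch (plain http.server, not FastAPI; the handler name and filename are illustrative): a browser can open its save dialog as soon as the response headers arrive, while the body streams afterwards.

```python
from http.server import BaseHTTPRequestHandler


class DownloadHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Disposition", 'attachment; filename="big.bin"')
        self.send_header("Content-Type", "application/octet-stream")
        self.end_headers()            # headers are flushed first -> dialog can appear
        for _ in range(5):            # then the body streams chunk by chunk
            self.wfile.write(b"\0" * 1024)
```

If the dialog only appears after the whole body has arrived, the usual suspects are the frontend fetching the file into a blob before triggering the download, or a proxy buffering the response, rather than the server-side FileResponse itself.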



from How to have the dialog for choosing download location appeared in the frontend, before the file gets downloaded, using FastAPI?

Tuesday 26 December 2023

Not Found for an API route

In my Next.js project I have main/app/api/worker-callback/route.ts file:

import { NextApiResponse } from "next";
import { NextResponse } from "next/server";

type ResponseData = {
    error?: string
};

export async function POST(req: Request, res: NextApiResponse<ResponseData>) {
    if (req.headers.get('Authorization') !== process.env.BACKEND_SECRET!) {
        res.status(403).json({ error: "Allowed only by backend" });
        // return Response.json({ error: "Allowed only by backend" }, { status: 403 });
    }
    return NextResponse.json({});
}

But when I query it, I get error 404. Why?

curl -d '' -v -o/dev/null -H "accept: application/json" http://localhost:3000/api/worker-callback
...
< HTTP/1.1 404 Not Found
...

Note that HTML pages in main/app work just fine. I build it by running cd main && next build.



from Not Found for an API route

Tuesday 19 December 2023

Shared Runtime gives Error in Office Word JS Add-in

When I add the Shared Runtime requirement to my manifest file, my add-in throws an error and stops working. I am using this code:

    <Requirements>
      <Sets DefaultMinVersion="1.1">
        <Set Name="SharedRuntime" MinVersion="1.1"/>
      </Sets>
    </Requirements>

and

    <Runtimes>
       <Runtime resid="contoso.taskpane.url" lifetime="long"></Runtime>
    </Runtimes>

Shared Runtime code image

I am facing an error like this:

Shared Runtime Error

When I remove the Requirements element from my manifest, my add-in works. The same code works on another system, so why is it not working on mine?



from Shared Runtime gives Error in Office Word JS Add-in

Friday 15 December 2023

<div> cannot appear as a child of <table> and <tbody> cannot appear as a child of <div>

Please take a look at the schematic structure of my table. I removed all unnecessary stuff for ease of reading. As you can see, there is a header and there is a body. Only the body is wrapped in ScrollArea (https://www.radix-ui.com/primitives/docs/components/scroll-area), since I want the user to be able to scroll only the body and not the entire table

<table>
 <thead>
  <tr>
    <th>Name</th>
    <th>Surname</th>
    <th>City</th>
  </tr>
 </thead>
 <ScrollArea.Root>
  <ScrollArea.Viewport>
   {data.map((person) => (
     <tbody>
      <tr>
       <td>{person.name}</td>
       <td>{person.surname}</td>
       <td>{person.city}</td>
      </tr>
     </tbody>
  ))}
  </ScrollArea.Viewport>
  <ScrollArea.Scrollbar/>
 </ScrollArea.Root>
</table>

And so, when I go to the page with this table, I receive two warnings in the console:

Warning: validateDOMNesting(...): <div> cannot appear as a child of <table>.

Warning: validateDOMNesting(...): <tbody> cannot appear as a child of <div>.

If I remove ScrollArea from the table, then the warnings disappear. But ScrollArea is very important to me for moving through long tables.

How can I get rid of these warnings?



from <div> cannot appear as a child of <table> and <tbody> cannot appear as a child of <div>

React Native Touch Through Flatlist

For "react-native": "^0.70.5"

Requirement:

  • Flatlist as an overlay above Clickable elements
  • Flatlist header has a transparent area, with pointerEvents="none" to make the elements below clickable and yet allow the Flatlist to scroll.

Issues with some possible approaches

  1. pointerEvents="none" doesn't work with Flatlist: because of how Flatlist is built internally, it blocks the events at all values of pointerEvents. The same goes for Scrollview.
  2. react-native-touch-through-view (the exact library I need) doesn't work with RN 0.70.2; the library is outdated. After fixing the build issues, touch events are still not propagating to the clickable elements.
  3. I created a custom component, ScrollableView, since pointerEvents works well with a plain View. Setting pointerEvents to none on parts of the children lets the touch event propagate to elements below.
  • This works well on Android, but fails on iOS.
  • Also, the scrolling of the view is not smooth.
  • It requires further handling for performance optimisation on long lists.
import React, { useState, useRef } from 'react';
import { View, PanResponder, Animated } from 'react-native';

const ScrollableView = ({children, style, onScroll}) => {
    const scrollY = useRef(new Animated.Value(0)).current;
    const lastScrollY = useRef(0);
    const scrollYClamped = Animated.diffClamp(scrollY, 0, 1000);

    const panResponder = useRef(
        PanResponder.create({
            onStartShouldSetPanResponder: () => true,
            onPanResponderMove: (_, gestureState) => {
                scrollY.setValue(lastScrollY.current + gestureState.dy);
            },
            onPanResponderRelease: (_, { vy, dy }) => {
                lastScrollY.current += dy;
                Animated.spring(scrollY, {
                    toValue: lastScrollY.current,
                    velocity: vy,
                    tension: 2,
                    friction: 8,
                    useNativeDriver: false,
                }).start();
            },

        })
    ).current;

    const combinedStyle = [
        {
            transform: [{ translateY: scrollYClamped }],
        },
        style
    ];

    return (
        <Animated.View
            {...panResponder.panHandlers}
            pointerEvents="box-none"
            style={combinedStyle}
        >
            {children}
        </Animated.View>
    );
};

export default ScrollableView;

Any solution to any of the above three approaches is appreciated.



from React Native Touch Through Flatlist

Thursday 14 December 2023

Trying to refresh data model using pyadomd but getting namespace cannot appear under Envelope/Body/Execute/Command

I am using pyadomd to connect to Azure Analysis Services and trying to refresh a data model. The connection is successful, but I get the following error: "namespace http://schemas.microsoft.com/analysisservices/2003/engine) cannot appear under Envelope/Body/Execute/Command". I am assuming I am structuring the XMLA command incorrectly? Perhaps the xmlns namespace has been deprecated or is no longer available? Any help is greatly appreciated, since there's not much documentation on this. Before I run the script, I run az login using the azure-cli package so I can authenticate locally. I am using Python 3.8.

Full code:

from sys import path
from azure.identity import DefaultAzureCredential

# Add the path to the ADOMD.NET library
path.append('\\Program Files\\Microsoft.NET\\ADOMD.NET\\150')

# Import the Pyadomd module
from pyadomd import Pyadomd

# Set database and data source information
database_name = 'database_name'
data_source_suffix = 'data_source_suffix'
resource_uri = "https://uksouth.asazure.windows.net"
model_name = 'model_name'

# Get the access token using Azure Identity
credential = DefaultAzureCredential()
token = credential.get_token(resource_uri)
access_token = token.token

# Construct the connection string for Azure Analysis Services
conn_str = f'Provider=MSOLAP;Data Source=asazure://uksouth.asazure.windows.net/{data_source_suffix};Catalog={database_name};User ID=;Password={access_token};'

try:
    # Establish the connection to Azure Analysis Services
    with Pyadomd(conn_str) as conn:
        print("Connection established successfully.")
        # Create a cursor object
        with conn.cursor() as cursor:
            # XMLA command to refresh the entire model
            refresh_command = f"""
            <Refresh xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
                <Object>
                    <DatabaseID>{database_name}</DatabaseID>
                    <CubeID>{model_name}</CubeID>
                </Object>
                <Type>Full</Type>
            </Refresh>
            """

            # Execute the XMLA refresh command
            cursor.execute(refresh_command)
            print("Data model refresh initiated.")

except Exception as e:
    print(f"An error occurred: {e}")

Full output:

Connection established successfully.
An error occurred: The Refresh element at line 8, column 87 (namespace http://schemas.microsoft.com/analysisservices/2003/engine) cannot appear under Envelope/Body/Execute/Command.

Technical Details:
RootActivityId: 9f82c29d-f7dc-4438-a6a3-90b5ccef9818
Date (UTC): 12/12/2023 2:15:07 PM
   at Microsoft.AnalysisServices.AdomdClient.AdomdConnection.XmlaClientProvider.Microsoft.AnalysisServices.AdomdClient.IExecuteProvider.ExecuteTabular(CommandBehavior behavior, ICommandContentProvider contentProvider, AdomdPropertyCollection commandProperties, IDataParameterCollection parameters)
   at Microsoft.AnalysisServices.AdomdClient.AdomdCommand.ExecuteReader(CommandBehavior behavior)

The first link has information about the namespace used, and the second link contains a list of the available namespaces.

https://learn.microsoft.com/en-us/analysis-services/multidimensional-models-scripting-language-assl-xmla/developing-with-xmla-in-analysis-services?view=asallproducts-allversions

https://learn.microsoft.com/en-us/openspecs/sql_server_protocols/ms-ssas/68a9475e-27d6-413a-9786-95bb19652b19

What I have tried

I used alternative namespaces such as http://schemas.microsoft.com/analysisservices/2022/engine/922/922, http://schemas.microsoft.com/analysisservices/2019/engine and http://schemas.microsoft.com/analysisservices/2012/engine.

I get the same error, so I assume it's not the namespace.

I also tried the SOAP envelope format, which didn't work either:

refresh_command = f"""
            <Envelope xmlns="http://schemas.xmlsoap.org/soap/envelope/">
                <Body>
                    <Execute xmlns="urn:schemas-microsoft-com:xml-analysis">
                        <Command>
                            <Refresh xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
                                <Type>Full</Type>
                                <Object>
                                    <DatabaseID>{database_name}</DatabaseID>
                                </Object>
                            </Refresh>
                        </Command>
                    </Execute>
                </Body>
            </Envelope>
            """


from Trying to refresh data model using pyadomd but getting namespace cannot appear under Envelope/Body/Execute/Command

PWA won't go fullscreen after deinstall/install

My PWA worked happily with manifest -> display: fullscreen for years. Yesterday I came across a recent Chrome/Android bug where my PWA would no longer auto-rotate between landscape and portrait. Having seen on SO that the fix required uninstalling and reinstalling my PWA, I went ahead.

Good news, is my PWA is once again responsive to orientation change.

Bad news: my PWA will not go back to fullscreen mode and maxes out at standalone :-(

I have bumped the version on my cache to hopefully reload all files, but still no joy.

Please help!

Yes, you have to grant geolocation access but all the code is available on github.

To reproduce: -

Please navigate to this URL and

  • If you don't have a Google Maps API key just click "No Maps!"
  • Wait a bit and you will be prompted to install the PWA.
  • Click install
  • Exit Android Chrome
  • Go to your phone "Apps" scroll across and you should see Brotkrumen with a ginger-bread house icon.
  • The app launches in standalone mode and not fullscreen.
  • Watch the wicked witch capture Hansel and Gretel

EDIT 1: Further info. If you open Brotkrumen in Chrome and then press the kebab menu, you'll see the option to "Open Brotkrumen". If you click that, it opens in fullscreen but with black space at the top (just like the image below).

ScreenShot example of PWA trip replay



from PWA won't go fullscreen after deinstall/install

Wednesday 13 December 2023

"Uncaught TypeError: Illegal invocation" while overriding EventSource

Playground link: https://www.w3schools.com/html/tryit.asp?filename=tryhtml5_sse

Problem: I am trying to override EventSource in such a way that there will be a console.log whenever onmessage triggers.

Code:

<!DOCTYPE html>
<html>
<body>

<h1>Getting server updates</h1>
<div id="result"></div>

<script>
if (typeof EventSource !== "undefined") {

  const original = EventSource;

  // part of chrome extension
  window.EventSource = function EventSource(url) {
    const ori = new original(url);

    Object.getPrototypeOf(ori).onmessage = function (event) {
      console.log(event); // log on every message
      ori.onmessage(event);
    };

    return ori;
  };
  
  
  
  // actual source code
  var source = new EventSource("demo_sse.php");
  source.onmessage = function (event) {
    document.getElementById("result").innerHTML += event.data + "<br>";
  };
} else {
  document.getElementById("result").innerHTML = "Sorry, your browser does not support server-sent events...";
}
</script>

</body>
</html>

But I am getting this error:

Error:

VM461:10 Uncaught TypeError: Illegal invocation
    at new EventSource (<anonymous>:10:42)
    at <anonymous>:21:16
    at submitTryit (tryit.asp?filename=tryhtml5_sse:853:17)
    at HTMLButtonElement.onclick (tryit.asp?filename=tryhtml5_sse:755:133)
EventSource @ VM461:10
(anonymous) @ VM461:21
submitTryit @ tryit.asp?filename=tryhtml5_sse:853
onclick @ tryit.asp?filename=tryhtml5_sse:755
uic.js?v=1.0.5:1 Uncaught ReferenceError: adngin is not defined
    at uic_r_p (uic.js?v=1.0.5:1:54492)
    at HTMLButtonElement.onclick (tryit.asp?filename=tryhtml5_sse:755:148)
uic_r_p @ uic.js?v=1.0.5:1
onclick @ tryit.asp?filename=tryhtml5_sse:755


from "Uncaught TypeError: Illegal invocation" while overriding EventSource

Monday 11 December 2023

How to convert AsyncIterable to asyncio Task

I am using Python 3.11.5 with the below code:

import asyncio
from collections.abc import AsyncIterable


async def iterable() -> AsyncIterable[int]:
    yield 1
    yield 2
    yield 3


# How can one get this async iterable to work with asyncio.gather?
asyncio.gather(iterable())

How can one get an AsyncIterable to work with asyncio tasks (e.g. for use with asyncio.gather)?
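asyncio.gather expects awaitables (coroutines, tasks, futures), and an async generator is not one, so it has to be drained by a coroutine first. A minimal sketch (the collect helper is mine, not part of asyncio):

```python
import asyncio
from collections.abc import AsyncIterable


async def iterable() -> AsyncIterable[int]:
    yield 1
    yield 2
    yield 3


async def collect(ait):
    # Consume the async iterable inside a coroutine, which *is* awaitable.
    return [item async for item in ait]


async def main():
    results = await asyncio.gather(collect(iterable()))
    print(results)  # [[1, 2, 3]]


asyncio.run(main())
```

The same wrapping works for asyncio.create_task: wrap the iteration in a coroutine such as collect and create a task from that, rather than from the async generator itself.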



from How to convert AsyncIterable to asyncio Task

Wednesday 6 December 2023

Conformal prediction intervals insample data nixtla

Looking through the nixtla documentation, I don't find any way to compute prediction intervals for in-sample predictions (training data), only for future predictions.

Below is an example of what I can achieve, but only for future predictions.

from statsforecast import StatsForecast
from statsforecast.models import SeasonalExponentialSmoothing, ADIDA, ARIMA
from statsforecast.utils import ConformalIntervals

# Create a list of models and instantiation parameters 
intervals = ConformalIntervals(h=24, n_windows=2)

models = [
    SeasonalExponentialSmoothing(season_length=24,alpha=0.1, prediction_intervals=intervals),
    ADIDA(prediction_intervals=intervals),
    ARIMA(order=(24,0,12), season_length=24, prediction_intervals=intervals),
]

sf = StatsForecast(
    df=train, 
    models=models, 
    freq='H', 
)

levels = [80, 90] # confidence levels of the prediction intervals 

forecasts = sf.forecast(h=24, level=levels)
forecasts = forecasts.reset_index()
forecasts.head()

So my goal would be to do something like:

forecasts = sf.forecast(df_x, level=levels)

so that we can have prediction intervals on the training set.
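For the underlying idea, here is a plain-Python illustration of absolute-residual conformal intervals (this is the concept only, not the statsforecast API; the function name and data are mine): the interval half-width is a quantile of the absolute in-sample residuals.

```python
def conformal_interval(y_true, y_fitted, level=80):
    # Half-width = roughly the level-th percentile of absolute residuals.
    residuals = sorted(abs(t - f) for t, f in zip(y_true, y_fitted))
    n = len(residuals)
    k = min(n - 1, max(0, int(round(level / 100 * (n + 1))) - 1))
    q = residuals[k]
    # Symmetric band around each fitted value.
    return [(f - q, f + q) for f in y_fitted]


bands = conformal_interval([10, 12, 11, 13], [9, 12, 12, 12], level=80)
```

Applying this to the fitted values on the training data gives in-sample bands, though note that calibrating on the same data the model was fitted to is optimistic compared with the held-out windows that ConformalIntervals uses.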



from Conformal prediction intervals insample data nixtla

Why does my Firefox browser take over a hundred times longer to upload large files compared to Google Chrome?

I have a server-side program on localhost, and I use a form to upload a 300 MB file. However, I've noticed a discrepancy between Google Chrome and Firefox: Chrome returns a successful result within a few seconds, while Firefox takes around 3 minutes. The code and network environment are identical. What could be causing this discrepancy?

Here is my code:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Upload</title>
</head>
<body>

<h2>Upload</h2>

<form action="http://127.0.0.1:9527/api/upgrade/upload" method="post" enctype="multipart/form-data">
  <label for="fileInput">chooseFile:</label>
  <input type="file" id="fileInput" name="files">

  <button type="submit">Upload File</button>
</form>

</body>
</html>

I have tried changing the server-side programming language (Java, Go) and upgrading versions, but they are already the latest.



from Why does my Firefox browser take over a hundred times longer to upload large files compared to Google Chrome?

Tensorflow JS learning data too big to fit in memory at once, how to learn?

I have the problem that my dataset has become too large to fit in memory at once in TensorFlow.js. What are good solutions for learning from all data entries? My data comes from a MongoDB instance and needs to be loaded asynchronously.

I tried to play with generator functions, but couldn't get async generators to work yet. I was also thinking that maybe fitting the model to the data in batches would be possible?

It would be great if someone could provide me with a minimal example on how to fit on data that is loaded asynchronously through either batches or a database cursor.

For example, when trying to return promises from the generator, I get a TypeScript error:

    const generate = function* () {
        yield new Promise(() => {});
    };

    tf.data.generator(generate);

Argument of type '() => Generator<Promise<unknown>, void, unknown>' is not assignable to parameter of type '() => Iterator<TensorContainer, any, undefined> | Promise<Iterator<TensorContainer, any, undefined>>'.


Using async generators doesn't work either; they result in a type error:

tf.data.generator(async function* () {})

throws Argument of type '() => AsyncGenerator<any, void, unknown>' is not assignable to parameter of type '() => Iterator<TensorContainer, any, undefined> | Promise<Iterator<TensorContainer, any, undefined>>'.



from Tensorflow JS learning data too big to fit in memory at once, how to learn?

How can I fix my perceptron to recognize numbers?

My exercise is to train 10 perceptrons to recognize numbers (0 - 9). Each perceptron should learn a single digit. As training data, I've created 30 images (5x7 bmp). 3 variants per digit.

I've got a perceptron class:

import numpy as np


def unit_step_func(x):
    return np.where(x > 0, 1, 0)


def sigmoid(x):
    return 1 / (1 + np.exp(-x))


class Perceptron:
    def __init__(self, learning_rate=0.01, n_iters=1000):
        self.lr = learning_rate
        self.n_iters = n_iters
        self.activation_func = unit_step_func
        self.weights = None
        self.bias = None
        #self.best_weights = None
        #self.best_bias = None
        #self.best_error = float('inf')

    def fit(self, X, y):
        n_samples, n_features = X.shape

        self.weights = np.zeros(n_features)
        self.bias = 0

        #self.best_weights = self.weights.copy()
        #self.best_bias = self.bias

        for _ in range(self.n_iters):
            for x_i, y_i in zip(X, y):
                linear_output = np.dot(x_i, self.weights) + self.bias
                y_predicted = self.activation_func(linear_output)

                update = self.lr * (y_i - y_predicted)
                self.weights += update * x_i
                self.bias += update

            #current_error = np.mean(np.abs(y - self.predict(X)))
            #if current_error < self.best_error:
            #    self.best_weights = self.weights.copy()
            #    self.best_bias = self.bias
            #    self.best_error = current_error

    def predict(self, X):
        linear_output = np.dot(X, self.weights) + self.bias
        y_predicted = self.activation_func(linear_output)
        return y_predicted

I've tried both the unit_step_func and sigmoid activation functions, as well as the pocket algorithm, to see if there's any difference. I'm a noob, so I'm not sure if any of this is even implemented correctly.

This is how I train these perceptrons:

import numpy as np
from PIL import Image
from Perceptron import Perceptron
import os

def load_images_from_folder(folder, digit):
    images = []
    labels = []
    for filename in os.listdir(folder):
        img = Image.open(os.path.join(folder, filename))
        if img is not None:
            images.append(np.array(img).flatten())
            label = 1 if filename.startswith(f"{digit}_") else 0
            labels.append(label)
    return np.array(images), np.array(labels)


digits_to_recognize = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

perceptrons = []
for digit_to_recognize in digits_to_recognize:
    X, y = load_images_from_folder("data", digit_to_recognize)
    p = Perceptron()
    p.fit(X, y)
    perceptrons.append(p)

In short:

the training data filenames are in the format digit_variant. As I said before, each digit has 3 variants,

so for digit 0 it is 0_0, 0_1, 0_2,

for digit 1 it's: 1_0, 1_1, 1_2,

and so on...

The load_images_from_folder function loads the 30 images and checks each filename. If the digit part of the name equals the digit argument, it appends 1 to labels, so that the perceptron knows it's the desired digit.

I know that it'd be better to load these images once and save them in some array of tuples, for example, but I don't care about the performance right now (I won't care later either).

for digit 0 the labels array is [1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]

for digit 1 the labels array is [0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]

and so on...

then I train 10 perceptrons using this data.
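The one-vs-rest labelling above can be sketched in isolation like this (the filenames here are hypothetical stand-ins for the real data folder; note that os.listdir gives no ordering guarantee, which is why the startswith check matters rather than the file order):

```python
# Hypothetical filenames covering 10 digits x 3 variants each.
filenames = [f"{d}_{v}.png" for d in range(10) for v in range(3)]

def labels_for(digit, filenames):
    # one-vs-rest labels: 1 for the variants of `digit`, 0 otherwise
    return [1 if f.startswith(f"{digit}_") else 0 for f in filenames]

print(labels_for(0, filenames)[:6])  # [1, 1, 1, 0, 0, 0]
```

Each digit's label vector then contains exactly three ones, matching the arrays shown above.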

This exercise also requires some kind of GUI that lets me draw a number. I've chosen pygame; I could have used PyQt, it doesn't really matter.

This is the code. You can skip it, it's not that important (except for the on_rec_button function, which I'll come back to):

import pygame
import sys
import numpy as np  # needed for drawing_matrix below

pygame.init()

cols, rows = 5, 7
square_size = 50
width, height = cols * square_size, (rows + 2) * square_size
screen = pygame.display.set_mode((width, height))
pygame.display.set_caption("Zad1")

rec_button_color = (0, 255, 0)
rec_button_rect = pygame.Rect(0, rows * square_size, width, square_size)

clear_button_color = (255, 255, 0)
clear_button_rect = pygame.Rect(0, (rows + 1) * square_size + 1, width, square_size)

mouse_pressed = False

drawing_matrix = np.zeros((rows, cols), dtype=int)


def color_square(x, y):
    col = x // square_size
    row = y // square_size

    if 0 <= row < rows and 0 <= col < cols:
        drawing_matrix[row, col] = 1


def draw_button(color, rect):
    pygame.draw.rect(screen, color, rect)


def on_rec_button():
    np_array_representation = drawing_matrix.flatten()

    for digit_to_recognize in digits_to_recognize:
        p = perceptrons[digit_to_recognize]
        predicted_number = p.predict(np_array_representation)
        if predicted_number == digit_to_recognize:
            print(f"Image has been recognized as number {digit_to_recognize}")


def on_clear_button():
    drawing_matrix.fill(0)


while True:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            pygame.quit()
            sys.exit()

        elif event.type == pygame.MOUSEBUTTONDOWN and event.button == 3:
            mouse_pressed = True

        elif event.type == pygame.MOUSEBUTTONUP and event.button == 3:
            mouse_pressed = False

        elif event.type == pygame.MOUSEMOTION:
            mouse_x, mouse_y = event.pos
            if mouse_pressed:
                color_square(mouse_x, mouse_y)

        elif event.type == pygame.MOUSEBUTTONDOWN and event.button == 1:
            if rec_button_rect.collidepoint(event.pos):
                on_rec_button()
            if clear_button_rect.collidepoint(event.pos):
                on_clear_button()

    for i in range(rows):
        for j in range(cols):
            if drawing_matrix[i, j] == 1:
                pygame.draw.rect(screen, (255, 0, 0), (j * square_size, i * square_size, square_size, square_size))
            else:
                pygame.draw.rect(screen, (0, 0, 0), (j * square_size, i * square_size, square_size, square_size))

    draw_button(rec_button_color, rec_button_rect)
    draw_button(clear_button_color, clear_button_rect)

    pygame.display.flip()

So, when I run the app, draw the digit 3, and click the green button that runs the on_rec_button function, I expect to see "Image has been recognized as number 3", but I get "Image has been recognized as number 0" instead.

This is what I draw:

[image: the digit drawn in the 5x7 grid]

These are training data:

[images: the three 5x7 training variants]

These are very small because of the resolution 5x7 that was required in the exercise.

When I draw the digit 1, I get 2 results: "Image has been recognized as number 0" and "Image has been recognized as number 1".

[image: the digit 1 drawn in the 5x7 grid]

What should I do to make it work the way I want? I don't expect 100% accuracy, but I guess it could be better.
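One detail worth noting about on_rec_button: predict returns a 0/1 class indicator, so the comparison predicted_number == digit_to_recognize can only ever match for digits 0 and 1. A common alternative for a one-vs-rest ensemble is to score every classifier on its raw linear output and report the most confident digit. A sketch under that assumption, with randomly generated stand-ins for the 10 trained perceptrons:

```python
import numpy as np

# Hypothetical stand-ins for 10 trained one-vs-rest perceptrons,
# each represented as a (weights, bias) pair over 35 pixels (5x7).
rng = np.random.default_rng(42)
models = [(rng.normal(size=35), 0.0) for _ in range(10)]

def recognize(pixels, models):
    # Score each perceptron on its raw (pre-activation) output and
    # pick the digit whose classifier is most confident, instead of
    # comparing a 0/1 prediction against the digit index.
    scores = [pixels @ w + b for (w, b) in models]
    return int(np.argmax(scores))

drawing = rng.integers(0, 2, size=35).astype(float)
digit = recognize(drawing, models)
print(digit)
```

This always yields exactly one digit per drawing, which also avoids the "two results" situation described above.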



from How can I fix my perceptron to recognize numbers?

Tuesday 5 December 2023

Why is this AutoKeras NAS failing?

I am using

  1. NVIDIA GeForce GTX 780 (Kepler)
  2. Driver Version: 470.223.02
  3. CUDA Toolkit v11.4.0
  4. cuDNN v8.2.4
  5. TensorFlow and Keras v2.8.0
  6. AutoKeras v1.0.17
  7. Ubuntu 20.04

=======================

I have two directories, train_data_npy and valid_data_npy where there are 3013 and 1506 *.npy files, respectively.

Each *.npy file has 12 columns of float types, of which the first nine columns are features and the last three columns are one-hot-encoded labels of three classes.

The following Python script loads those *.npy files in chunks so that memory is not exhausted while searching for a neural-network model.

However, the script is failing.

What exactly is the issue with the given script, and why is it failing?

Or, is it not about the script but rather about the installation issues of CUDA, TF, or AutoKeras?

# File: cnn_search_by_chunk.py
import numpy as np
import tensorflow as tf
import os
import autokeras as ak

N_FEATURES = 9
BATCH_SIZE = 100

def get_data_generator(folder_path, batch_size, n_features):
    """Get a generator returning batches of data from .npy files in the specified folder.

    The shape of the features is (batch_size, n_features).
    """
    def data_generator():
        files = os.listdir(folder_path)
        npy_files = [f for f in files if f.endswith('.npy')]

        for npy_file in npy_files:
            data = np.load(os.path.join(folder_path, npy_file))
            x = data[:, :n_features]
            y = data[:, n_features:]
            y = np.argmax(y, axis=1)  # Convert one-hot-encoded labels back to integers

            for i in range(0, len(x), batch_size):
                yield x[i:i+batch_size], y[i:i+batch_size]

    return data_generator

train_data_folder = '/home/my_user_name/original_data/train_data_npy'
validation_data_folder = '/home/my_user_name/original_data/valid_data_npy'

train_dataset = tf.data.Dataset.from_generator(
    get_data_generator(train_data_folder, BATCH_SIZE, N_FEATURES),
    output_signature=(
        tf.TensorSpec(shape=(None, N_FEATURES), dtype=tf.float32),
        tf.TensorSpec(shape=(None,), dtype=tf.int32)  # Labels are now 1D integers
    )
)

validation_dataset = tf.data.Dataset.from_generator(
    get_data_generator(validation_data_folder, BATCH_SIZE, N_FEATURES),
    output_signature=(
        tf.TensorSpec(shape=(None, N_FEATURES), dtype=tf.float32),
        tf.TensorSpec(shape=(None,), dtype=tf.int32)  # Labels are now 1D integers
    )
)

clf = ak.StructuredDataClassifier(overwrite=True, max_trials=1, seed=5)
clf.fit(x=train_dataset, validation_data=validation_dataset, batch_size=BATCH_SIZE)
print(clf.evaluate(validation_dataset))
my_user_name@192:~/my_project_name_v2$ python3 cnn_search_by_chunk.py
2023-11-29 20:05:53.532005: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Using TensorFlow backend
2023-11-29 20:05:55.467804: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1960] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...

Search: Running Trial #1

Hyperparameter    |Value             |Best Value So Far
structured_data...|True              |?
structured_data...|2                 |?
structured_data...|False             |?
structured_data...|0                 |?
structured_data...|32                |?
structured_data...|32                |?
classification_...|0                 |?
optimizer         |adam              |?
learning_rate     |0.001             |?

Epoch 1/1000
33143/33143 [==============================] - 149s 4ms/step - loss: 0.0670 - accuracy: 0.9677 - val_loss: 0.0612 - val_accuracy: 0.9708
Epoch 2/1000
33143/33143 [==============================] - 146s 4ms/step - loss: 0.0625 - accuracy: 0.9697 - val_loss: 0.0598 - val_accuracy: 0.9715
Epoch 3/1000
33143/33143 [==============================] - 147s 4ms/step - loss: 0.0617 - accuracy: 0.9702 - val_loss: 0.0593 - val_accuracy: 0.9717
Epoch 4/1000
33143/33143 [==============================] - 146s 4ms/step - loss: 0.0614 - accuracy: 0.9703 - val_loss: 0.0591 - val_accuracy: 0.9718
Epoch 5/1000
33143/33143 [==============================] - 147s 4ms/step - loss: 0.0612 - accuracy: 0.9705 - val_loss: 0.0590 - val_accuracy: 0.9719
Epoch 6/1000
33143/33143 [==============================] - 145s 4ms/step - loss: 0.0610 - accuracy: 0.9707 - val_loss: 0.0588 - val_accuracy: 0.9721
Epoch 7/1000
33143/33143 [==============================] - 147s 4ms/step - loss: 0.0608 - accuracy: 0.9707 - val_loss: 0.0586 - val_accuracy: 0.9721
Epoch 8/1000
33143/33143 [==============================] - 147s 4ms/step - loss: 0.0607 - accuracy: 0.9709 - val_loss: 0.0585 - val_accuracy: 0.9723
Epoch 9/1000
33143/33143 [==============================] - 146s 4ms/step - loss: 0.0605 - accuracy: 0.9710 - val_loss: 0.0584 - val_accuracy: 0.9723
Epoch 10/1000
33143/33143 [==============================] - 146s 4ms/step - loss: 0.0604 - accuracy: 0.9710 - val_loss: 0.0583 - val_accuracy: 0.9724
Epoch 11/1000
33143/33143 [==============================] - 148s 4ms/step - loss: 0.0603 - accuracy: 0.9711 - val_loss: 0.0583 - val_accuracy: 0.9724
Epoch 12/1000
33143/33143 [==============================] - 146s 4ms/step - loss: 0.0602 - accuracy: 0.9712 - val_loss: 0.0582 - val_accuracy: 0.9724
Epoch 13/1000
33143/33143 [==============================] - 147s 4ms/step - loss: 0.0601 - accuracy: 0.9712 - val_loss: 0.0582 - val_accuracy: 0.9724
Epoch 14/1000
33143/33143 [==============================] - 148s 4ms/step - loss: 0.0601 - accuracy: 0.9712 - val_loss: 0.0582 - val_accuracy: 0.9724
Epoch 15/1000
33143/33143 [==============================] - 146s 4ms/step - loss: 0.0600 - accuracy: 0.9713 - val_loss: 0.0582 - val_accuracy: 0.9724
Epoch 16/1000
33143/33143 [==============================] - 146s 4ms/step - loss: 0.0600 - accuracy: 0.9713 - val_loss: 0.0581 - val_accuracy: 0.9725
Epoch 17/1000
33143/33143 [==============================] - 147s 4ms/step - loss: 0.0600 - accuracy: 0.9713 - val_loss: 0.0581 - val_accuracy: 0.9725
Epoch 18/1000
33143/33143 [==============================] - 147s 4ms/step - loss: 0.0599 - accuracy: 0.9713 - val_loss: 0.0582 - val_accuracy: 0.9724
Epoch 19/1000
33143/33143 [==============================] - 145s 4ms/step - loss: 0.0599 - accuracy: 0.9713 - val_loss: 0.0581 - val_accuracy: 0.9724
Epoch 20/1000
33143/33143 [==============================] - 144s 4ms/step - loss: 0.0599 - accuracy: 0.9713 - val_loss: 0.0582 - val_accuracy: 0.9724
Epoch 21/1000
33143/33143 [==============================] - 147s 4ms/step - loss: 0.0599 - accuracy: 0.9713 - val_loss: 0.0582 - val_accuracy: 0.9724
Epoch 22/1000
33143/33143 [==============================] - 144s 4ms/step - loss: 0.0599 - accuracy: 0.9713 - val_loss: 0.0581 - val_accuracy: 0.9724
Epoch 23/1000
33143/33143 [==============================] - 146s 4ms/step - loss: 0.0600 - accuracy: 0.9713 - val_loss: 0.0582 - val_accuracy: 0.9724
Epoch 24/1000
33143/33143 [==============================] - 145s 4ms/step - loss: 0.0599 - accuracy: 0.9714 - val_loss: 0.0581 - val_accuracy: 0.9725
Epoch 25/1000
33143/33143 [==============================] - 147s 4ms/step - loss: 0.0599 - accuracy: 0.9714 - val_loss: 0.0581 - val_accuracy: 0.9724
Epoch 26/1000
33143/33143 [==============================] - 147s 4ms/step - loss: 0.0599 - accuracy: 0.9713 - val_loss: 0.0581 - val_accuracy: 0.9724
Trial 1 Complete [01h 16m 38s]
val_accuracy: 0.9724819660186768

Best val_accuracy So Far: 0.9724819660186768
Total elapsed time: 01h 16m 38s
WARNING:tensorflow:Detecting that an object or model or tf.train.Checkpoint is being deleted with unrestored values. See the following logs for the specific values in question. To silence these warnings, use `status.expect_partial()`. See https://www.tensorflow.org/api_docs/python/tf/train/Checkpoint#restorefor details about the status object returned by the restore function.
WARNING:tensorflow:Detecting that an object or model or tf.train.Checkpoint is being deleted with unrestored values. See the following logs for the specific values in question. To silence these warnings, use `status.expect_partial()`. See https://www.tensorflow.org/api_docs/python/tf/train/Checkpoint#restorefor details about the status object returned by the restore function.
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.1
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.1
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.2
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.2
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.3
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.3
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.4
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.4
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.5
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.5
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.6
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.6
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.7
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.7
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.8
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.8
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.9
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.9
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.10
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.10
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.11
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.11
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.12
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.12
2023-11-29 21:23:57.450991: W tensorflow/core/framework/op_kernel.cc:1828] OP_REQUIRES failed at lookup_table_op.cc:929 : FAILED_PRECONDITION: Table not initialized.
2023-11-29 21:23:57.451029: W tensorflow/core/framework/op_kernel.cc:1828] OP_REQUIRES failed at lookup_table_op.cc:929 : FAILED_PRECONDITION: Table not initialized.
2023-11-29 21:23:57.451059: W tensorflow/core/framework/op_kernel.cc:1828] OP_REQUIRES failed at lookup_table_op.cc:929 : FAILED_PRECONDITION: Table not initialized.
2023-11-29 21:23:57.451091: W tensorflow/core/framework/op_kernel.cc:1828] OP_REQUIRES failed at lookup_table_op.cc:929 : FAILED_PRECONDITION: Table not initialized.
2023-11-29 21:23:57.451123: W tensorflow/core/framework/op_kernel.cc:1828] OP_REQUIRES failed at lookup_table_op.cc:929 : FAILED_PRECONDITION: Table not initialized.
2023-11-29 21:23:57.451157: W tensorflow/core/framework/op_kernel.cc:1828] OP_REQUIRES failed at lookup_table_op.cc:929 : FAILED_PRECONDITION: Table not initialized.
2023-11-29 21:23:57.451185: W tensorflow/core/framework/op_kernel.cc:1828] OP_REQUIRES failed at lookup_table_op.cc:929 : FAILED_PRECONDITION: Table not initialized.
2023-11-29 21:23:57.451213: W tensorflow/core/framework/op_kernel.cc:1828] OP_REQUIRES failed at lookup_table_op.cc:929 : FAILED_PRECONDITION: Table not initialized.
2023-11-29 21:23:57.451250: W tensorflow/core/framework/op_kernel.cc:1828] OP_REQUIRES failed at lookup_table_op.cc:929 : FAILED_PRECONDITION: Table not initialized.
Traceback (most recent call last):
  File "cnn_search_by_chunk.py", line 50, in <module>
    print(clf.evaluate(validation_dataset))
  File "/home/my_user_name/.local/lib/python3.8/site-packages/autokeras/tasks/structured_data.py", line 187, in evaluate
    return super().evaluate(x=x, y=y, **kwargs)
  File "/home/my_user_name/.local/lib/python3.8/site-packages/autokeras/auto_model.py", line 492, in evaluate
    return utils.evaluate_with_adaptive_batch_size(
  File "/home/my_user_name/.local/lib/python3.8/site-packages/autokeras/utils/utils.py", line 68, in evaluate_with_adaptive_batch_size
    return run_with_adaptive_batch_size(
  File "/home/my_user_name/.local/lib/python3.8/site-packages/autokeras/utils/utils.py", line 101, in run_with_adaptive_batch_size
    history = func(x=x, validation_data=validation_data, **fit_kwargs)
  File "/home/my_user_name/.local/lib/python3.8/site-packages/autokeras/utils/utils.py", line 70, in <lambda>
    lambda x, validation_data, **kwargs: model.evaluate(
  File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/utils/traceback_utils.py", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/home/my_user_name/.local/lib/python3.8/site-packages/tensorflow/python/eager/execute.py", line 53, in quick_execute
    tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.FailedPreconditionError: Graph execution error:

Detected at node 'model/multi_category_encoding/string_lookup_15/None_Lookup/LookupTableFindV2' defined at (most recent call last):
    File "cnn_search_by_chunk.py", line 50, in <module>
      print(clf.evaluate(validation_dataset))
    File "/home/my_user_name/.local/lib/python3.8/site-packages/autokeras/tasks/structured_data.py", line 187, in evaluate
      return super().evaluate(x=x, y=y, **kwargs)
    File "/home/my_user_name/.local/lib/python3.8/site-packages/autokeras/auto_model.py", line 492, in evaluate
      return utils.evaluate_with_adaptive_batch_size(
    File "/home/my_user_name/.local/lib/python3.8/site-packages/autokeras/utils/utils.py", line 68, in evaluate_with_adaptive_batch_size
      return run_with_adaptive_batch_size(
    File "/home/my_user_name/.local/lib/python3.8/site-packages/autokeras/utils/utils.py", line 101, in run_with_adaptive_batch_size
      history = func(x=x, validation_data=validation_data, **fit_kwargs)
    File "/home/my_user_name/.local/lib/python3.8/site-packages/autokeras/utils/utils.py", line 70, in <lambda>
      lambda x, validation_data, **kwargs: model.evaluate(
    File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/utils/traceback_utils.py", line 65, in error_handler
      return fn(*args, **kwargs)
    File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/engine/training.py", line 2200, in evaluate
      logs = test_function_runner.run_step(
    File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/engine/training.py", line 4000, in run_step
      tmp_logs = self._function(dataset_or_iterator)
    File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/engine/training.py", line 1972, in test_function
      return step_function(self, iterator)
    File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/engine/training.py", line 1956, in step_function
      outputs = model.distribute_strategy.run(run_step, args=(data,))
    File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/engine/training.py", line 1944, in run_step
      outputs = model.test_step(data)
    File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/engine/training.py", line 1850, in test_step
      y_pred = self(x, training=False)
    File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/utils/traceback_utils.py", line 65, in error_handler
      return fn(*args, **kwargs)
    File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/engine/training.py", line 569, in __call__
      return super().__call__(*args, **kwargs)
    File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/utils/traceback_utils.py", line 65, in error_handler
      return fn(*args, **kwargs)
    File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/engine/base_layer.py", line 1150, in __call__
      outputs = call_fn(inputs, *args, **kwargs)
    File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/utils/traceback_utils.py", line 96, in error_handler
      return fn(*args, **kwargs)
    File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/engine/functional.py", line 512, in call
      return self._run_internal_graph(inputs, training=training, mask=mask)
    File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/engine/functional.py", line 669, in _run_internal_graph
      outputs = node.layer(*args, **kwargs)
    File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/utils/traceback_utils.py", line 65, in error_handler
      return fn(*args, **kwargs)
    File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/engine/base_layer.py", line 1150, in __call__
      outputs = call_fn(inputs, *args, **kwargs)
    File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/utils/traceback_utils.py", line 96, in error_handler
      return fn(*args, **kwargs)
    File "/home/my_user_name/.local/lib/python3.8/site-packages/autokeras/keras_layers.py", line 91, in call
      for input_node, encoding_layer in zip(split_inputs, self.encoding_layers):
    File "/home/my_user_name/.local/lib/python3.8/site-packages/autokeras/keras_layers.py", line 92, in call
      if encoding_layer is None:
    File "/home/my_user_name/.local/lib/python3.8/site-packages/autokeras/keras_layers.py", line 100, in call
      output_nodes.append(
    File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/utils/traceback_utils.py", line 65, in error_handler
      return fn(*args, **kwargs)
    File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/engine/base_layer.py", line 1150, in __call__
      outputs = call_fn(inputs, *args, **kwargs)
    File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/utils/traceback_utils.py", line 96, in error_handler
      return fn(*args, **kwargs)
    File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/layers/preprocessing/index_lookup.py", line 756, in call
      lookups = self._lookup_dense(inputs)
    File "/home/my_user_name/.local/lib/python3.8/site-packages/keras/src/layers/preprocessing/index_lookup.py", line 792, in _lookup_dense
      lookups = self.lookup_table.lookup(inputs)
Node: 'model/multi_category_encoding/string_lookup_15/None_Lookup/LookupTableFindV2'
Table not initialized.
         [[]] [Op:__inference_test_function_5785123]
2023-11-29 21:23:57.618149: W tensorflow/core/kernels/data/generator_dataset_op.cc:108] Error occurred when finalizing GeneratorDataset iterator: FAILED_PRECONDITION: Python interpreter state is not initialized. The process may be terminated.
         [[]]
2023-11-29 21:23:57.618266: W tensorflow/core/kernels/data/generator_dataset_op.cc:108] Error occurred when finalizing GeneratorDataset iterator: FAILED_PRECONDITION: Python interpreter state is not initialized. The process may be terminated.
         [[]]
2023-11-29 21:23:57.618360: W tensorflow/core/kernels/data/generator_dataset_op.cc:108] Error occurred when finalizing GeneratorDataset iterator: FAILED_PRECONDITION: Python interpreter state is not initialized. The process may be terminated.
         [[]]
2023-11-29 21:23:57.618434: W tensorflow/core/kernels/data/generator_dataset_op.cc:108] Error occurred when finalizing GeneratorDataset iterator: FAILED_PRECONDITION: Python interpreter state is not initialized. The process may be terminated.
         [[]]
my_user_name@192:~/my_project_name_v2$
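The failing node name (model/multi_category_encoding/string_lookup_15/...) hints that the StructuredDataClassifier treated some feature columns as categorical and built string-lookup tables that were never initialized in the evaluation graph. A sketch, not a verified fix: cast the features to float32 before they reach the dataset so no column looks categorical (the load_chunks name is mine, not from the script above):

```python
import os
import numpy as np

N_FEATURES = 9

def load_chunks(folder_path, batch_size, n_features=N_FEATURES):
    """Yield (features, labels) batches from .npy files, with the
    features cast to float32 up front so every column is numeric."""
    npy_files = sorted(f for f in os.listdir(folder_path) if f.endswith(".npy"))
    for npy_file in npy_files:
        data = np.load(os.path.join(folder_path, npy_file))
        x = data[:, :n_features].astype(np.float32)
        # convert one-hot labels back to integer class indices
        y = np.argmax(data[:, n_features:], axis=1).astype(np.int32)
        for i in range(0, len(x), batch_size):
            yield x[i:i + batch_size], y[i:i + batch_size]
```

If the columns still get inferred as categorical, StructuredDataClassifier also accepts explicit column type hints; checking that option in the AutoKeras documentation for v1.0.17 may be worthwhile.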


from Why is this AutoKeras NAS failing?

Monday 4 December 2023

svelte: Issue with API call staying pending when accessing application via IP address or hostname, server response as stream data, chat-UI

Description:

I have forked the chat-ui project and made several changes, including Azure AD integration, OpenAI API compatible serving layer support, and making it more container-friendly. The application works fine on localhost, but when I try to access it via an IP address, I encounter an issue.

Problem:

The backend code provides data as a stream to the frontend using the POST method. On localhost, everything works as expected, but when accessing the application via an IP address, the API call from the network stays pending until it reaches component.close() in the frontend. The issue seems to be related to the stream not being processed properly.

Backend Code (excerpt):

export async function POST({ request, locals, params, getClientAddress }) {
    const id = z.string().parse(params.id);
    const convId = new ObjectId(id);
    const promptedAt = new Date();

    const userId = locals.user?._id ?? locals.sessionId;

    // check user
    if (!userId) {
        throw error(401, "Unauthorized");
    }
    console.log("post", {userId, params, ip: getClientAddress()})

    // check if the user has access to the conversation
    const conv = await collections.conversations.findOne({
        _id: convId,
        ...authCondition(locals),
    });

    if (!conv) {
        throw error(404, "Conversation not found");
    }

    // register the event for ratelimiting
    await collections.messageEvents.insertOne({
        userId: userId,
        createdAt: new Date(),
        ip: getClientAddress(),
    });

    // guest mode check
    if (
        !locals.user?._id &&
        requiresUser &&
        (MESSAGES_BEFORE_LOGIN ? parseInt(MESSAGES_BEFORE_LOGIN) : 0) > 0
    ) {
        const totalMessages =
            (
                await collections.conversations
                    .aggregate([
                        { $match: authCondition(locals) },
                        { $project: { messages: 1 } },
                        { $unwind: "$messages" },
                        { $match: { "messages.from": "assistant" } },
                        { $count: "messages" },
                    ])
                    .toArray()
            )[0]?.messages ?? 0;

        if (totalMessages > parseInt(MESSAGES_BEFORE_LOGIN)) {
            throw error(429, "Exceeded number of messages before login");
        }
    }

    // check if the user is rate limited
    const nEvents = Math.max(
        await collections.messageEvents.countDocuments({ userId }),
        await collections.messageEvents.countDocuments({ ip: getClientAddress() })
    );

    if (RATE_LIMIT != "" && nEvents > parseInt(RATE_LIMIT)) {
        throw error(429, ERROR_MESSAGES.rateLimited);
    }

    // fetch the model
    const model = models.find((m) => m.id === conv.model);

    if (!model) {
        throw error(410, "Model not available anymore");
    }

    // finally parse the content of the request
    const json = await request.json();

    const {
        inputs: newPrompt,
        response_id: responseId,
        id: messageId,
        is_retry,
        web_search: webSearch,
    } = z
        .object({
            inputs: z.string().trim().min(1),
            id: z.optional(z.string().uuid()),
            response_id: z.optional(z.string().uuid()),
            is_retry: z.optional(z.boolean()),
            web_search: z.optional(z.boolean()),
        })
        .parse(json);

    // get the list of messages
    // while checking for retries
    let messages = (() => {
        if (is_retry && messageId) {
            // if the message is a retry, replace the message and remove the messages after it
            let retryMessageIdx = conv.messages.findIndex((message) => message.id === messageId);
            if (retryMessageIdx === -1) {
                retryMessageIdx = conv.messages.length;
            }
            return [
                ...conv.messages.slice(0, retryMessageIdx),
                { content: newPrompt, from: "user", id: messageId as Message["id"], updatedAt: new Date() },
            ];
        } // else append the message at the bottom

        return [
            ...conv.messages,
            {
                content: newPrompt,
                from: "user",
                id: (messageId as Message["id"]) || crypto.randomUUID(),
                createdAt: new Date(),
                updatedAt: new Date(),
            },
        ];
    })() satisfies Message[];

    await collections.conversations.updateOne(
        {
            _id: convId,
        },
        {
            $set: {
                messages,
                title: conv.title,
                updatedAt: new Date(),
            },
        }
    );

    // we now build the stream
    const stream = new ReadableStream({
        async start(controller) {
            const updates: MessageUpdate[] = [];

            function update(newUpdate: MessageUpdate) {
                if (newUpdate.type !== "stream") {
                    updates.push(newUpdate);
                }
                controller.enqueue(JSON.stringify(newUpdate) + "\n");
            }

            update({ type: "status", status: "started" });

            if (conv.title === "New Chat" && messages.length === 1) {
                try {
                    conv.title = (await summarize(newPrompt)) ?? conv.title;
                    update({ type: "status", status: "title", message: conv.title });
                } catch (e) {
                    console.error(e);
                }
            }

            await collections.conversations.updateOne(
                {
                    _id: convId,
                },
                {
                    $set: {
                        messages,
                        title: conv.title,
                        updatedAt: new Date(),
                    },
                }
            );

            let webSearchResults: WebSearch | undefined;

            if (webSearch) {
                webSearchResults = await runWebSearch(conv, newPrompt, update);
            }

            messages[messages.length - 1].webSearch = webSearchResults;

            conv.messages = messages;

            const endpoint = await model.getEndpoint();

            for await (const output of await endpoint({ conversation: conv })) {
                // if not generated_text is here it means the generation is not done
                if (!output.generated_text) {
                    // else we get the next token
                    if (!output.token.special) {
                        update({
                            type: "stream",
                            token: output.token.text,
                        });

                        // if the last message is not from assistant, it means this is the first token
                        const lastMessage = messages[messages.length - 1];

                        if (lastMessage?.from !== "assistant") {
                            // so we create a new message
                            messages = [
                                ...messages,
                                // id doesn't match the backend id but it's not important for assistant messages
                                // First token has a space at the beginning, trim it
                                {
                                    from: "assistant",
                                    content: output.token.text.trimStart(),
                                    webSearch: webSearchResults,
                                    updates: updates,
                                    id: (responseId as Message["id"]) || crypto.randomUUID(),
                                    createdAt: new Date(),
                                    updatedAt: new Date(),
                                },
                            ];
                        } else {
                            // abort check
                            const date = abortedGenerations.get(convId.toString());
                            if (date && date > promptedAt) {
                                break;
                            }

                            if (!output) {
                                break;
                            }

                            // otherwise we just concatenate tokens
                            lastMessage.content += output.token.text;
                        }
                    }
                } else {
                    // add output.generated text to the last message
                    messages = [
                        ...messages.slice(0, -1),
                        {
                            ...messages[messages.length - 1],
                            content: output.generated_text,
                            updates: updates,
                            updatedAt: new Date(),
                        },
                    ];
                }
            }

            await collections.conversations.updateOne(
                {
                    _id: convId,
                },
                {
                    $set: {
                        messages,
                        title: conv?.title,
                        updatedAt: new Date(),
                    },
                }
            );

            update({
                type: "finalAnswer",
                text: messages[messages.length - 1].content,
            });
            controller.close();
        },
        async cancel() {
            await collections.conversations.updateOne(
                {
                    _id: convId,
                },
                {
                    $set: {
                        messages,
                        title: conv.title,
                        updatedAt: new Date(),
                    },
                }
            );
        },
    });

    // Todo: maybe we should wait for the message to be saved before ending the response - in case of errors
    return new Response(stream, {
        headers: {
            "Content-Type": "application/x-ndjson",
        },
    });
}

Frontend Code (excerpt):

async function writeMessage(message: string, messageId = randomUUID()) {
        if (!message.trim()) return;

        try {
            isAborted = false;
            loading = true;
            pending = true;

            // first we check if the messageId already exists, indicating a retry

            let retryMessageIndex = messages.findIndex((msg) => msg.id === messageId);
            const isRetry = retryMessageIndex !== -1;
            // if it's not a retry we just use the whole array
            if (!isRetry) {
                retryMessageIndex = messages.length;
            }

            // slice up to the point of the retry
            messages = [
                ...messages.slice(0, retryMessageIndex),
                { from: "user", content: message, id: messageId },
            ];

            const responseId = randomUUID();

            const response = await fetch(`${base}/conversation/${$page.params.id}`, {
                method: "POST",
                headers: { "Content-Type": "application/json" },
                body: JSON.stringify({
                    inputs: message,
                    id: messageId,
                    response_id: responseId,
                    is_retry: isRetry,
                    web_search: $webSearchParameters.useSearch,
                }),
            });

            if (!response.body) {
                throw new Error("Body not defined");
            }

            if (!response.ok) {
                error.set((await response.json())?.message);
                return;
            }
            // eslint-disable-next-line no-undef
            const encoder = new TextDecoderStream();
            const reader = response?.body?.pipeThrough(encoder).getReader();
            let finalAnswer = "";

            // this is a bit ugly
            // we read the stream until we get the final answer
            while (finalAnswer === "") {
                await new Promise((r) => setTimeout(r, 25));

                // check for abort
                if (isAborted) {
                    reader?.cancel();
                    break;
                }

                // if there is something to read
                await reader?.read().then(async ({ done, value }) => {
                    // we read, if it's done we cancel
                    if (done) {
                        reader.cancel();
                        return;
                    }

                    if (!value) {
                        return;
                    }

                    // if it's not done we parse the value, which contains all messages
                    const inputs = value.split("\n");
                    inputs.forEach(async (el: string) => {
                        try {
                            const update = JSON.parse(el) as MessageUpdate;
                            if (update.type === "finalAnswer") {
                                finalAnswer = update.text;
                                reader.cancel();
                                invalidate(UrlDependency.Conversation);
                            } else if (update.type === "stream") {
                                pending = false;

                                let lastMessage = messages[messages.length - 1];

                                if (lastMessage.from !== "assistant") {
                                    messages = [
                                        ...messages,
                                        { from: "assistant", id: randomUUID(), content: update.token },
                                    ];
                                } else {
                                    lastMessage.content += update.token;
                                    messages = [...messages];
                                }
                            } else if (update.type === "webSearch") {
                                webSearchMessages = [...webSearchMessages, update];
                            } else if (update.type === "status") {
                                if (update.status === "title" && update.message) {
                                    const conv = data.conversations.find(({ id }) => id === $page.params.id);
                                    if (conv) {
                                        conv.title = update.message;

                                        $titleUpdate = {
                                            title: update.message,
                                            convId: $page.params.id,
                                        };
                                    }
                                }
                            }
                        } catch (parseError) {
                            // in case of parsing error we wait for the next message
                            return;
                        }
                    });
                });
            }

            // reset the websearchmessages
            webSearchMessages = [];

            await invalidate(UrlDependency.ConversationList);
        } catch (err) {
            if (err instanceof Error && err.message.includes("overloaded")) {
                $error = "Too much traffic, please try again.";
            } else if (err instanceof Error && err.message.includes("429")) {
                $error = ERROR_MESSAGES.rateLimited;
            } else if (err instanceof Error) {
                $error = err.message;
            } else {
                $error = ERROR_MESSAGES.default;
            }
            console.error(err);
        } finally {
            loading = false;
            pending = false;
        }
    }

Steps to Reproduce:

  1. Fork the chat-ui project.
  2. Make the specified changes related to Azure AD integration, OpenAI API, and container support.
  3. Run the application on localhost and access it via an IP address.
  4. Observe the behavior where the API call stays pending until controller.close() is reached.

Expected Behavior:

The application should behave consistently whether accessed via localhost or an IP address or hostname. The API call should not stay pending, and the stream should be processed correctly.

Additional Information:

  • Before adding controller.close() in the stream code, the Windows machine was not returning the response at all, and at the end it reported an uncaught promise error.

  • Network requests and responses from the browser's developer tools: on localhost the API calls start immediately, the responses arrive as a waterfall, and the stream is written at the front end correctly. Via IP the request stays in a pending state, and when the final message arrives everything is written at once. [screenshots]

  • Server logs show the same difference: when the client IP is ::1 the front end starts writing and processing stream data; when it is '::ffff:10.10.100.106' the request stays pending, I assume until stream generation is completed. [screenshot]

  • Any specific configurations or dependencies related to hosting the application via an IP address.

Environment:

  • Operating System: works fine on macOS, but the issue occurs on Windows
  • Browser: Chrome
  • Node.js version: >18
  • Any other relevant environment details.

Note: Please let me know if additional code snippets or information are needed. Thanks for your help!





from svelte: Issue with API call staying pending when accessing application via IP address or hostname, server response as stream data, chat-UI

Upload file with drag & drop using selenium python

I have developed a Python program using Selenium to upload files through drag and drop. In the actual scenario, the user drags the file from their system, and after dropping it onto the site's page, an element with the structure "//div[contains(@class, 'drops-container')]" appears after a short delay. The user then needs to release the file inside this element for the upload to take place.

I am trying to replicate this process using Selenium in Python. Please help me.
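Selenium cannot perform an OS-level file drag, so one commonly used workaround is to inject a temporary `<input type="file">` with JavaScript, send the file path to it, and fire a synthetic drop event carrying the file on the target element. This is only a sketch: the `drop_file` helper name is mine, and nothing here is verified against the actual site from the question.

```python
# JavaScript run in the browser: create a hidden file input, and when a path
# is sent to it, re-package the chosen file into a synthetic "drop" event
# dispatched on the target element passed in as arguments[0].
JS_DROP_FILE = """
var target = arguments[0];
var input = document.createElement('input');
input.type = 'file';
input.style.display = 'none';
document.body.appendChild(input);
input.addEventListener('change', function () {
    var dt = new DataTransfer();
    dt.items.add(input.files[0]);
    target.dispatchEvent(new DragEvent('drop',
        {bubbles: true, cancelable: true, dataTransfer: dt}));
    input.remove();
});
return input;
"""

def drop_file(driver, drop_zone, file_path):
    """Simulate dropping file_path onto drop_zone (a Selenium WebElement)."""
    # execute_script returns the injected <input> as a WebElement
    file_input = driver.execute_script(JS_DROP_FILE, drop_zone)
    file_input.send_keys(file_path)  # fires the 'change' listener above
```

Since the `//div[contains(@class, 'drops-container')]` element only appears mid-drag, the page may need a synthetic dragenter/dragover dispatched first to reveal it; then wait for it with WebDriverWait and call `drop_file` on it.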



from Upload file with drag & drop using selenium python

Failing to establish an SSH tunnel in Python (Without Putty)

Hello Guys Happy Holidays!

I hope all of you are doing well. I'm attempting to automate the process of connecting to a Redshift server using Python without relying on PuTTY. Currently, I'm on a Windows machine, and I need to extract data from PostgreSQL on a Redshift server. However, to achieve this, I have to:

  1. Open the PuTTY .exe

  2. Enter this command in PuTTY: "Putty -P <port_number> -noagent -N -L 5534:<redshift_host>:5534 <username>@<remote_host> -i <private_key_file> -pw <password>"

  3. Wait a few seconds until PuTTY shows the tunnel is open

  4. Open my Jupyter Python Notebook and finally execute my query:

    cxn= psycopg2.connect(user="sql_username", password="sql_password", host="host_ip", port=5534, database="database_name")

Extract the data and store it as a dataframe. Since this is quite a manual and inefficient process, I have been searching the web for a way to drop PuTTY altogether and create the tunnel and extract my data directly. I have even converted my .ppk key to a .pem format to use with other libraries. I'm using paramiko and SSHTunnelForwarder, but I have not been successful in actually connecting through my tunnel. Here is my code:

from sshtunnel import SSHTunnelForwarder


ssh_host = <remote_host>
ssh_port = <port_number>
ssh_user = <username>
ssh_key_path = 'ssh_key_redshift.pem'  
ssh_password = <password>

redshift_host = <redshift_host>
redshift_port = 5534
redshift_user = <username>


# Create an SSH tunnel
with SSHTunnelForwarder(
    (ssh_host, ssh_port),
    ssh_username=ssh_user,
    ssh_pkey=ssh_key_path,
    ssh_password=ssh_password,
    remote_bind_address=(redshift_host, redshift_port),
    local_bind_address=('localhost', 5534)
) as tunnel:
    print("SSH Tunnel established successfully.")
    input("Press Enter to close the tunnel...") 

But unfortunately it is not working: the tunnel does not open and connect when I use sshtunnel.

I have heard of the paramiko library, and I would be thrilled if anyone could assist me with this. Essentially, what I need to do is establish an SSH tunnel using <port_number>, binding the local port 5534 to a Redshift host's port 5534, using the credentials and the key file that I have converted to .pem.
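For what it's worth, here is a minimal sketch of how the two configurations usually need to line up (all hosts and credentials are placeholders, and the helper names are mine). Two frequent causes of "the tunnel opens but the query hangs" are: the .pem key passphrase belongs in `ssh_private_key_password`, not `ssh_password` (which is the SSH login password), and psycopg2 must connect to the local end of the tunnel, not to the Redshift host:

```python
def build_tunnel_and_db_config(ssh_host, ssh_port, ssh_user, key_path,
                               key_passphrase, redshift_host,
                               redshift_port=5534, local_port=5534):
    """Build matching kwargs for SSHTunnelForwarder and psycopg2.connect."""
    tunnel_kwargs = {
        "ssh_address_or_host": (ssh_host, ssh_port),
        "ssh_username": ssh_user,
        "ssh_pkey": key_path,                        # the converted .pem key
        "ssh_private_key_password": key_passphrase,  # passphrase, NOT login pw
        "remote_bind_address": (redshift_host, redshift_port),
        "local_bind_address": ("127.0.0.1", local_port),
    }
    # The database client must talk to the LOCAL end of the tunnel.
    db_kwargs = {"host": "127.0.0.1", "port": local_port}
    return tunnel_kwargs, db_kwargs


def query_through_tunnel(tunnel_kwargs, db_kwargs, user, password,
                         database, sql):
    """Open the tunnel, run one query through it, return a dataframe."""
    # Imported lazily so the sketch can be read without the packages installed.
    from sshtunnel import SSHTunnelForwarder
    import psycopg2
    import pandas as pd

    with SSHTunnelForwarder(**tunnel_kwargs):
        with psycopg2.connect(user=user, password=password,
                              database=database, **db_kwargs) as cxn:
            return pd.read_sql(sql, cxn)
```

Keeping the query inside the `with` block matters: the tunnel closes as soon as the block exits, so a connection returned out of it would be dead.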

I am a very attentive and active user; I will be reading all of your comments and recommendations to choose the answer that can end this SSH suffering.



from Failing to establish an SSH tunnel in Python (Without Putty)

Sunday 3 December 2023

How to use a dropdown widget to highlight selected categorical variable in stacked bar chart?

I am learning matplotlib and ipywidgets and attempt to design an interactive bar chart, such that the selected category can be highlighted.

Data Example

Assuming I have a dataframe:

import pandas as pd
import matplotlib.pyplot as plt

data = {"Production":[10000, 12000, 14000],
        "Sales":[9000, 10500, 12000]}
index = ["2017", "2018", "2019"]

df = pd.DataFrame(data=data, index=index)
df.plot.bar(stacked=True,rot=15, title="Annual Production Vs Annual Sales")

The resulting stacked bar chart looks like below:

[screenshot: stacked bar chart of Production vs Sales]

What I am after

If we select production in the dropdown list, the blue bars will be highlighted by adding a box (or a frame) surrounding it. Similar should happen to Sales if it is selected.

Question

I am not sure whether ipywidgets and matplotlib are enough to build this feature, or whether we need another package. If it is possible with those two packages, could anyone share some clues? Thanks!
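Those two packages should be enough. A minimal sketch of one way to do it, assuming the example dataframe above: redraw the chart on each selection and frame every bar of the chosen series with an unfilled Rectangle patch.

```python
import pandas as pd
from matplotlib.patches import Rectangle

data = {"Production": [10000, 12000, 14000],
        "Sales": [9000, 10500, 12000]}
df = pd.DataFrame(data=data, index=["2017", "2018", "2019"])

def draw(highlight):
    """Redraw the stacked bar chart, framing the selected series in red."""
    ax = df.plot.bar(stacked=True, rot=15,
                     title="Annual Production Vs Annual Sales")
    for container in ax.containers:          # one BarContainer per column
        if container.get_label() != highlight:
            continue
        for bar in container:                # each bar is a Rectangle
            ax.add_patch(Rectangle(bar.get_xy(), bar.get_width(),
                                   bar.get_height(), fill=False,
                                   edgecolor="red", linewidth=2))
    return ax

# In a notebook, wire it to a dropdown (requires ipywidgets):
#   from ipywidgets import interact
#   interact(draw, highlight=list(df.columns))
```

`interact` generates the dropdown from the list of column names and calls `draw` on every change; redrawing from scratch is simpler than moving the frame patches around.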



from How to use a dropdown widget to highlight selected categorical variable in stacked bar chart?

Saturday 2 December 2023

Fit Text to Circle (With Scaling) in HTML Canvas, while Typing, with React

I'm trying to have text fit a circle while typing, something like this:

Example Image 1

I've tried following Mike Bostock's tutorial, but failed so far, here's my pitiful attempt:

import React, { useEffect, useRef, useState } from "react";

export const TwoPI = 2 * Math.PI;

export function setupGridWidthHeightAndScale(
  width: number,
  height: number,
  canvas: HTMLCanvasElement
) {
  canvas.style.width = width + "px";
  canvas.style.height = height + "px";

  // Otherwise we get blurry lines
  // Reference: [Stack Overflow - Canvas drawings, like lines, are blurry](https://stackoverflow.com/a/59143499/4756173)
  const scale = window.devicePixelRatio;

  canvas.width = width * scale;
  canvas.height = height * scale;

  const canvasCtx = canvas.getContext("2d")!;

  canvasCtx.scale(scale, scale);
}


type CanvasProps = {
  width: number;
  height: number;
};

export function TextInCircle({
  width,
  height,
}: CanvasProps) {
  const [text, setText] = useState("");

  const canvasRef = useRef<HTMLCanvasElement>(null);

  function getContext() {
    const canvas = canvasRef.current!;
    return canvas.getContext("2d")!;
  }

  useEffect(() => {
    const canvas = canvasRef.current!;
    setupGridWidthHeightAndScale(width, height, canvas);

    const ctx = getContext();

    // Background
    ctx.fillStyle = "black";
    ctx.fillRect(0, 0, width, height);

    // Circle
    ctx.beginPath();
    ctx.arc(width / 2, height / 2, 100, 0, TwoPI);
    ctx.closePath();

    // Fill the Circle
    ctx.fillStyle = "white";
    ctx.fill();
  }, [width, height]);

  function handleChange(
    e: React.ChangeEvent<HTMLInputElement>
  ) {
    const newText = e.target.value;
    setText(newText);

    // Split Words
    // use newText: the `text` state is still stale at this point
    const words = newText.split(/\s+/g); // To hyphenate: /\s+|(?<=-)/
    if (!words[words.length - 1]) words.pop();
    if (!words[0]) words.shift();

    // Get Width
    const lineHeight = 12;
    const targetWidth = Math.sqrt(
      measureWidth(newText.trim()) * lineHeight
    );

    // Split Lines accordingly
    const lines = splitLines(targetWidth, words);

    // Get radius so we can scale
    const radius = getRadius(lines, lineHeight);

    // Draw Text
    const ctx = getContext();

    ctx.textAlign = "center";
    ctx.fillStyle = "black";
    for (const [i, l] of lines.entries()) {
      // I'm totally lost as to how to proceed here...
      ctx.fillText(
        l.text,
        width / 2 - l.width / 2,
        height / 2 + i * lineHeight
      );
    }
  }

  function measureWidth(s: string) {
    const ctx = getContext();
    return ctx.measureText(s).width;
  }

  function splitLines(
    targetWidth: number,
    words: string[]
  ) {
    let line;
    let lineWidth0 = Infinity;
    const lines = [];

    for (let i = 0, n = words.length; i < n; ++i) {
      let lineText1 =
        (line ? line.text + " " : "") + words[i];

      let lineWidth1 = measureWidth(lineText1);

      if ((lineWidth0 + lineWidth1) / 2 < targetWidth) {
        line!.width = lineWidth0 = lineWidth1;
        line!.text = lineText1;
      } else {
        lineWidth0 = measureWidth(words[i]);
        line = { width: lineWidth0, text: words[i] };
        lines.push(line);
      }
    }
    return lines;
  }

  function getRadius(
    lines: { width: number; text: string }[],
    lineHeight: number
  ) {
    let radius = 0;

    for (let i = 0, n = lines.length; i < n; ++i) {
      const dy =
        (Math.abs(i - n / 2 + 0.5) + 0.5) * lineHeight;

      const dx = lines[i].width / 2;

      radius = Math.max(
        radius,
        Math.sqrt(dx ** 2 + dy ** 2)
      );
    }

    return radius;
  }

  return (
    <>
      <input type="text" onChange={handleChange} />

      <canvas ref={canvasRef}></canvas>
    </>
  );
}

I've also tried to follow @markE's answer from 2013. But the text doesn't seem to be made to scale with the circle's radius; it's the other way around in that example, with the radius being scaled to fit the text, as far as I was able to understand. And, for some reason, changing the example text yields a "text is undefined" error, and I have no idea why.

import React, { useEffect, useRef, useState } from "react";

export const TwoPI = 2 * Math.PI;

export function setupGridWidthHeightAndScale(
  width: number,
  height: number,
  canvas: HTMLCanvasElement
) {
  canvas.style.width = width + "px";
  canvas.style.height = height + "px";

  // Otherwise we get blurry lines
  // Reference: [Stack Overflow - Canvas drawings, like lines, are blurry](https://stackoverflow.com/a/59143499/4756173)
  const scale = window.devicePixelRatio;

  canvas.width = width * scale;
  canvas.height = height * scale;

  const canvasCtx = canvas.getContext("2d")!;

  canvasCtx.scale(scale, scale);
}

type CanvasProps = {
  width: number;
  height: number;
};

export function TextInCircle({
  width,
  height,
}: CanvasProps) {
  const [typedText, setTypedText] = useState("");

  const canvasRef = useRef<HTMLCanvasElement>(null);

  function getContext() {
    const canvas = canvasRef.current!;
    return canvas.getContext("2d")!;
  }

  useEffect(() => {
    const canvas = canvasRef.current!;
    setupGridWidthHeightAndScale(width, height, canvas);
  }, [width, height]);

  const textHeight = 15;
  const lineHeight = textHeight + 5;
  const cx = 150;
  const cy = 150;
  const r = 100;

  function handleChange(
    e: React.ChangeEvent<HTMLInputElement>
  ) {
    const ctx = getContext();

    const text = e.target.value; // This gives out an error
    // "'Twas the night before Christmas, when all through the house,  Not a creature was stirring, not even a mouse.  And so begins the story of the day of";

    const lines = initLines();
    wrapText(text, lines);

    ctx.beginPath();
    ctx.arc(cx, cy, r, 0, Math.PI * 2, false);
    ctx.closePath();
    ctx.strokeStyle = "skyblue";
    ctx.lineWidth = 2;
    ctx.stroke();
  }

  // pre-calculate width of each horizontal chord of the circle
  // This is the max width allowed for text

  function initLines() {
    const lines: any[] = [];

    for (let y = r * 0.9; y > -r; y -= lineHeight) {
      let h = Math.abs(r - y);

      if (y - lineHeight < 0) {
        h += 20;
      }

      let length = 2 * Math.sqrt(h * (2 * r - h));

      if (length && length > 10) {
        lines.push({
          y: y,
          maxLength: length,
        });
      }
    }

    return lines;
  }

  // draw text on each line of the circle

  function wrapText(text: string, lines: any[]) {
    const ctx = getContext();

    let i = 0;
    let words = text.split(" ");

    while (i < lines.length && words.length > 0) {
      let line = lines[i++];

      let lineData = calcAllowableWords(
        line.maxLength,
        words
      );

      ctx.fillText(
        lineData!.text,
        cx - lineData!.width / 2,
        cy - line.y + textHeight
      );

      words.splice(0, lineData!.count);
    }
  }

  // calculate how many words will fit on a line

  function calcAllowableWords(
    maxWidth: number,
    words: any[]
  ) {
    const ctx = getContext();

    let wordCount = 0;
    let testLine = "";
    let spacer = "";
    let fittedWidth = 0;
    let fittedText = "";

    const font = "12pt verdana";
    ctx.font = font;

    for (let i = 0; i < words.length; i++) {
      testLine += spacer + words[i];
      spacer = " ";

      let width = ctx.measureText(testLine).width;

      if (width > maxWidth) {
        return {
          count: i,
          width: fittedWidth,
          text: fittedText,
        };
      }

      fittedWidth = width;
      fittedText = testLine;
    }

    // every word fit on the line: return it instead of falling through
    // to undefined (the cause of the "text is undefined" error above)
    return {
      count: words.length,
      width: fittedWidth,
      text: fittedText,
    };
  }

  return (
    <>
      <input type="text" onChange={handleChange} />

      <canvas ref={canvasRef}></canvas>
    </>
  );
}


from Fit Text to Circle (With Scaling) in HTML Canvas, while Typing, with React

Bring draggable div to the front in React.js when user clicks on it

I want to bring a draggable box to the front when I click on it or drag it.

I don't know in advance the maximum number of such boxes because I have a button that creates a new draggable box when the user clicks on the button.

The button should always be on top no matter how many boxes there are, so its z-index should always be the greatest.

So these are my problems:

  1. How to bring a box to the front when the user clicks on it.
  2. How to make the button stay at the front.

And I am new to ReactJS.

This is what I currently have in MyComponent.js

import React, { useState }  from 'react';
import Draggable from 'react-draggable';

function MyComponent() {
    const [currZIndex, setZIndex] = useState(0);

    const bringForward = () => {
        setZIndex(currZIndex + 1);
    }

    return (
        <Draggable onMouseDown={bringForward}>
            <div className="mydiv" style={{ zIndex: currZIndex }}></div>
        </Draggable>
    );
}

The problem of my current implementation is that each component knows only its own z-index, but does not know what is the current highest z-index. And z-index increases whenever I click on any div, so this makes the z-indexes unnecessarily large.

If div1 has a z-index of 6 and div2 has a z-index of 2, I have to click div2 5 times to make its z-index become 7 in order to bring div2 to the front.

I still haven't come up with an idea on how to deal with the z-index of the button.

Fyr this is what I have in App.js

import React, { useState } from 'react';
import MyComponent from './MyComponent';

function App() {
  const [componentList, setComponentList] = useState([]);

  function addNewComponent() {
    setComponentList(componentList.concat(<MyComponent key={componentList.length} />));
  }

  return (
    <div>
      <button onClick={addNewComponent}>New Component</button>
      {componentList}
    </div>
  );
}


from Bring draggable div to the front in React.js when user clicks on it

how to pass an assertion in if condition using cypress without halting the execution in case of assertion failure

I am trying to pass an assertion to an if condition and execute one piece of logic when the condition is met and another when it fails.

Since the test fails as soon as the assertion fails, I am not able to achieve the desired result.

I tried the following...

if (cy.get("div").length > 0) {
    cy.log("print this")
} else {
    cy.log("print this")
}

or

if (cy.get("div").should('have.length.greaterThan', 0)) {
    cy.log("print this")
} else {
    cy.log("print this")
}


from how to pass an assertion in if condition using cypress without halting the execution in case of assertion failure