Saturday, 30 July 2022

Android 11 - WebRTC VoIP calls are not working with browser or desktop app. The same works with Android 10

Good day team!!!

I am developing a calling system; in this case we are working on Android, iOS, and web/desktop platforms.

Audio and video calls were working fine on Android, iOS, and web/desktop until I upgraded the Android API level to 30 (Android 11; targeting it is mandatory for app updates on the Play Store). After switching to API level 30, Android-to-Android and Android-to-iOS audio/video calls still work fine.

But Android to web, and vice versa, is not working.

I am getting the ICE connection state as CHECKING whenever I try to connect. After waiting for some time, it goes into the FAILED state.
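
For context, a minimal sketch of where that state can be observed on the Android side (assuming the standard org.webrtc PeerConnection.Observer callback that libjingle exposes; this is illustrative, not the poster's code):

@Override
public void onIceConnectionChange(PeerConnection.IceConnectionState state) {
    // logs CHECKING while candidate pairs are probed, then FAILED on timeout
    Log.d("WebRTC", "ICE connection state: " + state);
}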

If I switch the Android API level back to 29, everything works fine again.

What could be the issue with the new Android API level 30 for VoIP calls using WebRTC? I am using implementation "io.pristine:libjingle:11139@aar" as the library dependency for WebRTC.

Has anyone come across this issue? Please let me know; I am stuck on this part.

I am not able to share the code, as the codebase is huge; if a specific part is needed, let me know in a comment and I will provide it. Hope you can understand.

Thank you so much in advance



from Android 11 - WebRTC VoIP calls are not working with browser or desktop app. The same works with Android 10

Flask Load New Page After Streaming Data

I have a simple Flask app that takes a CSV upload, makes some changes, and streams the result back to the user's download folder as a CSV.

HTML Form

<form action="/uploader" method="POST" enctype="multipart/form-data">
    <label>CSV file</label><br>
    <input type="file" name="input_file" required><br><br>
    <!-- some other inputs -->
    <div id="submit_btn_container">
        <input id="submit_btn" onclick="this.form.submit(); this.disabled = true; this.value = 'Processing';" type="submit">
    </div>
</form>

PYTHON

from flask import Flask, request, Response, redirect, flash, render_template
from io import BytesIO
import pandas as pd

app = Flask(__name__)

@app.route('/uploader', methods = ['POST'])
def uploadFile():
    uploaded_file = request.files['input_file']
    data_df = pd.read_csv(BytesIO(uploaded_file.read()))
    # do stuff
    
    # stream the pandas df as a csv to the user download folder
    return Response(data_df.to_csv(index = False),
                            mimetype = "text/csv",
                            headers = {"Content-Disposition": "attachment; filename=result.csv"})

This works great and I see the file in my downloads folder.

However, I'd like to display a "Download Complete" page after it finishes.

How can I do this? Normally I use return redirect("some_url") to change pages.
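
For illustration, one common pattern (not from the original post; the route and template names are hypothetical) is to split the work into two steps: the POST stores the result and returns a normal "Download Complete" page, which then triggers the actual download from a second route:

import uuid

results = {}  # simple in-memory store; a temp file or cache is safer in production

@app.route('/uploader', methods=['POST'])
def uploadFile():
    uploaded_file = request.files['input_file']
    data_df = pd.read_csv(BytesIO(uploaded_file.read()))
    # do stuff
    token = str(uuid.uuid4())
    results[token] = data_df.to_csv(index=False)
    # render a real page; it starts the download via the second route
    return render_template('download_complete.html', token=token)

@app.route('/download/<token>')
def download(token):
    return Response(results.pop(token),
                    mimetype="text/csv",
                    headers={"Content-Disposition": "attachment; filename=result.csv"})

where download_complete.html shows the message and kicks off the download, e.g. with <meta http-equiv="refresh" content="0; url=/download/{{ token }}">.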



from Flask Load New Page After Streaming Data

How to dispatch multi-touch gestures using AccessibilityService (dispatchGesture)

As we know, from Android 10 onwards the OS supports multi-finger gestures. I want to develop an app that dispatches complex gestures for the user. I am able to capture the motion events and dispatch gestures that are made with only one finger.

But if the user uses multiple pointers (fingers) to make a gesture, I am able to capture them, but how can I dispatch them using the AccessibilityService dispatchGesture() function?
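
For what it's worth, a GestureDescription can contain several strokes, and strokes whose time ranges overlap are dispatched as simultaneous fingers; a minimal two-finger sketch (the paths are placeholders):

val finger1 = Path().apply { moveTo(200f, 600f); lineTo(200f, 300f) }
val finger2 = Path().apply { moveTo(600f, 600f); lineTo(600f, 300f) }

val gesture = GestureDescription.Builder()
    // both strokes start at t=0 and last 500 ms, so they run concurrently
    // and the system interprets them as a two-finger gesture
    .addStroke(GestureDescription.StrokeDescription(finger1, 0L, 500L))
    .addStroke(GestureDescription.StrokeDescription(finger2, 0L, 500L))
    .build()

dispatchGesture(gesture, null, null)

Reproducing a captured multi-pointer MotionEvent stream would then mean building one Path per pointer id from its historical coordinates.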

Any help would be most welcome. Thanks.



from How to dispatch multi-touch gestures using AccessibilityService (dispatchGesture)

How do I make my Android TV app relaunch after a crash

I'm building a corporate app for Android TV that I need to have always in the foreground. Every now and then the app will crash, along with the service that is supposed to relaunch it. Is there a best practice to ensure the app is always running? What I can't figure out is how to launch after a force stop. The app can be sideloaded, so we don't have to worry about app store approval.

The problem is that when I use a service, it also dies, since it is attached to the original process: https://developer.android.com/reference/android/app/Service

The same issue occurs with WorkManager: https://developer.android.com/topic/libraries/architecture/workmanager

Any ideas on an approach to basically check if the app is running and, if it isn't, start it up? Is there any other event that I can hook into to launch the app?
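
For the crash case specifically (a user-initiated force stop is different: Android deliberately blocks apps from auto-restarting after one), a common pattern is to schedule a relaunch from a default uncaught-exception handler before the process dies; a sketch, e.g. in Application.onCreate() (MainActivity is a placeholder):

Thread.setDefaultUncaughtExceptionHandler { _, _ ->
    val intent = Intent(applicationContext, MainActivity::class.java)
        .addFlags(Intent.FLAG_ACTIVITY_NEW_TASK or Intent.FLAG_ACTIVITY_CLEAR_TASK)
    val pendingIntent = PendingIntent.getActivity(
        applicationContext, 0, intent,
        PendingIntent.FLAG_CANCEL_CURRENT or PendingIntent.FLAG_IMMUTABLE
    )
    // relaunch about a second after the crash, then let the process die
    val alarmManager = getSystemService(Context.ALARM_SERVICE) as AlarmManager
    alarmManager.set(AlarmManager.RTC, System.currentTimeMillis() + 1000, pendingIntent)
    android.os.Process.killProcess(android.os.Process.myPid())
}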



from How do I make my Android TV app relaunch after a crash

Images not showing with Expo/React Native and Next

So I am trying to use the Native Base Solito starter:

https://github.com/GeekyAnts/nativebase-templates/tree/master/solito-universal-app-template-nativebase-typescript

This is the first time I've tried to work with Next, and I am trying to get image support with Expo.

Per the Expo documentation:

https://docs.expo.dev/guides/using-nextjs/

I should be able to just use next-images, which I am doing:

const { withNativebase } = require('@native-base/next-adapter')
const withImages = require('next-images')

module.exports = withNativebase(
  withImages({
    dependencies: [
      '@expo/next-adapter',
      'next-images',
      'react-native-vector-icons',
      'react-native-vector-icons-for-web',
      'solito',
      'app',
    ],
    nextConfig: {
      projectRoot: __dirname,
      reactStrictMode: true,
      webpack5: true,
      webpack: (config, options) => {
        config.resolve.alias = {
          ...(config.resolve.alias || {}),
          'react-native$': 'react-native-web',
          '@expo/vector-icons': 'react-native-vector-icons',
        }
        config.resolve.extensions = [
          '.web.js',
          '.web.ts',
          '.web.tsx',
          ...config.resolve.extensions,
        ]
        return config
      },
    },
  })
)

Despite this, my images are just not displaying in Next. Elements are generated with the styling I am applying to the image elements, but the images themselves are not displaying.

I tried both the universal routing import and a direct path:

import GrayBox from 'resources/images/graybox.png'
import Car from '../../../../packages/app/resources/images/car.png'

As well as several different image uses:

<Image
  source={require('../../../../packages/app/resources/images/car.png')}
  style=
  alt="test"
/>

<Image
  source={GrayBox}
  key={index}
  style=
  alt="test2"
/>

<Image
  source={Car}
  key={index}
  style=
  alt="test3"
/>

None of these images display.

I've tried both the React Native Image component:

https://reactnative.dev/docs/image

as well as the NativeBase-wrapped one.

Still nothing.

Any clue what in my configuration is causing the images not to show?

I suspect it's something in my next.config.js



from Images not showing with Expo/React Native and Next

Matching up the output of scipy linkage() and dendrogram()

I'm drawing dendrograms from scratch using the Z and P outputs of code like the following (see below for a fuller example):

Z = scipy.cluster.hierarchy.linkage(...)
P = scipy.cluster.hierarchy.dendrogram(Z, ..., no_plot=True)

and in order to do what I want, I need to match up a given index in P["icoord"]/P["dcoord"] (which contain the coordinates for drawing the cluster linkages in a plot) with the corresponding index in Z (which contains the information about which data elements are in which cluster), or vice versa. Unfortunately, it does not seem that, in general, the positions of clusters in P["icoord"]/P["dcoord"] simply match up with the corresponding positions in Z (see the output of the code below for proof).

The Question: what is a way that I could match them up? I need either a function Z_i = f(P_coords_i) or its inverse P_coords_i = g(Z_i) so that I can iterate over one list and easily access the corresponding elements in the other.


The code below generates 26 random points, labels them with the letters of the alphabet, and then prints out the letters corresponding to the clusters represented by the rows of Z, followed by the points in P where dcoord is zero (i.e. the leaf nodes), to show that in general they don't match up: for example, the first element of Z corresponds to cluster iu, but the first set of points in P["icoord"]/P["dcoord"] corresponds to drawing the cluster for jy, and that of iu doesn't come until a few elements later.
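
(For what it's worth, one mapping that works whenever the merge heights are unique, as they generically are for random data: the top of each U drawn from P["dcoord"] equals the height Z[i, 2] of the corresponding linkage row, so heights can serve as keys. A sketch:)

# map each P["icoord"]/P["dcoord"] entry back to its row in Z via the link height
height_to_row = {z[2]: i for i, z in enumerate(Z)}      # assumes unique heights
P_to_Z = [height_to_row[d[1]] for d in P["dcoord"]]     # d[1] == d[2] == merge height
# P_to_Z[j] is the index into Z of the j-th coordinate set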

import numpy as np
from scipy.cluster import hierarchy
from scipy.spatial import distance
import string

# let's make some random data
np.random.seed(1)
data = np.random.multivariate_normal([0,0],[[5, 0], [0, 1]], 26)
letters = list(string.ascii_lowercase)
X = distance.pdist(data)


# here's the code I need to run for my use-case
Z = hierarchy.linkage(X)
P = hierarchy.dendrogram(Z, labels=letters, no_plot=True)


# let's look at the order of Z
print("Z:")

clusters = letters.copy()

for c1, c2, _, _ in Z:
    clusters.append(clusters[int(c1)]+clusters[int(c2)])
    print(clusters[-1])

# now let's look at the order of P["icoord"] and P["dcoord"]
print("\nP:")

def lookup(y, x):
    return "?" if y else P["ivl"][int((x-5)/10)]

for ((x1,x2,x3,x4),(y1,y2,y3,y4)) in zip(P["icoord"], P["dcoord"]):
     print(lookup(y1, x1)+lookup(y4, x4))

Output:

Z:
iu
ez
niu
jy
ad
pr
bq
prbq
wniu
gwniu
ezgwniu
hm
ojy
prbqezgwniu
ks
ojyprbqezgwniu
vks
ojyprbqezgwniuvks
lhm
adlhm
fadlhm
cfadlhm
tcfadlhm
ojyprbqezgwniuvkstcfadlhm
xojyprbqezgwniuvkstcfadlhm

P:
jy
o?
pr
bq
??
ez
iu
n?
w?
g?
??
??
??
ks
v?
??
ad
hm
l?
??
f?
c?
t?
??
x?


from Matching up the output of scipy linkage() and dendrogram()

Friday, 29 July 2022

Can I use pythonnet without .AddReference?

The usual way to integrate pythonnet in your project is the following:

import clr
clr.AddReference('My.Assembly')
import My.Assembly

My.Assembly.DoSomething()

What if I don't want the assembly namespace to be imported and available globally? Is there any way to achieve something like this:

my_assembly = magic_loader('My.Assembly.dll')
my_assembly.DoSomething()
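
For illustration, one approximation (a sketch; the assembly is still loaded into the shared CLR, it just never appears in your module's globals) wraps AddReference and importlib:

import clr
import importlib

def magic_loader(assembly_name):
    # pythonnet hooks Python's import machinery once the assembly is referenced,
    # so importlib can hand back the namespace as an ordinary module object
    clr.AddReference(assembly_name)
    return importlib.import_module(assembly_name)

my_assembly = magic_loader('My.Assembly')
my_assembly.DoSomething()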


from Can I use pythonnet without .AddReference?

Will Nest.js dynamic cron jobs be deleted after a restart or shutdown?

So I developed a system with Nest.js that can create a dynamic cron job from a user's input in the frontend application. I store this data in my database, and at the same time I create the job on the server with the dynamic schedule module API. Today I was wondering what would happen to my cron jobs if my server were shut down or restarted itself. Since my jobs aren't declarative and are created at runtime, I think that maybe when my server starts I should create the cron jobs again? I'm not sure if these get stored in memory or something, since it's not in the documentation.

My concern, in fewer words, is:

Should I recreate my jobs using the information from the database once the server starts itself? Why or why not?
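
For what it's worth, SchedulerRegistry keeps dynamic jobs in memory, so they do not survive a restart; a sketch of recreating them on startup from the database (the entity shape and loadJobsFromDatabase are hypothetical):

import { Injectable, OnModuleInit } from '@nestjs/common';
import { SchedulerRegistry } from '@nestjs/schedule';
import { CronJob } from 'cron';

@Injectable()
export class JobBootstrapService implements OnModuleInit {
  constructor(private schedulerRegistry: SchedulerRegistry) {}

  async onModuleInit() {
    // re-register every persisted job when the server boots
    const jobs = await this.loadJobsFromDatabase();
    for (const job of jobs) {
      const cronJob = new CronJob(job.cronExpression, () => this.execute(job.name));
      this.schedulerRegistry.addCronJob(job.name, cronJob);
      cronJob.start();
    }
  }

  private async loadJobsFromDatabase(): Promise<{ name: string; cronExpression: string }[]> {
    return []; // hypothetical: fetch the stored job definitions
  }

  private execute(name: string) {
    // hypothetical job body
  }
}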



from Will Nest.js dynamic cron jobs be deleted after a restart or shutdown?

Customized Underline with gradient API in React

I want to recreate this Underline Effect from this Codepen with React and Typescript

The Codepen: https://codepen.io/krakruhahah/pen/jOzwXww

I think my issue in the code down below is the interface. I started to declare my types, but it still does not recognize them; it says they are any. max is declared as a number but still shows up as any, and I am unsure why. The functions are described in comments.

tsx:

import React from 'react';
import Typography from '@mui/material/Typography';
import { Box } from '@mui/material';

interface Props {
   max: number;
}

const styles = {
    body: {
        width: "80%",
        margin: "10vw auto",
    },

    heading: {
        fontFamily: "Playfair Display, serif",
        fontSize: "10vw",
    },

    subheading: {
        fontFamily: "Open Sans, sans-serif",
        fontSize: "1em",
        lineHeight: "1.5",
    },

    "@media screen and (min-width: 40em)": {
        body: {
            width: "50%",
        },
        heading: {
            fontSize: "6vw",
        },
        subheading: {
            fontSize: "1.75vw",
        },
    },

    "underline--magical": {
        backgroundImage: "linear-gradient(120deg, #84fab0 0%, #8fd3f4 100%)",
        backgroundRepeat: "no-repeat",
        backgroundSize: "100% 0.2em",
        backgroundPosition: "0 88%",
        transition: "backgroundSize 0.25s ease-in",
        "&:hover": {
            backgroundSize: "100% 88%",
        },
    },
};

function Effect(props: Props) {

    // VARIABLES
    const magicalUnderlines = Array.from(document.querySelectorAll('.underline--magical'));

    const gradientAPI = 'https://gist.githubusercontent.com/wking-io/3e116c0e5675c8bcad8b5a6dc6ca5344/raw/4e783ce3ad0bcd98811c6531e40256b8feeb8fc8/gradient.json';

    // HELPER FUNCTIONS

    // 1. Get random number in range. Used to get random index from array.
    const randNumInRange = max => Math.floor(Math.random() * (max - 1));

    // 2. Merge two separate array values at the same index to
    // be the same value in new array.
    const mergeArrays = (arrOne, arrTwo) => arrOne
        .map((item, i) => `${item} ${arrTwo[i]}`)
        .join(', ');

    // 3. Curried function to add a background to array of elms
    const addBackground = (elms) => (color) => {
        elms.forEach(el => {
            el.style.backgroundImage = color;
        });
    }

    // 4. Function to get data from API
    const getData = async (url): Promise<XMLHttpRequest> => {
        const response = await fetch(url);
        const data = await response.json();
        return data.data;
    }

    // 5. Partial Application of addBackground to always apply
    // background to the magicalUnderlines constant
    const addBackgroundToUnderlines = addBackground(magicalUnderlines);

    // GRADIENT FUNCTIONS

    // 1. Build CSS formatted linear-gradient from API data
    const buildGradient = (obj) => `linear-gradient(${obj.direction}, ${mergeArrays(obj.colors, obj.positions)})`;

    // 2. Get single gradient from data pulled in array and
    // apply single gradient to a callback function
    const applyGradient = async (url, callback): Promise<XMLHttpRequest> => {
        const data = await getData(url);
        const gradient = buildGradient(data[randNumInRange(data.length)]);
        callback(gradient);
    }

    // RESULT
    applyGradient(gradientAPI, addBackgroundToUnderlines);

    return (
        <Box>
            <Typography sx={styles.heading}>
                Look At This <span style={styles['underline--magical']}>Pretty</span> Underline
            </Typography>
            <Typography sx={styles.subheading}>
                Wow this one is super incredibly cool, and this{' '}
                <span style={styles['underline--magical']}>one is on Multiple Lines!</span> I wish I had found this like thirty
                projects ago when I was representing the lollipop guild.
            </Typography>
        </Box>
    );
}
export { Effect };
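
For what it's worth, the any here most likely comes from the untyped arrow-function parameters rather than from the Props interface (which only types the component's props); annotating the parameters removes the implicit any, e.g.:

const randNumInRange = (max: number): number => Math.floor(Math.random() * (max - 1));

const mergeArrays = (arrOne: string[], arrTwo: string[]): string => arrOne
    .map((item, i) => `${item} ${arrTwo[i]}`)
    .join(', ');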



from Customized Underline with gradient API in React

AuthCanceled at /oauth/complete/github/ (Authentication process canceled)

I'm working on a Django app, and when I try to log in with GitHub, this error occurs:

AuthCanceled at /oauth/complete/github/

Authentication process canceled

Request Method:     GET
Request URL:    https://my-site.com/oauth/complete/github/?code=xxxxxxxxxxxxxx&state=xxxxxxxxxxxxxx
Django Version:     3.0.5
Exception Type:     AuthCanceled
Exception Value:    Authentication process canceled

Exception Location: /usr/local/lib/python3.7/site-packages/social_core/utils.py in wrapper, line 254
Python Executable:  /usr/local/bin/python
Python Version:     3.7.2
Python Path:        ['/code',
                     '/usr/local/bin',
                     '/usr/local/lib/python37.zip',
                     '/usr/local/lib/python3.7',
                     '/usr/local/lib/python3.7/lib-dynload',
                     '/usr/local/lib/python3.7/site-packages']

I have set (correctly, I think) SOCIAL_AUTH_GITHUB_KEY and SOCIAL_AUTH_GITHUB_SECRET in my settings.py (adding https://my-site.com/oauth/ as the Authorization callback URL on https://github.com/settings/applications/XXXXX).

Any idea of where the problem is?


EDIT:

I used /oauth/complete/github because previously I had had this error:

Using the URLconf defined in my-site.urls, Django tried these URL patterns, in this order: 
1. home/ 
2. login/ [name='login'] 
3. logout/ [name='logout'] 
4. oauth/
...


from AuthCanceled at /oauth/complete/github/ (Authentication process canceled)

Thursday, 28 July 2022

get idxmax rolling for each group and each row?

data: https://github.com/zero-jack/data/blob/main/hy_data.csv#L7

Goal

  • get the idxmax of the last n rows for each group.

Try

df = df.assign(
    l6d_highest_date=lambda x: x.groupby('hy_code')['high'].transform(lambda x: x.rolling(6).idxmax())
)


AttributeError: 'Rolling' object has no attribute 'idxmax'

Note: week_date is the index.
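
For illustration, since Rolling has no idxmax, one workaround is to take the positional argmax per window and shift it to an absolute position before mapping back to the index labels (a sketch):

import numpy as np
import pandas as pd

def rolling_idxmax(s: pd.Series, window: int) -> pd.Series:
    # offset of the max within each window (NaN while the window is incomplete)
    pos = s.rolling(window).apply(np.argmax, raw=True)
    # shift the in-window offset to an absolute integer position
    abs_pos = pos + np.arange(len(s)) - window + 1
    return abs_pos.map(lambda p: s.index[int(p)] if pd.notna(p) else None)

df = df.assign(
    l6d_highest_date=df.groupby('hy_code')['high'].transform(lambda s: rolling_idxmax(s, 6))
)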



from get idxmax rolling for each group and each row?

How to replace cropped rectangle in opencv?

I have managed to crop a bounding box with text. E.g., given this image:

[image: input with text]

I'm able to extract the following box:

[image: cropped text box]

with this code:

import re
import shutil

from IPython.display import Image

import requests
import pytesseract, cv2

"""https://www.geeksforgeeks.org/text-detection-and-extraction-using-opencv-and-ocr/"""
# Preprocessing the image starts
# Convert the image to gray scale
img = cv2.imread('img.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Performing OTSU threshold
ret, thresh1 = cv2.threshold(gray, 0, 255, cv2.THRESH_OTSU | cv2.THRESH_BINARY_INV)

# Specify structure shape and kernel size.
# Kernel size increases or decreases the area
# of the rectangle to be detected.
# A smaller value like (10, 10) will detect
# each word instead of a sentence.
rect_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (18, 18))

# Applying dilation on the threshold image
dilation = cv2.dilate(thresh1, rect_kernel, iterations = 1)

# Finding contours
contours, hierarchy = cv2.findContours(dilation, cv2.RETR_EXTERNAL,
                                                 cv2.CHAIN_APPROX_NONE)

# Creating a copy of image
im2 = img.copy()


for cnt in contours:
    x, y, w, h = cv2.boundingRect(cnt)
    # Drawing a rectangle on copied image
    rect = cv2.rectangle(im2, (x, y), (x + w, y + h), (0, 255, 0), 2)
    # Cropping the text block for giving input to OCR
    cropped = im2[y:y + h, x:x + w]
    
cv2.imwrite('image-notxt.png', cropped)
Image(filename='image-notxt.png',  width=200)

Part 1: How do I replace the cropped box and put back a clear text box? E.g., to get something like:

[image: text box cleared]

I've tried:

    for cnt in contours:
        x, y, w, h = cv2.boundingRect(cnt)
        # Drawing a rectangle on copied image
        rect = cv2.rectangle(im2, (x, y), (x + w, y + h), (0, 255, 0), 2)
        # Cropping the text block for giving input to OCR
        cropped = im2[y:y + h, x:x + w]
        text = pytesseract.image_to_string(cropped).strip('\x0c').strip()
        text = re.sub(' +', ' ', text.replace('\n', ' ')).strip()
        if text:
            # White out the cropped box.
            cropped.fill(255)
            # Create the image with the translation.
            cv2.putText(img=cropped, text="foobar", org=(12, 15), fontFace=cv2.FONT_HERSHEY_TRIPLEX, fontScale=0.3, color=(0, 0, 0),thickness=1)
            cv2.imwrite('image-notxt.png', cropped)
            Image(filename='image-notxt.png',  width=200)

That managed to white out the cropped box and insert the text, like this:

[image: whited-out box with inserted text]

Part 2: How do I create an OpenCV text box rectangle with the same size as the cropped box? E.g., given the string foobar, how do I get a final image like this:

[image: desired final result]
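
For part 2, a possible approach (a sketch, using cv2.getTextSize) is to search for the largest font scale whose rendered text still fits inside the cropped box:

def put_text_fitting(box, text, font=cv2.FONT_HERSHEY_TRIPLEX, thickness=1):
    h, w = box.shape[:2]
    scale = 2.0
    while scale > 0.1:
        (tw, th), baseline = cv2.getTextSize(text, font, scale, thickness)
        if tw <= w and th + baseline <= h:
            break
        scale -= 0.05
    # anchor at the bottom-left, leaving room for descenders
    cv2.putText(box, text, (0, h - baseline), font, scale, (0, 0, 0), thickness)

put_text_fitting(cropped, "foobar")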



from How to replace cropped rectangle in opencv?

Checking if finger is over certain view not working in Android

I am working on a paint app with the following layout:

[image: paint app layout]

For the paint app, I detect touch events on the Canvas using onTouchEvent. I have one problem: I also want to detect touch events in which the user begins the swipe on the root and then moves over the Canvas.

To achieve this, I added the following code:

binding.root.setOnTouchListener { _, motionEvent ->
    val hitRect = Rect()
    binding.activityCanvasCardView.getHitRect(hitRect)

    if (hitRect.contains(motionEvent.rawX.toInt(), motionEvent.rawY.toInt())) {
        binding.activityCanvasPixelGridView.onTouchEvent(motionEvent)
    }
    true
}

It kind of works, but it's not detecting the touch events over the canvas (wrapped in a CardView) properly; it's like there's a sort of delay:

[screen capture of the issue]

XML code:

<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:background="@color/fragment_background_color_daynight"
    tools:context=".activities.canvas.CanvasActivity">
    <!-- This view is here to ensure that when the user zooms in, there is no overlap -->
    <View
        android:elevation="20dp"
        android:outlineProvider="none"
        android:id="@+id/activityCanvas_topView"
        android:layout_width="0dp"
        android:layout_height="90dp"
        android:background="@color/fragment_background_color_daynight"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toTopOf="parent" />

    <!-- The ColorSwitcherView is a view I created which helps
         simplify the code for controlling the user's primary/secondary color -->
    <com.therealbluepandabear.pixapencil.customviews.colorswitcherview.ColorSwitcherView
        android:id="@+id/activityCanvas_colorSwitcherView"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_marginEnd="16dp"
        android:elevation="20dp"
        android:outlineProvider="none"
        app:isPrimarySelected="true"
        app:layout_constraintEnd_toEndOf="@+id/activityCanvas_topView"
        app:layout_constraintTop_toTopOf="@+id/activityCanvas_colorPickerRecyclerView" />

    <!-- The user's color palette data will be displayed in this RecyclerView -->
    <androidx.recyclerview.widget.RecyclerView
        android:elevation="20dp"
        android:outlineProvider="none"
        android:id="@+id/activityCanvas_colorPickerRecyclerView"
        android:layout_width="0dp"
        android:layout_height="50dp"
        android:layout_marginStart="16dp"
        android:layout_marginEnd="16dp"
        android:orientation="horizontal"
        app:layoutManager="androidx.recyclerview.widget.LinearLayoutManager"
        app:layout_constraintBottom_toBottomOf="@+id/activityCanvas_topView"
        app:layout_constraintEnd_toStartOf="@+id/activityCanvas_colorSwitcherView"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toTopOf="@+id/activityCanvas_primaryFragmentHost"
        tools:listitem="@layout/color_picker_layout" />

    <!-- This FrameLayout is crucial when it comes to the calculation of the TransparentBackgroundView and PixelGridView -->
    <FrameLayout
        android:id="@+id/activityCanvas_distanceContainer"
        android:layout_width="0dp"
        android:layout_height="0dp"
        app:layout_constraintBottom_toTopOf="@+id/activityCanvas_tabLayout"
        app:layout_constraintEnd_toEndOf="@+id/activityCanvas_primaryFragmentHost"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toBottomOf="@+id/activityCanvas_topView" />

    <!-- This gives both views (the PixelGridView and TransparentBackgroundView) a nice drop shadow -->
    <com.google.android.material.card.MaterialCardView
        android:id="@+id/activityCanvas_cardView"
        style="@style/activityCanvas_canvasFragmentHostCardViewParent_style"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        app:layout_constraintBottom_toTopOf="@+id/activityCanvas_tabLayout"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toBottomOf="@+id/activityCanvas_topView">
        <!-- At runtime, the width and height of the TransparentBackgroundView and PixelGridView will be calculated -->
       <com.therealbluepandabear.pixapencil.customviews.transparentbackgroundview.TransparentBackgroundView
            android:id="@+id/activityCanvas_transparentBackgroundView"
            android:layout_width="0dp"
            android:layout_height="0dp" />

        <com.therealbluepandabear.pixapencil.customviews.pixelgridview.PixelGridView
            android:id="@+id/activityCanvas_pixelGridView"
            android:layout_width="0dp"
            android:layout_height="0dp" />
    </com.google.android.material.card.MaterialCardView>

    <!-- The primary tab layout -->
    <com.google.android.material.tabs.TabLayout
        android:elevation="20dp"
        android:outlineProvider="none"
        android:id="@+id/activityCanvas_tabLayout"
        android:layout_width="0dp"
        android:layout_height="wrap_content"
        android:tabStripEnabled="false"
        app:layout_constraintBottom_toTopOf="@+id/activityCanvas_viewPager2"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent">
        <com.google.android.material.tabs.TabItem
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="@string/activityCanvas_tab_tools_str" />

        <com.google.android.material.tabs.TabItem
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="@string/activityCanvas_tab_filters_str" />

        <com.google.android.material.tabs.TabItem
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="@string/activityCanvas_tab_color_palettes_str" />

        <com.google.android.material.tabs.TabItem
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="@string/activityCanvas_tab_brushes_str" />
    </com.google.android.material.tabs.TabLayout>

    <!-- This view allows move functionality -->
    <View
        android:elevation="20dp"
        android:outlineProvider="none"
        android:id="@+id/activityCanvas_moveView"
        android:layout_width="0dp"
        android:layout_height="0dp"
        android:background="@android:color/transparent"
        app:layout_constraintBottom_toBottomOf="@+id/activityCanvas_distanceContainer"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toBottomOf="@+id/activityCanvas_topView" />

    <!-- The tools, palettes, brushes, and filters fragment will be displayed inside this ViewPager -->
    <androidx.viewpager2.widget.ViewPager2
        android:elevation="20dp"
        android:outlineProvider="none"
        android:id="@+id/activityCanvas_viewPager2"
        android:layout_width="0dp"
        android:layout_height="110dp"
        app:layout_constraintBottom_toBottomOf="@+id/activityCanvas_primaryFragmentHost"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent" />

    <!-- This CoordinatorLayout is responsible for ensuring that the app's snackbars can be swiped -->
    <androidx.coordinatorlayout.widget.CoordinatorLayout
        android:elevation="20dp"
        android:outlineProvider="none"
        android:id="@+id/activityCanvas_coordinatorLayout"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent" />

    <!-- All of the full page fragments will be displayed in this fragment host -->
    <FrameLayout
        android:elevation="20dp"
        android:outlineProvider="none"
        android:id="@+id/activityCanvas_primaryFragmentHost"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toTopOf="parent" />
</androidx.constraintlayout.widget.ConstraintLayout>

How can I properly detect touch events over a view?
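
For reference, one thing worth checking (an assumption on my part, not confirmed in the post): getHitRect() returns a rectangle relative to the view's parent, while rawX/rawY are screen coordinates, which would produce exactly this kind of offset. A sketch of hit-testing in screen space instead:

binding.root.setOnTouchListener { _, motionEvent ->
    val location = IntArray(2)
    binding.activityCanvasCardView.getLocationOnScreen(location)
    val onScreen = Rect(
        location[0],
        location[1],
        location[0] + binding.activityCanvasCardView.width,
        location[1] + binding.activityCanvasCardView.height
    )
    if (onScreen.contains(motionEvent.rawX.toInt(), motionEvent.rawY.toInt())) {
        binding.activityCanvasPixelGridView.onTouchEvent(motionEvent)
    }
    true
}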



from Checking if finger is over certain view not working in Android

This document requires 'TrustedScriptURL' assignment

After adding require-trusted-types-for 'script'; to my Content-Security-Policy header, which was introduced in Chrome 83 Beta to help lock down DOM XSS injection sinks,

my website becomes a blank page when I open it. I get many errors of these three kinds in my console (Chrome version 83.0.4103.61):

This document requires 'TrustedScript' assignment.

This document requires 'TrustedScriptURL' assignment.

TypeError: Failed to set the 'src' property on 'HTMLScriptElement': This document requires 'TrustedScriptURL' assignment.

I have read the article Prevent DOM-based cross-site scripting vulnerabilities with Trusted Types. However, the article only explains how to handle TrustedHTML, not TrustedScript or TrustedScriptURL.
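
For illustration, the same policy mechanism covers all three types; a sketch (the policy name and validation rules are placeholders, not a recommendation):

if (window.trustedTypes && trustedTypes.createPolicy) {
  const policy = trustedTypes.createPolicy('myPolicy', {
    createHTML: (html) => html,     // sanitize as appropriate
    createScript: (src) => src,     // validate inline script text
    createScriptURL: (url) => {
      // example rule: only allow same-origin script URLs
      const parsed = new URL(url, location.origin);
      if (parsed.origin !== location.origin) {
        throw new TypeError('Blocked script URL: ' + url);
      }
      return parsed.toString();
    },
  });

  const script = document.createElement('script');
  script.src = policy.createScriptURL('/main.js'); // yields a TrustedScriptURL
  document.head.appendChild(script);
}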

Any guide will be helpful. Thanks!



from This document requires 'TrustedScriptURL' assignment

Check whether a URL is already open or not in the background script

I send a message using chrome.runtime.sendMessage({}); from my content.js, and it is received by the background script, which opens an HTML file:

background.js

chrome.runtime.onMessage.addListener(function (request, sender, sendResponse) {
    chrome.tabs.create({url: 'popup.html'});
});

If popup.html is already open, I don't want to open it again, so there should be an if condition to check whether it is already open.

But what do I put inside the if condition before chrome.tabs.create({url: 'popup.html'}); in the background script?

Please note that I am looking for a solution inside the background script.

Please provide the solution according to the scripts given in this answer.
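
For reference, a sketch of one way to do the check inside the background script (assuming the "tabs" permission is declared in the manifest so the URL filter can match):

chrome.runtime.onMessage.addListener(function (request, sender, sendResponse) {
    chrome.tabs.query({ url: chrome.runtime.getURL('popup.html') }, function (tabs) {
        if (tabs.length > 0) {
            // already open: focus the existing tab instead of creating another
            chrome.tabs.update(tabs[0].id, { active: true });
        } else {
            chrome.tabs.create({ url: 'popup.html' });
        }
    });
});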



from Check whether a URL is already open or not in the background script

Wednesday, 27 July 2022

Extract Video Frames from SDP Output

Does anyone know how to extract image frames from an SDP video output? I'm using a Nest battery camera. The wired version gave me an RTSP stream, from which it was easy to extract frames. However, the battery version gives me an SDP output, which is hard to make sense of. I've looked at a few posts on Stack Overflow, but none seemed too promising:

How to use the answerSDP returned from sdm.devices.commands.CameraLiveStream.GenerateWebRtcStream to establish a stream with google nest cam

Executing FFmpeg recording using in-line SDP

Even being able to stream the SDP to an mp4 file using ffplay would be a nice start, but ultimately I would like to run a Python script to extract frames from the SDP output.

I must admit, SDP (Session Description Protocol) seems pretty long and complicated compared to working with RTSP streams. Is there any way to simply convert an SDP stream to an RTSP stream?

https://andrewjprokop.wordpress.com/2013/09/30/understanding-session-description-protocol-sdp/

Thanks! Jacob

SDP output looks something like this:

v=0\r\no=- 0 2 IN IP4 127.0.0.1\r\ns=-\r\nt=0 0\r\na=group:BUNDLE 0 2 1\r\na=msid-semantic: WMS 16733765853514488918/633697675 virtual-6666\r\na=ice-lite\r\nm=audio 19305 UDP/TLS/RTP/SAVPF 111\r\nc=IN IP4 142.250.9.127\r\na=rtcp:9 IN IP4 0.0.0.0\r\na=candidate: 1 udp 2113939711 2607:f8b0:4002:c11::7f 19305 typ host generation 0\r\na=candidate: 1 tcp 2113939710 2607:f8b0:4002:c11::7f 19305 typ host tcptype passive generation 0\r\na=candidate: 1 ssltcp 2113939709 2607:f8b0:4002:c11::7f 443 typ host generation 0\r\na=candidate: 1 udp 2113932031 142.250.9.127 19305 typ host generation 0\r\na=candidate: 1 tcp 2113932030 142.250.9.127 19305 typ host tcptype passive generation 0\r\na=candidate: 1 ssltcp 2113932029 142.250.9.127 443 typ host generation 0\r\na=ice-ufrag:UVDO0GOJASABT95E\r\na=ice-pwd:FRILJDCJZCH+51YNWDGZIN0K\r\na=fingerprint:sha-256 24:53:14:34:59:50:89:52:72:58:04:57:71:BB:C4:89:91:3A:52:EF:C0:5A:A5:EC:B5:51:64:80:AC:13:89:8A\r\na=setup:passive\r\na=mid:0\r\na=extmap:1 urn:ietf:params:rtp-hdrext:ssrc-audio-level\r\na=extmap:3 http://www.ietf.org/id/draft-holmer-rmcat-transport-wide-cc-extensions-01\r\na=sendrecv\r\na=msid:virtual-6666 virtual-6666\r\na=rtcp-mux\r\na=rtpmap:111 opus/48000/2\r\na=rtcp-fb:111 transport-cc\r\na=fmtp:111 minptime=10;useinbandfec=1\r\na=ssrc:6666 cname:6666\r\nm=video 9 UDP/TLS/RTP/SAVPF 108 109\r\nc=IN IP4 0.0.0.0\r\na=rtcp:9 IN IP4 0.0.0.0\r\na=ice-ufrag:UVDO0GOJASABT95E\r\na=ice-pwd:FRILJDCJZCH+51YNWDGZIN0K\r\na=fingerprint:sha-256 24:53:14:34:59:50:89:52:72:58:04:57:71:BB:C4:89:91:3A:52:EF:C0:5A:A5:EC:B5:51:64:80:AC:13:89:8A\r\na=setup:passive\r\na=mid:1\r\na=extmap:2 http://www.webrtc.org/experiments/rtp-hdrext/abs-send-time\r\na=extmap:13 urn:3gpp:video-orientation\r\na=extmap:3 http://www.ietf.org/id/draft-holmer-rmcat-transport-wide-cc-extensions-01\r\na=sendrecv\r\na=msid:16733765853514488918/633697675 16733765853514488918/633697675\r\na=rtcp-mux\r\na=rtpmap:108 H264/90000\r\na=rtcp-fb:108 transport-cc\r\na=rtcp-fb:108 ccm fir\r\na=rtcp-fb:108 nack\r\na=rtcp-fb:108 nack pli\r\na=rtcp-fb:108 goog-remb\r\na=fmtp:108 level-asymmetry-allowed=1;packetization-mode=1;profile-level-id=42e01f\r\na=rtpmap:109 rtx/90000\r\na=fmtp:109 apt=108\r\na=ssrc-group:FID 633697675 3798748564\r\na=ssrc:633697675 cname:633697675\r\na=ssrc:3798748564 cname:633697675\r\nm=application 9 DTLS/SCTP 5000\r\nc=IN IP4 0.0.0.0\r\na=ice-ufrag:UVDO0GOJASABT95E\r\na=ice-pwd:FRILJDCJZCH+51YNWDGZIN0K\r\na=fingerprint:sha-256 24:53:14:34:59:50:89:52:72:58:04:57:71:BB:C4:89:91:3A:52:EF:C0:5A:A5:EC:B5:51:64:80:AC:13:89:8A\r\na=setup:passive\r\na=mid:2\r\na=sctpmap:5000 webrtc-datachannel 1024\r\n



from Extract Video Frames from SDP Output

How do I get the actual size in bytes for a number and a string in JavaScript in a browser environment?

I am trying to get the actual size (in bytes) of a number and a string in browsers, e.g. Chrome.

I learned that in JavaScript numbers are represented in double precision, taking up 64 bits, and strings are sequences of UTF-16 code units, so each character takes either 2 bytes or 4 bytes.

I first tried to use new Blob, but it encodes a string's characters as UTF-8, not UTF-16. And I know there is a Buffer.from API in Node, but it is not available in a browser environment.

My question is: how can I get the actual size of a number and a string in bytes in a browser, e.g. Chrome?
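
For what it's worth, under those representations the sizes follow directly from the spec, so a rough calculation needs no API at all (a sketch; engines may use more compact internal representations, e.g. Latin-1 strings or small integers):

const numberSizeInBytes = 8;                   // IEEE 754 double precision
const stringSizeInBytes = (s) => s.length * 2; // .length counts UTF-16 code units

console.log(stringSizeInBytes('hello')); // 10
console.log(stringSizeInBytes('𝌆'));     // 4: one astral character is a surrogate pair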



from How do I get the actual size in bytes for a number and a string in JavaScript in a browser environment?

Firebase security rules checks for the incoming request and database collection email

I'm creating a Flutter todo app that allows users to add a task for themselves or send it to another user via their account email.

My Firebase database has the following fields: title, isChecked, recipient, sender, senderUID

[screenshot of a document's fields]

My current Firebase security rules are as follows:

rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {

    function isOwnerOrAdmin(reminder, auth) {
      let isOwner = auth.token.email == reminder.recipient;
      let isAdmin = auth.token.isAdmin == true;
      return isOwner || isAdmin;
    }

    match /reminders/{reminder} {
      allow create: if
            // User is author
            request.auth.uid == request.resource.data.senderUID;

      allow update:
            // User is recipient or admin
            if isOwnerOrAdmin(resource.data, request.auth) &&
            // only 'title' and 'isChecked' may be modified
            request.resource.data.diff(resource.data).unchangedKeys().hasAll([
              "recipient",
              "sender",
              "senderUID"
              ]);

      // Can be read or deleted by recipient or admin
      allow read, delete: if isOwnerOrAdmin(resource.data, request.auth);
    }
  }
}

In my code, I'm using the following to make updates to a task:

var collection = _firestore.collection('reminders');
var snapshot = await collection.where('title', isEqualTo: task.title).where('recipient', isEqualTo: loggedInUser.email.toString()).get();
await snapshot.docs.first.reference.update({'isChecked': task.isChecked});

Similarly, the following code is used to delete a task

var collection = _firestore.collection('reminders');
var snapshot = await collection.where('title', isEqualTo: task.title).where('recipient', isEqualTo: loggedInUser.email.toString()).get();
await snapshot.docs.first.reference.delete();

Update and delete do not work with my new set of rules, failing with INSUFFICIENT PERMISSION in the output. What did I do wrong? I can only create a new document, but can't update or delete it (via code).



from Firebase security rules checks for the incoming request and database collection email

How to find the actual sentence from sentence transformer?

I am trying to do semantic search with sentence transformer and faiss.

I am able to generate embeddings from the corpus and perform a query with the query vector xq. But I can't tell which corpus sentences the results correspond to.

import faiss
from sentence_transformers import SentenceTransformer, util
model = SentenceTransformer("flax-sentence-embeddings/st-codesearch-distilroberta-base")

def get_embeddings(code_snippets: str):
    return model.encode(code_snippets)

def build_vector_database(atlas_datapoints):
    dimension = 768  # dimensions of each vector

    corpus = ["tom loves candy",
                    "this is a test"
                    "hello world"
                    "jerry loves programming"]

    code_snippet_emddings = get_embeddings(corpus)
    print(code_snippet_emddings.shape)

    d = code_snippet_emddings.shape[1]
    index = faiss.IndexFlatL2(d)
    print(index.is_trained)

    index.add(code_snippet_emddings)
    print(index.ntotal)

    k = 2
    xq = model.encode(["jerry loves candy"])

    D, I = index.search(xq, k)  # search
    print(I)
    print(D)

This code returns

[[0 1]]
[[1.3480902 1.6274161]]

But I can't find which sentence xq is matching with; I get only the indices and matching scores.

How can I find the top-N matching strings from the corpus?
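
For reference, faiss returns row positions in the order the vectors were added to the index, so keeping the corpus list around lets the indices be mapped back to sentences (a sketch based on the code above):

# I holds row positions into corpus, D the corresponding L2 distances
for rank, (idx, dist) in enumerate(zip(I[0], D[0]), start=1):
    print(f"{rank}. {corpus[idx]!r} (distance {dist:.4f})")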



from How to find the actual sentence from sentence transformer?

How to avoid setting manual UIView width and height on native side for Fabric component when using CAGradientLayer?

I am trying to create a Fabric module for iOS using React Native's new architecture.

In my Objective-C++ file, while setting up a UIView, I have to assign a CGRect at init time of the UIView. If I don't do this on the native side and just set the size on the JS side, the view is not visible.

Following does not work

Objective-C++

_view = [[UIView alloc] init];
_gradient = [CAGradientLayer layer];
_gradient.frame = _view.bounds;
_gradient.startPoint = CGPointZero;
_gradient.endPoint = CGPointMake(1, 1);
_gradient.colors = @[(id)[UIColor redColor].CGColor, (id)[UIColor blackColor].CGColor, (id)[UIColor blueColor].CGColor];
[_view.layer insertSublayer:_gradient atIndex:0];
self.contentView = _view;

JS

 <YourEdgeLinearGradientView
          style=
/>

Following works

Objective-C++ 

_view = [[UIView alloc] initWithFrame:CGRectMake(0,0,400,400)];
...
...

JS

 <YourEdgeLinearGradientView
          style=
/>

but the issue is that it occupies the width and height set on the native side and ignores the JS side. I want to use

_view = [[UIView alloc] init];

without setting the width and height on the native side, setting them instead on the JS side.

To add to this: if I use a UIButton or a UILabel instead of a CAGradientLayer and apply constraints to the UIView, then I don't have to set CGRectMake on the UIView.

Also, I don't want to pass the width and height from the specs file; whatever I set from the style property should be applied. This works fine for the Android component but not for iOS.
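
For what it's worth (an assumption on my part, not something from the repo): CALayers don't participate in Auto Layout or in the Yoga-driven sizing that Fabric applies, so a zero-sized init leaves the gradient with zero bounds even after JS lays the view out. Updating the layer's frame whenever the hosting view is laid out sidesteps the hard-coded size:

- (void)layoutSubviews
{
  [super layoutSubviews];
  // keep the gradient in sync with the bounds Yoga computed from the JS style
  _gradient.frame = _view.bounds;
}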

Check this repo https://github.com/PritishSawant/-ReactNativeMultipleTurboModulesAndFabricExample

Go to the iOS folder => RNYourEdgeLinearGradientView.mm , https://github.com/PritishSawant/-ReactNativeMultipleTurboModulesAndFabricExample/blob/main/ios/RNYourEdgeLinearGradientView.mm

I am currently using

_view = [[UIView alloc] init];

which causes the issue where the view is not visible on the JS side.

If I use like below

_view = [[UIView alloc] initWithFrame:CGRectMake(0,0,200,200)];

then it works, but I don't want to set the width and height on the iOS side. I want to use _view = [[UIView alloc] init]; and have the view occupy whatever width and height I define in styles on the React Native side.



from How to avoid setting manual UIView width and height on native side for Fabric component when using CAGradientLayer?

Tuesday, 26 July 2022

Auto focus EditText that sits inside RecyclerView adapter

I've got a Fragment, which has a RecyclerView. This RecyclerView has a header adapter, which contains an EditText.

When the Fragment comes into view, I want the EditText to auto focus (i.e. show the keyboard automatically).

How would I do this?

The Fragment:

public class MyFragment extends Fragment {

    private LinearLayoutManager layoutManager;

    private FeedRecyclerView recyclerView;

    private PostAdapter postAdapter;

    private ConcatAdapter concatAdapter;

    private HeaderAdapter headerAdapter;

    private List<Post> postData = new ArrayList<>();

    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup parent, Bundle savedInstanceState) {
        return inflater.inflate(R.layout.my_fragment, parent, false);
    }

    @Override
    public void onViewCreated(@NonNull View view, Bundle savedInstanceState) {
        recyclerView = view.findViewById(R.id.recycler_view);

        layoutManager = new FeedLinearLayoutManager(getContext());

        recyclerView.setLayoutManager(layoutManager);
        recyclerView.setItemAnimator(new DefaultItemAnimator());

        // This is the adapter that contains the EditText
        headerAdapter = new HeaderAdapter(getContext());

        postAdapter = new PostAdapter(getContext(), postData, layoutManager);

        concatAdapter = new ConcatAdapter();
        recyclerView.setAdapter(concatAdapter);

        concatAdapter.addAdapter(headerAdapter);
    }
}

The adapter class, HeaderAdapter:

public class HeaderAdapter extends RecyclerView.Adapter<HeaderAdapter.ViewHolder> {

    private Context context;

    public HeaderAdapter(Context context) {
        this.context = context;
    }

    @NonNull
    @Override
    public ViewHolder onCreateViewHolder(ViewGroup parent, int viewType) {
        return new ViewHolder(LayoutInflater.from(parent.getContext()).inflate(R.layout.my_header, parent, false));
    }

    @Override
    public void onBindViewHolder(@NonNull ViewHolder holder, int position) {
        //
    }

    @Override
    public long getItemId(int position) {
        return 1;
    }

    @Override
    public int getItemCount() {
        return 1;
    }

    public class ViewHolder extends RecyclerView.ViewHolder {
        public EditText myEditText;

        public ViewHolder(View v) {
            super(v);

            myEditText = v.findViewById(R.id.my_edit_text);
        }
    }

}

As you can see, my RecyclerView's adapter, HeaderAdapter, contains the EditText. When the Fragment is shown, I want to auto-focus this EditText.

How would I do this?
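
For reference, a sketch of one approach (requesting focus when the header binds and asking InputMethodManager for the keyboard; posting via the view sidesteps it not being attached yet):

@Override
public void onBindViewHolder(@NonNull ViewHolder holder, int position) {
    holder.myEditText.post(() -> {
        holder.myEditText.requestFocus();
        InputMethodManager imm = (InputMethodManager)
                context.getSystemService(Context.INPUT_METHOD_SERVICE);
        imm.showSoftInput(holder.myEditText, InputMethodManager.SHOW_IMPLICIT);
    });
}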

Thank you



from Auto focus EditText that sits inside RecyclerView adapter

Xamarin Android Library Binding - Class does not contain a definition

Xamarin Play Core (the package for reviews & tasks needed to launch in-app reviews) split into individual packages for v2.0.0, so I'm trying to create a Xamarin Android bindings library for review and tasks. I successfully got tasks working, but I am getting this error for the review NuGet when it should just work according to their docs. It might be a simple fix; I thought I just had to add these lines in the metadata file, but that didn't fix it:

<attr path="/api/package[@name='com.google.android.play.core.review']/class[@name='IReviewManager']" name="extends">Java.Lang.Object</attr> 
<attr path="/api/package[@name='com.google.android.play.core.review.testing']/class[@name='FakeReviewManager']" name="extends">Java.Lang.Object</attr>

Here's the error I get:

...PlayCoreUpdateTest/PlayCoreUpdateTest.Android/InAppReviewService.cs(35,35): Error CS1061: 'FakeReviewManager' does not contain a definition for 'LaunchReviewFlow' and no accessible extension method 'LaunchReviewFlow' accepting a first argument of type 'FakeReviewManager' could be found (are you missing a using directive or an assembly reference?) (CS1061) (PlayCoreUpdateTest.Android)

[Screenshot of the error described above]

I know the magic happens in the generated api.xml file, so here's a code dump of it. In there I do see the FakeReviewManager, but I don't see the RequestReviewFlow method, which should be there.

It's still a work in progress, but those are the only remaining issues; here's the GitHub code. I watched Jonathan Dick's video on creating Xamarin library bindings and tried the Microsoft FAQ options here too.

I know my functions in InAppReviewService.cs are correct because that's what the official docs tell us to use, and it's what we were using before for v1.10.

Update: I did notice there's an api.xml.class-parse file beside api.xml that does contain the missing method LaunchReviewFlow inside the FakeReviewManager class. I'm trying to understand why it didn't show up in api.xml; here's a dump for that one. I also noticed that the api.xml.class-parse file contains these lines:

return="com.google.android.play.core.tasks.Task&lt;java.lang.Void&gt;"
jni-return="Lcom/google/android/play/core/tasks/Task&lt;Ljava/lang/Void;&gt;;"
...
jni-signature="(Landroid/app/Activity;Lcom/google/android/play/core/review/ReviewInfo;)Lcom/google/android/play/core/tasks/Task;"

which shows that the v2.0 AAB file points to a Play Core tasks library even though it recommends pointing to the GMS version of Tasks.

[Screenshot described above]

To counter that, I tried adding variations of these lines, but none of them helped:

  <add-node path="/api/package[@name='com.google.android.play.core.review.testing']/class[@name='FakeReviewManager']">
    <method abstract="false" deprecated="not deprecated" final="false" name="launchReviewFlow" jni-signature="(Landroid/app/Activity;Lcom/google/android/play/core/review/ReviewInfo;)Lcom/google/android/gms/tasks/Task;" bridge="false" native="false" return="com.google.android.gms.tasks.Task&lt;java.lang.Void&gt;" jni-return="Lcom/google/android/gms/tasks/Task&lt;Ljava/lang/Void;&gt;;" static="false" synchronized="false" synthetic="false" visibility="public" return-not-null="true">
        <parameter name="p0" type="android.app.Activity" jni-type="Landroid/app/Activity;" not-null="true" />
        <parameter name="reviewInfo" type="com.google.android.play.core.review.ReviewInfo" jni-type="Lcom/google/android/play/core/review/ReviewInfo;" not-null="true" />
    </method>
    <method abstract="false" deprecated="not deprecated" final="false" name="requestReviewFlow" jni-signature="()Lcom/google/android/gms/tasks/Task;" bridge="false" native="false" return="com.google.android.gms.tasks.Task&lt;com.google.android.play.core.review.ReviewInfo&gt;" jni-return="Lcom/google/android/gms/tasks/Task&lt;Lcom/google/android/play/core/review/ReviewInfo;&gt;;" static="false" synchronized="false" synthetic="false" visibility="public" return-not-null="true" />
  </add-node>


from Xamarin Android Library Binding - Class does not contain a definition

Storybook webpack absolute import

In our app we are using absolute paths to import modules. We have a react folder in our resolve root:

[image: folder structure]

We are using webpack to build and develop the app, and it works fine with the following options:

  resolve: {
    modules: [
      'node_modules',
      path.resolve('src')
    ]
  },

I'm working on integrating Storybook and found that it can't find any module from this react folder.

ERROR in ./stories/index.stories.js
Module not found: Error: Can't resolve 'react/components/Button' in 'project_name/stories'
 @ ./stories/index.stories.js

for the next line: import Button from 'react/components/Button';

As a note: I added resolve.modules to the .storybook webpack config as well, and if I try to import anything else, for example from services/xxx, it works.
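
For comparison, a sketch of mirroring that resolve setting in Storybook's own config (using the webpackFinal hook in .storybook/main.js; adjust to the Storybook version in use):

const path = require('path');

module.exports = {
  webpackFinal: async (config) => {
    // mirror the app's resolve.modules so 'react/components/...' can also
    // fall through to src in Storybook builds
    config.resolve.modules = [
      ...(config.resolve.modules || ['node_modules']),
      path.resolve(__dirname, '../src'),
    ];
    return config;
  },
};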



from Storybook webpack absolute import

Implement concurrent data fetch from Azure Redis cache in python

I am currently working on building a low-latency model-inference API using FastAPI. We are using Azure Redis Cache (Standard tier) for fetching features and an ONNX model for fast inference. I am using aioredis to implement concurrent reads from Redis. I make two feature requests to Redis: one for the userID, which fetches a single string, and one for the products, which fetches a list of strings that I later convert to a list of floats using JSON parsing.

For one request, overall it takes 70-80 ms, but for more than 10 concurrent requests Redis takes more than 400 ms to fetch results, which is huge and can increase linearly with more concurrent users under load testing.

The code for getting data from Redis is:

import numpy as np
import json
from ..Helpers.helper import curt_giver, milsec_calc
import aioredis
r = aioredis.from_url("redis://user:host",decode_responses=True)

async def get_user(user:list) -> str:
    user_data = await r.get(user)
    return user_data
async def get_products(product:list)-> list:
    product_data = await r.mget(product)
    return product_data

async def get_features(inputs: dict) -> list:
    
    st = curt_giver()
    user_data = await get_user(inputs['userId'])
    online_user_data = [json.loads(json.loads(user_data))]
    end = curt_giver()
    print("Time to get user features: ", milsec_calc(st,end))
    
    st = curt_giver()
    product_data = await get_products(inputs['productIds'])
    online_product_data = []
    for i in product_data:
        online_product_data.append(json.loads(json.loads(i)))
    end = curt_giver()
    print("Time to get product features: ", milsec_calc(st,end))

    user_outputs = np.asarray(online_user_data,dtype=object)
    product_outputs = np.asarray(online_product_data,dtype=object)
    output = np.concatenate([np.concatenate([user_outputs]*product_outputs.shape[0])
    ,product_outputs],axis = 1)
    return output.tolist()

curt_giver() returns the current time in milliseconds. The code from the main file is:

    from fastapi import FastAPI
    from v1.redis_conn.get_features import get_features
    
    from model_scoring.score_onnx import score_features
    from v1.post_processing.sort_results import sort_results
    
    from v1.api_models.input_models import Ranking_Input
    from v1.api_models.output_models import Ranking_Output
    from v1.Helpers.helper import curt_giver, milsec_calc
    import numpy as np
    
    
    app = FastAPI()
    
    # Sending user and product ids through body, 
    # Hence a POST request is well suited for this, GET has unexpected behaviour
    @app.post("/predict", response_model = Ranking_Output)
    async def rank_products(inp_req: Ranking_Input):
      beg = curt_giver()
      reqids = inp_req.dict()
      st = curt_giver()
      features = await get_features(reqids)
      end = curt_giver()
    
      print("Total Redis duration ( user + products fetch): ", milsec_calc(st,end))
    
      data = np.asarray(features,dtype=np.float32,order=None)
      
      st = curt_giver()
      scores = score_features(data)
      end = curt_giver()
    
      print("ONNX model duration: ", milsec_calc(st,end))
    
      Ranking_results = sort_results(scores, list(reqids["productIds"]))
      end = curt_giver()
      print("Total time for API: ",milsec_calc(beg,end))
      resp_json = {"requestId": inp_req.requestId,
      "ranking": Ranking_results,
      "zipCode": inp_req.zipCode}
    
      return resp_json    

From the timings I can see that one request takes very little time, but with concurrent users the time for getting product data keeps increasing linearly. Timings to fetch one request (all values are in milliseconds):

Time to get user features:  1
Time to get product features:  47
Total Redis duration ( user + products fetch):  53
ONNX model duration:  2
Total time for API:  60

Timings for more than 10 concurrent requests:

Time to get user features:  151
Time to get user features:  150
Time to get user features:  151
Time to get user features:  52
Time to get user features:  51
Time to get product features:  187
Total Redis duration ( user + products fetch):  433
ONNX model duration:  2
Total time for API:  440
INFO:     127.0.0.1:60646 - "POST /predict HTTP/1.0" 200 OK
Time to get product features:  239
Total Redis duration ( user + products fetch):  488
ONNX model duration:  2
Total time for API:  495
INFO:     127.0.0.1:60644 - "POST /predict HTTP/1.0" 200 OK
Time to get product features:  142
Total Redis duration ( user + products fetch):  297
ONNX model duration:  2
Total time for API:  303
INFO:     127.0.0.1:60648 - "POST /predict HTTP/1.0" 200 OK
Time to get product features:  188
Total Redis duration ( user + products fetch):  342
ONNX model duration:  2
Total time for API:  348

It keeps increasing, even hitting 900 ms+ to fetch both datasets from Redis. Is there any way I can efficiently fetch data concurrently with low latency, so that latency doesn't degrade as concurrent requests increase (e.g., to 500)? My target is under 300 ms for 300 concurrent requests every second.
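
For what it's worth, one thing visible in the code above: the user and product fetches are awaited one after the other, so their round-trips add up; issuing them concurrently overlaps the latency (a sketch):

import asyncio

async def get_features_concurrent(inputs: dict):
    # fire both Redis reads at once instead of sequentially
    user_data, product_data = await asyncio.gather(
        r.get(inputs['userId']),
        r.mget(inputs['productIds']),
    )
    return user_data, product_data

That alone won't explain linear growth under load, which more often points at connection-pool limits or server-side contention, but it removes one serial round trip per request.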

I am stuck at this point; I will be very grateful for any help.



from Implement concurrent data fetch from Azure Redis cache in python

How can I get and set the "Super Fast Charging" setting programmatically?

Just as an example, I can get the Display Timeout setting like this:

int timeout = Settings.System.getInt(getContentResolver(), Settings.System.SCREEN_OFF_TIMEOUT);

I can set the Display Timeout setting like this:

Settings.System.putInt(getContentResolver(), Settings.System.SCREEN_OFF_TIMEOUT, 10000);

How can I programmatically get and set the Fast Charging and the Super Fast Charging settings?



from How can I get and set the "Super Fast Charging" setting programmatically?

react component displaying in first render but not in the following re-render

I have a React component that consists of an alert to which I am passing two props (however, these two props rarely change within this component):

const AlertPopUp = ({ severity, errors }) => {
  const [show, setShow] = useState(true)
  console.log('show value of show state: ', show)

  useEffect(() => {
    console.log('gets here')
    const timeId = setTimeout(() => {
      // After 5 seconds set the show value to false
      setShow(false)
    }, 5000)

    return () => {
      clearTimeout(timeId)
    }
  });

  console.log('errors ', errors)

  if (!errors) return null
  if (show)
    return (
      <>
        <Alert severity={severity}>
          {errors}
        </Alert>
      </>
    )
  return null
}

In the first render, the Alert shows up and after the expected 5 seconds the component disappears.

In the re-render, the Alert does not show up anymore, and from my debugging I assume it has to do with the line console.log('show value of show state: ', show), which logs false in the re-render.

If I do setShow(true), I run into an infinite loop of re-renders.

If I use a useRef to avoid the useState infinite loop, the component doesn't re-render, and therefore the Alert never displays.

If I try to set a key on the component (key={useId()}) or pass a counter, the state is still set to false whenever the parent component re-renders; it looks like the component isn't destroyed and created again.

Please forgive me if any of my assumptions are wrong, as I am far from being a React expert.

Could anyone please help me find a solution so that the Alert displays on every render of the alert component and disappears after 5 seconds?
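
For illustration, one way to get that behavior is to re-arm both the visibility and the timer whenever the errors prop changes, by giving the effect a dependency array (a sketch):

useEffect(() => {
  setShow(true) // re-show the alert for the new errors value
  const timeId = setTimeout(() => setShow(false), 5000)
  return () => clearTimeout(timeId)
}, [errors])

Because the effect only re-runs when errors changes, the setShow(true) inside it doesn't cause the infinite re-render loop.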



from react component displaying in first render but not in the following re-render

Monday, 25 July 2022

React native app gets crash logs with "SIGSEGV: Segmentation violation (invalid memory reference) "

I've just integrated Bugsnag in my React Native app, and suddenly I started receiving Segmentation violation (invalid memory reference) reports for Android phones only. The logs don't show where the error happens; I can see only this line: reanimated::NativeProxy::installJSIBindings()

The app doesn't crash in development (on simulators) or during testing, and the breadcrumbs are not useful either.

Has anyone received similar logs and how did you pinpoint what the issue is? Any advice is greatly appreciated as I'm going in circles trying to figure this out.



from React native app gets crash logs with "SIGSEGV: Segmentation violation (invalid memory reference) "

How to take camera picture from google ml kit face detector?

I'm trying to take a camera picture using https://github.com/fernandoptrr/flutter-camera-practice, which works fine with XFile picture = await _cameraController.takePicture();

Then I tried the Google ML Kit example, https://github.com/bharat-biradar/Google-Ml-Kit-plugin. I tried the face detector and taking a picture with takePicture(), but it gives "Error occured while taking picture cannot use a surface that wasn't configured".

Can someone give me an example of taking a camera picture with Google ML Kit?



from How to take camera picture from google ml kit face detector?

Styled components workflow in Mui - how to style

I am working on a new React project using Material UI with styled components, and I'm having some trouble with styling and organization. I've read up on styled components and best practices, but I'm still unable to find an approach to styling that scales to big projects.

I've seen many suggestions:

  • to create a styled component each time some CSS needs to be added, storing all these components in a styled.ts file. This approach moves all styling and layout concerns into a separate file, but it is a lot of work to create styled components for everything
  • to make a wrapper styled component around the main React component and use class names, kind of like importing a CSS file regularly and working with classes
  • to make inline CSS changes if some padding or similar is needed, and only make styled components for reusable/lengthier CSS blocks. This doesn't really separate styling and is not as clean, since we're leaving inline CSS, but it is easier to write
  • to treat styled components as regular components, keeping them in separate files and everything. A component is a component, no longer needing to distinguish between styling and componentization

Not saying any of these are bad suggestions, but they're quite conflicting and I don't have experience with this. I'd just like to know a scalable approach.

tldr: Is there a good and clean workflow for working with styled components for big projects?



from Styled components workflow in Mui - how to style

App process does not die when stopping a bound foreground service

I have a bound foreground service that is supposed to live even when the app is removed from recents, but with the option to also stop the service when the app gets removed. That optional path is what I am having trouble with.

If the service is not set as foreground, stopping the service causes the process to die as well; this is the desired effect.

However, if the service is set as foreground, the app process does not die.

What I tried:

  1. Stopping from onTaskRemoved inside the service:
    override fun onTaskRemoved(rootIntent: Intent) {
        stopForeground(true)
        stopSelf()
    }
  2. Setting stopWithTask to true in the manifest:
    <service
        android:name=".TestService" android:stopWithTask="true">
    </service>
  3. Stopping via activity:
    override fun onStop() {
        super.onStop()
        unbindService(connection)
        mBound = false
    }

    override fun onDestroy() {
        Intent(applicationContext, TestService::class.java).also { intent ->
            stopService(intent)
        }
        super.onDestroy()
    }
  4. Same as 3, but I also stop the service's foreground state in onStop. This works, but it's not ideal, as the service stops being foreground every time the app is put into recents.
    override fun onStop() {
        super.onStop()
        mService.stopForeground()
        unbindService(connection)
        mBound = false
    }

    override fun onDestroy() {
        Intent(applicationContext, TestService::class.java).also { intent ->
            stopService(intent)
        }
        super.onDestroy()
    }
    // in the service
    fun stopForeground() {
        stopForeground(true)
    }

The questions I have are the following:

  1. Is the app process not stopping actually the expected behavior?
  2. If yes, why does the process never die? I would be fine with the OS eventually killing the service, but that never happens (from what I have noticed).
  3. Why does stopping the foreground state in onStop lead to the process dying properly, while stopping it in onTaskRemoved keeps the process alive?


from App process does not die when stopping a bound foreground service

sending string from C# to client and converting into Uint8Array type byte array and then into blob to open excel file. Corgi Involved

So here in the C# code I am sending a corgi to the client, which has Corgibabies:

corgi.Corgibabies = System.Text.Encoding.UTF8.GetString(Corgibabies);

return corgi;

After that, on the client, I want to open the corgi babies in an Excel sheet, but I think the conversion is wrong somewhere, because the Excel sheet doesn't open correctly.

var fileName = 'CorgiBabies.xlsx';

dataAccessService.get('corgi')
    .then(function(response) {
        let utf8Encode = new TextEncoder();
        var strBytes = utf8Encode.encode(response.corgiBabies);

        var a = document.createElement("a");
        document.body.appendChild(a);
        a.style = "display: none";

        var file = new Blob([strBytes], {type: 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet'});
        var fileURL = window.URL.createObjectURL(file);
        a.href = fileURL;
        a.download = fileName;
        a.click();
    })

Excel then gives me an error when opening the file (screenshot of the error dialog not reproduced here).



from sending string from C# to client and converting into Uint8Array type byte array and then into blob to open excel file. Corgi Involved

Fit data with a lognormal function via Maximum Likelihood estimators

Could someone help me fit the data collapse_fractions with a lognormal function whose median and standard deviation are derived via the maximum likelihood method?

I tried scipy.stats.lognorm.fit(data), but I did not obtain the values I had computed in Excel. The Excel file can be downloaded here: https://stacks.stanford.edu/file/druid:sw589ts9300/p_collapse_from_msa.xlsx

Also, any references are really welcome.

import numpy as np

intensity_measure_vector = np.array([[0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9, 1]])    
no_analyses = 40    
no_collapses = np.array([[0, 0, 0, 4, 6, 13, 12, 16]])    
collapse_fractions = np.array(no_collapses/no_analyses)

print(collapse_fractions)
# array([[0.   , 0.   , 0.   , 0.1  , 0.15 , 0.325, 0.3  , 0.4  ]])

collapse_fractions.shape
# (1, 8)

import matplotlib.pyplot as plt
plt.scatter(intensity_measure_vector, collapse_fractions)
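
For context, fragility data like this (collapse counts out of a fixed number of analyses per intensity level) is usually fitted not with lognorm.fit on the fractions, but by maximizing a binomial likelihood in which the collapse probability at each intensity level is a lognormal CDF. A minimal sketch of that idea (my own illustration, not verified against the Excel file):

import numpy as np
from scipy import optimize, stats

im = np.array([0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9, 1.0])   # intensity measures
n_collapse = np.array([0, 0, 0, 4, 6, 13, 12, 16])         # observed collapses
n_total = 40                                               # analyses per level

def neg_log_likelihood(params):
    theta, beta = params  # lognormal median and log-standard deviation
    p = stats.norm.cdf(np.log(im / theta) / beta)  # fragility curve at each IM
    # Binomial likelihood of observing n_collapse collapses out of n_total
    return -np.sum(stats.binom.logpmf(n_collapse, n_total, p))

result = optimize.minimize(neg_log_likelihood, x0=[0.8, 0.4],
                           bounds=[(1e-6, None), (1e-6, None)])
theta_hat, beta_hat = result.x
print(f"median = {theta_hat:.3f}, dispersion = {beta_hat:.3f}")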


from Fit data with a lognormal function via Maximum Likelihood estimators

setuptools post-install: get all packages installed to check versions

In Post-install script with Python setuptools, this answer shows how to make a post-install command.

I want to make a post-install command that checks for a version matchup between sub-packages from a mono-repo.

How can I get a list of packages being installed during a post-install command?


Current Attempt

from pkg_resources import working_set
from setuptools import setup
from setuptools.command.install import install


class PostInstallCommand(install):
    REPO_BASE_NAME = "foo"

    def run(self) -> None:
        install.run(self)

        one_subpkg_name = f"{self.REPO_BASE_NAME}-one"
        another_subpkg_name = f"{self.REPO_BASE_NAME}-another"
        test_subpkg_name = f"{self.REPO_BASE_NAME}-test"
        all_versions: list[str] = [
            working_set.by_key[one_subpkg_name].version,
            working_set.by_key[another_subpkg_name].version,
            working_set.by_key[test_subpkg_name].version,
        ]
        if len(set(all_versions)) != 1:
            raise NotImplementedError(
                f"test package {test_subpkg_name}'s installed versions "
                f"{all_versions} have a mismatch."
            )


setup(
    ...,
    cmdclass={"install": PostInstallCommand}
)

This solution using pkg_resources.working_set errors out; it seems working_set doesn't work at install time:

        ...
        File "/private/var/folders/41/wlbjqvm94zn1_vbrg9fqff8m0000gn/T/pip-build-env-1iljhvso/overlay/lib/python3.10/site-packages/setuptools/dist.py", line 1217, in run_command
          super().run_command(command)
        File "/private/var/folders/41/wlbjqvm94zn1_vbrg9fqff8m0000gn/T/pip-build-env-1iljhvso/overlay/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 987, in run_command
          cmd_obj.run()
        File "<string>", line 39, in run
      KeyError: 'foo-one'
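
As a point of comparison, here is a sketch of the same check done with importlib.metadata, which queries the environment at call time rather than relying on working_set's import-time snapshot. One caveat, and an assumption on my part: under pip's PEP 517 build isolation the target environment may still not be visible, so missing distributions are skipped instead of raising KeyError:

from importlib.metadata import PackageNotFoundError, version


def installed_versions(names: list[str]) -> dict[str, str]:
    """Return versions for the distributions that are actually visible."""
    found = {}
    for name in names:
        try:
            found[name] = version(name)
        except PackageNotFoundError:
            # Not installed, or not visible from an isolated build env.
            pass
    return found


versions = installed_versions(["foo-one", "foo-another", "foo-test"])
if versions and len(set(versions.values())) != 1:
    raise RuntimeError(f"sub-package version mismatch: {versions}")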


from setuptools post-install: get all packages installed to check versions

Sunday, 24 July 2022

When running headless selenium in cron, got error "Pyperclip could not find a copy/paste mechanism for your system"

I have implemented a Selenium script in Python to upload some pictures and content to Facebook, which I named FBUpload.py.

When I launch it this way, it works perfectly (in headless mode):

Xvfb :10 -ac &
python3 /home/someuser/scripts/FBUpload.py

The problem is that when I configure a cron job to launch the same script, like this:

00 * * * * Xvfb :10 -ac &
01 * * * * python3 /home/someuser/scripts/FBUpload.py
45 * * * * kill -9 $(ps -auxw |grep Xvf|head -1| awk '{print $2}')

Then it fails with the following error:

Pyperclip could not find a copy/paste mechanism for your system

This is my setup: Ubuntu 20.04.4 LTS | Python3 | pyperclip 1.7.0

These are the Copy & paste mechanisms that I already installed:

PyQt5 5.15.6
PyQt5-Qt5 5.15.2
PyQt5-sip 12.10.1
QtPy 2.1.0
xclip 0.13-1 (in /usr/bin because it was installed via apt)
xsel 1.2.0+git9bfc13d.20180109-3 (in /usr/bin because it was installed via apt)

(I couldn't install PyQt4 or gtk as described in this post: pyperclip module raising an error message, so I installed QtPy following the suggested solution, but the problem persists.)

I tried the fixes from posts with similar issues, but none of them works for me. I am wondering if the issue has to do with users (because when I run the script with sudo, the root user cannot find the libraries installed by the non-root user).

I also found this other question which seems to be similar (but instead of cron, the problem is systemd): Ubuntu 16.04 - Python 3 - Pyperclip in terminal and via systemd
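
One thing worth ruling out (an assumption on my part, since cron runs jobs with an almost empty environment): the cron job has no DISPLAY variable set, so xclip/xsel cannot reach the running Xvfb server even though the script works from an interactive shell. A minimal sketch of forcing it from inside the script:

import os

# Cron provides almost no environment variables, so point at the Xvfb
# display explicitly before pyperclip probes for a copy/paste mechanism.
# ":10" must match the display number Xvfb was started with.
os.environ.setdefault("DISPLAY", ":10")

import pyperclip  # imported after DISPLAY is set, on purpose

pyperclip.set_clipboard("xclip")  # optional: pin the mechanism instead of auto-detecting
pyperclip.copy("smoke test")
print(pyperclip.paste())

Equivalently, the crontab entry itself could export it, e.g. 01 * * * * DISPLAY=:10 python3 /home/someuser/scripts/FBUpload.py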



from When running headless selenium in cron, got error "Pyperclip could not find a copy/paste mechanism for your system"

Removing SEP token in Bert for text classification

Given a sentiment classification dataset, I want to fine-tune Bert.

As you know, BERT was created to predict the next sentence given the current sentence. Thus, to make the network aware of this, they inserted a [CLS] token at the beginning of the first sentence, then added a [SEP] token to separate the first from the second sentence, and finally another [SEP] at the end of the second sentence (it's not clear to me why they append another token at the end).

Anyway, for text classification, what I noticed in some of the examples online (see BERT in Keras with Tensorflow hub) is that they add a [CLS] token, then the sentence, and another [SEP] token at the end.

Whereas in other research works (e.g. Enriching Pre-trained Language Model with Entity Information for Relation Classification) they remove the last [SEP] token.

Why is it, or isn't it, beneficial to add the [SEP] token at the end of the input text when my task uses only a single sentence?
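
As a quick way to see the conventional single-sentence format, the Hugging Face tokenizer (assuming the transformers library is an acceptable reference implementation) appends the trailing [SEP] automatically:

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoded = tokenizer("the movie was great")
# The single-sentence encoding is [CLS] ... [SEP], with [SEP] added by default
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
# ['[CLS]', 'the', 'movie', 'was', 'great', '[SEP]']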



from Removing SEP token in Bert for text classification

How to use the new ComponentActivity with ViewBinding and the other old AppCompatActivity components

According to this question, I tried to update the deprecated menu code (setHasOptionsMenu, onCreateOptionsMenu and onOptionsItemSelected) in my fragments and across the app, which requires replacing AppCompatActivity with ComponentActivity(R.layout.activity_example). After doing this I ran into several problems. First, I'm confused about how to use ViewBinding with it, since I'm supposed to remove setContentView(binding.root) from the activity. Second, the method setSupportActionBar(binding.appBarMain.toolbar) is not found, and I can't use navigation components like supportFragmentManager and setupActionBarWithNavController. Third, I couldn't declare val menuHost: MenuHost = requireActivity() in onCreateView in the fragment; I get "Required: MenuHost but Found: FragmentActivity".


menu updates

Here's my MainActivity code before the edits:

@AndroidEntryPoint
class MainActivity : AppCompatActivity() {


    private lateinit var appBarConfiguration: AppBarConfiguration
    private lateinit var binding: ActivityMainBinding
    private lateinit var navController: NavController
    private lateinit var postViewModel: PostViewModel
    private lateinit var navGraph: NavGraph


    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)


        binding = ActivityMainBinding.inflate(layoutInflater)
        setContentView(binding.root)

        postViewModel = ViewModelProvider(this)[PostViewModel::class.java]
        postViewModel.getCurrentDestination()

        setSupportActionBar(binding.appBarMain.toolbar)

        val drawerLayout: DrawerLayout = binding.drawerLayout
        

        val navHostFragment =
            supportFragmentManager.findFragmentById(R.id.nav_host_fragment) as NavHostFragment?

        if (navHostFragment != null) {
            navController = navHostFragment.navController
        }
        navGraph = navController.navInflater.inflate(R.navigation.mobile_navigation)


        // Passing each menu ID as a set of Ids because each
        // menu should be considered as top level destinations.
        appBarConfiguration = AppBarConfiguration(
            setOf(
                R.id.nav_home, R.id.nav_accessory,
                R.id.nav_arcade, R.id.nav_fashion,
                R.id.nav_food, R.id.nav_heath,
                R.id.nav_lifestyle, R.id.nav_sports, R.id.nav_favorites, R.id.about
            ), drawerLayout
        )
//        setupActionBarWithNavController(navController, appBarConfiguration)
//        navView.setupWithNavController(navController)

        setupActionBarWithNavController(this, navController, appBarConfiguration)
        setupWithNavController(binding.navView, navController)



        
//        determineAdvertisingInfo()
    }


    override fun onSupportNavigateUp(): Boolean {
        return navController.navigateUp(appBarConfiguration) || super.onSupportNavigateUp()
    }

}

And this is the implementation of the menus in the fragments:

  override fun onCreateView(
        inflater: LayoutInflater,
        container: ViewGroup?,
        savedInstanceState: Bundle?
    ): View {

        _binding = FragmentHomeBinding.inflate(inflater, container, false)


         setHasOptionsMenu(true)
         return binding.root
    }

....................................................................................

  override fun onCreateOptionsMenu(menu: Menu, inflater: MenuInflater) {
        inflater.inflate(R.menu.main, menu)
        super.onCreateOptionsMenu(menu, inflater)
        val searchManager =
            requireContext().getSystemService(Context.SEARCH_SERVICE) as SearchManager
        val searchView = menu.findItem(R.id.app_bar_search).actionView as SearchView
        searchView.setSearchableInfo(searchManager.getSearchableInfo(requireActivity().componentName))
        searchView.queryHint = resources.getString(R.string.searchForPosts)

        searchView.setOnQueryTextListener(object : SearchView.OnQueryTextListener {
            override fun onQueryTextSubmit(keyword: String): Boolean {
                if (keyword.isEmpty()) {
                    Snackbar.make(
                        requireView(),
                        "please enter keyword to search",
                        Snackbar.LENGTH_SHORT
                    ).show()
                }
//                itemArrayList.clear()
                if (Utils.hasInternetConnection(requireContext())) {
                    postViewModel.getItemsBySearch(keyword)
                    postViewModel.searchedPostsResponse.observe(viewLifecycleOwner) { response ->

                        when (response) {
                            is NetworkResult.Success -> {
                                hideShimmerEffect()
                                itemArrayList.clear()
                                binding.progressBar.visibility = View.GONE
                                response.data?.let {
                                    itemArrayList.addAll(it.items)
                                }
                                adapter.notifyDataSetChanged()

                            }

                            is NetworkResult.Error -> {
                                hideShimmerEffect()
                                //                    loadDataFromCache()
                                Toast.makeText(
                                    requireContext(),
                                    response.toString(),
                                    Toast.LENGTH_LONG
                                ).show()

                            }

                            is NetworkResult.Loading -> {
                                if (postViewModel.recyclerViewLayout.value == "titleLayout" ||
                                    postViewModel.recyclerViewLayout.value == "gridLayout"
                                ) {
                                    hideShimmerEffect()
                                } else {
                                    showShimmerEffect()
                                }
                            }
                        }
                    }
                } else {
                    postViewModel.getItemsBySearchInDB(keyword)
                    postViewModel.postsBySearchInDB.observe(viewLifecycleOwner) { items ->
                        if (items.isNotEmpty()) {
                            hideShimmerEffect()
                            binding.progressBar.visibility = View.GONE
                            itemArrayList.clear()
                            itemArrayList.addAll(items)
                            adapter.notifyDataSetChanged()
                        }
                    }
                }
                return false

            }


            override fun onQueryTextChange(newText: String): Boolean {
                return false
            }
        })
        searchView.setOnCloseListener {
            if (Utils.hasInternetConnection(requireContext())) {
                Log.d(TAG, "setOnCloseListener: called")
                itemArrayList.clear()
                requestApiData()
            } else {
                noInternetConnectionLayout()
            }
            false
        }


        postViewModel.searchError.observe(viewLifecycleOwner) { searchError ->
            if (searchError) {
                Toast.makeText(
                    requireContext(),
                    "There's no posts with this keyword", Toast.LENGTH_LONG
                ).show()
            }
        }

    }



   override fun onOptionsItemSelected(item: MenuItem): Boolean {
        if (item.itemId == R.id.change_layout) {
            changeAndSaveLayout()
            return true
        }
        return super.onOptionsItemSelected(item)
    }


build.gradle dependencies

dependencies {

    implementation 'androidx.core:core-ktx:1.6.0'
    implementation 'androidx.appcompat:appcompat:1.4.2'

    implementation ('com.google.android.material:material:1.6.1') {
        exclude(group: 'androidx.recyclerview',  module: 'recyclerview')
        exclude(group: 'androidx.recyclerview',  module: 'recyclerview-selection')
    }
    implementation "androidx.recyclerview:recyclerview:1.2.1"
    // For control over item selection of both touch and mouse driven selection
    implementation "androidx.recyclerview:recyclerview-selection:1.1.0"

    implementation 'androidx.constraintlayout:constraintlayout:2.1.4'
    implementation 'androidx.lifecycle:lifecycle-livedata-ktx:2.5.0'
    implementation 'androidx.lifecycle:lifecycle-viewmodel-ktx:2.5.0'
    implementation 'androidx.navigation:navigation-fragment-ktx:2.5.0'
    implementation 'androidx.navigation:navigation-ui-ktx:2.5.0'
    testImplementation 'junit:junit:4.13.2'
    androidTestImplementation 'androidx.test.ext:junit:1.1.3'
    androidTestImplementation 'androidx.test.espresso:espresso-core:3.4.0'

    //Retrofit
    implementation 'com.squareup.retrofit2:retrofit:2.9.0'
    implementation 'com.squareup.retrofit2:converter-gson:2.9.0'

//    //Moshi
//    implementation("com.squareup.moshi:moshi:1.13.0")
//    implementation("com.squareup.retrofit2:converter-moshi:2.9.0")
//    kapt "com.squareup.moshi:moshi-kotlin-codegen:1.13.0"

    implementation 'com.github.bumptech.glide:glide:4.12.0'
    implementation 'org.jsoup:jsoup:1.14.1'
    implementation 'com.squareup.picasso:picasso:2.71828'
    implementation 'org.apache.commons:commons-lang3:3.8.1'
    implementation 'org.ocpsoft.prettytime:prettytime:4.0.1.Final'
    implementation "androidx.browser:browser:1.4.0"

    implementation 'androidx.multidex:multidex:2.0.1'
    configurations {
        all*.exclude group: 'com.google.guava', module: 'listenablefuture'
    }

    //Room
    implementation "androidx.room:room-runtime:2.4.2"
    kapt "androidx.room:room-compiler:2.4.2"
    implementation "androidx.room:room-ktx:2.4.2"
    androidTestImplementation "androidx.room:room-testing:2.4.2"




    //Dagger - Hilt
    implementation 'com.google.dagger:hilt-android:2.42'
    kapt 'com.google.dagger:hilt-android-compiler:2.42'


    //SDP & SSP
    implementation 'com.intuit.sdp:sdp-android:1.0.6'
    implementation 'com.intuit.ssp:ssp-android:1.0.6'

    // Shimmer
    implementation 'com.facebook.shimmer:shimmer:0.5.0'

    //firebase & analytics
    implementation platform('com.google.firebase:firebase-bom:28.4.0')
    implementation 'com.google.firebase:firebase-analytics'

    //crashlytics
    implementation 'com.google.firebase:firebase-crashlytics'
    implementation 'com.google.firebase:firebase-analytics'

    // DataStore
    implementation 'androidx.datastore:datastore-preferences:1.0.0'
    implementation("androidx.datastore:datastore-preferences-rxjava3:1.0.0")

    //admob
    implementation 'com.google.android.gms:play-services-ads:21.1.0'
    implementation platform('com.google.firebase:firebase-bom:30.2.0')

    implementation project(':nativetemplates')

    implementation("androidx.ads:ads-identifier:1.0.0-alpha04")

    // Used for the calls to addCallback() in the snippets on this page.
    implementation("com.google.guava:guava:28.0-android")

    implementation 'com.google.firebase:firebase-analytics'
    implementation("androidx.activity:activity-ktx:1.5.0")


}


from How to use the new ComponentActivity with ViewBinding and the other old AppCompatActivity components

Python Streamlit, and yfinance issues

I'll list the two bugs I know of as of now; if you have any recommendations for refactoring my code, let me know as well.

  1. yfinance is not appending the dividendYield to my dict; I did make sure that there is an actual dividend yield for those symbols.


  2. TypeError: can only concatenate str (not "Tag") to str, which I assume has something to do with how it parses the XML; it ran into a tag, so I am not able to create the expander. I thought I could solve it with the following if statement, but instead I just don't get any expander at all:
with st.expander("Expand for stocks news"):
    for heading in fin_headings:
        if heading == str:
            st.markdown("* " + heading)
        else:
            pass
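
(For reference, a sketch of what I suspect the check needs to be, untested against the full app: heading == str compares each heading against the str class itself, which is always False, and the headings returned by BeautifulSoup's findAll are Tag objects rather than strings.)

with st.expander("Expand for stocks news"):
    for heading in fin_headings:
        # bs4 findAll returns Tag objects, not strings; compare types with
        # isinstance and pull the plain text out of each Tag for markdown.
        text = heading if isinstance(heading, str) else heading.text
        st.markdown("* " + text)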

Full code for main.py:

import requests
import spacy
import pandas as pd
import yfinance as yf
import streamlit as st
from bs4 import BeautifulSoup


st.title("Fire stocks :fire:")
nlp = spacy.load("en_core_web_sm")


def extract_rss(rss_link):
    # Parses xml, and extracts the headings.
    headings = []
    response1 = requests.get(
        "http://feeds.marketwatch.com/marketwatch/marketpulse/")
    response2 = requests.get(rss_link)
    parse1 = BeautifulSoup(response1.content, features="xml")
    parse2 = BeautifulSoup(response2.content, features="xml")
    headings1 = parse1.findAll('title')
    headings2 = parse2.findAll('title')
    headings = headings1 + headings2
    return headings


def stock_info(headings):
    # Get the entities from each heading, link them with Nasdaq data if possible, and extract market data with yfinance.
    stock_dict = {
        'Org': [],
        'Symbol': [],
        'currentPrice': [],
        'dayHigh': [],
        'dayLow': [],
        'forwardPE': [],
        'dividendYield': []
    }
    stocks_df = pd.read_csv("./data/nasdaq_screener_1658383327100.csv")
    for title in headings:
        doc = nlp(title.text)
        for ent in doc.ents:
            try:
                if stocks_df['Name'].str.contains(ent.text).sum():
                    symbol = stocks_df[stocks_df['Name'].str.contains(
                        ent.text)]['Symbol'].values[0]
                    org_name = stocks_df[stocks_df['Name'].str.contains(
                        ent.text)]['Name'].values[0]

                    # Recieve info from yfinance
                    stock_info = yf.Ticker(symbol).info
                    print(symbol)
                    stock_dict['Org'].append(org_name)
                    stock_dict['Symbol'].append(symbol)

                    stock_dict['currentPrice'].append(
                        stock_info['currentPrice'])
                    stock_dict['dayHigh'].append(stock_info['dayHigh'])
                    stock_dict['dayLow'].append(stock_info['dayLow'])
                    stock_dict['forwardPE'].append(stock_info['forwardPE'])
                    stock_dict['dividendYield'].append(
                        stock_info['dividendYield'])
                else:
                    # If name can't be found pass.
                    pass
            except:
                # Don't raise an error.
                pass

    output_df = pd.DataFrame.from_dict(stock_dict, orient='index')
    output_df = output_df.transpose()
    return output_df


# Add input field
user_input = st.text_input(
    "Add rss link here", "https://www.investing.com/rss/news.rss")

# Get financial headlines
fin_headings = extract_rss(user_input)

print(fin_headings)
# Output financial info
output_df = stock_info(fin_headings)
output_df.drop_duplicates(inplace=True, subset='Symbol')
st.dataframe(output_df)

with st.expander("Expand for stocks news"):
    for heading in fin_headings:
        if heading == str:
            st.markdown("* " + heading)
        else:
            pass


from Python Streamlit, and yfinance issues

Event listener for network connection change

In JavaScript (on Chrome) I am trying to perform some task whenever a user switches from one WiFi network to another (assuming that both networks are exactly the same in terms of performance).

I started by looking at the online/offline events of the Window interface and navigator.onLine, but it seems that they are not triggered when we switch networks (disconnect from one network and connect to another) because:

In Chrome and Safari, if the browser is not able to connect to a local area network (LAN) or a router, it is offline; all other conditions return true.

you cannot assume that a true value necessarily means that the browser can access the internet. You could be getting false positives, such as in cases where the computer is running a virtualization software that has virtual ethernet adapters that are always "connected."

Ref1: https://developer.mozilla.org/en-US/docs/Web/API/Navigator/onLine

Ref2: why navigator.onLine() return true even if my internet connection is not working?

Also, the navigator.connection object does not necessarily update to trigger the navigator.connection.onchange event when switching networks.

I tried using WebRTC with STUN to capture the public IP address to differentiate between the two connections, but there is no event listener that would reliably signal that a network change has happened.

I understand that JavaScript cannot directly access network info through the browser for security reasons, but is there an alternative that can reliably trigger an event whenever the network is switched, or when there is no actual internet connectivity even though the computer is connected to the LAN/WiFi?



from Event listener for network connection change