Wednesday, 30 June 2021

Using query to fetch large data and display in HTML table with flask in Python

I have a table with 229,000 rows in SQL Server, and I am using a SELECT query together with Flask (in Python) to show the recorded data in HTML. I must show all of the table's records in my HTML table so that the relevant team can see all of its data. My code is shown below. The problem is that even though I use pagination, the table takes a very long time to load and sometimes the browser freezes and stops working. I would appreciate it if anyone could guide me on this.

sql.py

from datetime import datetime 
from flask import Flask , render_template, request
import pyodbc   
import pypyodbc 
import os
from waitress import serve
from flask import render_template, redirect, request    

app = Flask(__name__)
@app.route('/index', methods=['GET', 'POST'])
def ShowResult():
    # creating connection Object which will contain SQL Server Connection    
    connection = pypyodbc.connect('Driver={SQL Server}; Server=Server; Database=DB; UID=UserID; PWD={Password};')

    # Creating cursor
    cursor = connection.cursor()    
    cursor.execute("""select A,B,C,D from TABLE""")    
    Result = cursor.fetchall()  # loads all 229,000 rows into memory at once
    return render_template('index.html', Result=Result)


if __name__ == '__main__':
    serve(app,port=5009)
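
As a point of reference, here is a minimal sketch of server-side paging (not the author's code): instead of fetchall(), a separate /data route could return only the slice DataTables asks for, using SQL Server's OFFSET/FETCH (2012+). The route name and the draw/start/length parameters follow the DataTables server-side protocol; the connection string and column names are copied from the question.

from flask import Flask, jsonify, request
import pypyodbc

app = Flask(__name__)

def get_connection():
    return pypyodbc.connect('Driver={SQL Server}; Server=Server; Database=DB; UID=UserID; PWD={Password};')

@app.route('/data')
def data():
    # parameters sent by DataTables when serverSide is enabled
    draw = int(request.args.get('draw', 1))
    start = int(request.args.get('start', 0))
    length = int(request.args.get('length', 50))

    connection = get_connection()
    cursor = connection.cursor()
    cursor.execute("select count(*) from TABLE")
    total = cursor.fetchone()[0]

    # fetch one page only; OFFSET/FETCH requires an ORDER BY
    cursor.execute(
        "select A, B, C, D from TABLE order by A "
        "offset ? rows fetch next ? rows only", (start, length))
    rows = [list(r) for r in cursor.fetchall()]  # values may need str() depending on column types
    connection.close()

    return jsonify(draw=draw, recordsTotal=total, recordsFiltered=total, data=rows)

On the client side this would pair with serverSide: true and ajax: '/data' in the DataTable options, so the browser never receives all 229,000 rows at once.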

index.html

<body oncontextmenu='return false' class='snippet-body'>
<link rel="stylesheet" href="https://cdn.datatables.net/1.10.2/css/jquery.dataTables.min.css">
<script type="text/javascript" src="https://cdn.datatables.net/1.10.2/js/jquery.dataTables.min.js"></script>
    <div class="container">
        <div class="row header" style="text-align:center;color:green">
            <h3>Bootstrap table with pagination</h3>
        </div>
        <table id="example" class="table table-striped table-bordered" style="width:100%;font-family: tahoma !important;">
          <thead>
                    <tr>
                    <th>A</th>
                    <th>B</th>
                    <th>C</th>
                    <th>D</th>
                    </tr>
                    </thead>
                     
                    <tbody>
                    
                    </tbody>
        </table>
    </div>
                    <script type='text/javascript' src='https://maxcdn.bootstrapcdn.com/bootstrap/3.3.0/js/bootstrap.min.js'></script>
                    <script type='text/javascript'>
                    //$(document).ready(function() {
    //$('#example').DataTable();
    $(document).ready(function() {
        $('#example').DataTable( {
            serverSide: true,
            ordering: false,
            searching: false,
            ajax: function ( Result, callback, settings ) {
                var out = [];
     
                for ( var i=Result.start, ien=Result.start+Result.length ; i<ien ; i++ ) {
                    out.push( [ i+'-1', i+'-2', i+'-3', i+'-4', i+'-5', i+'-6' ] );
                }
     
                setTimeout( function () {
                    callback( {
                        draw: Result.draw,
                        data: out,
                        recordsTotal: 5000000,
                        recordsFiltered: 5000000
                    } );
                }, 50 );
            },
            scrollY: 200,
            scroller: {
                loadingIndicator: true
            },
        } );
    } );</script>
                                    </body>


from Using query to fetch large data and display in HTML table with flask in Python

how to time an entire process from beginning to completion and set up a termination execution time?

I have the following celery chain process:

from datetime import datetime
from celery import chain

@app.task(name='background')
def background_task():
    now = datetime.now()

    ids = [700, 701, 708, 722, 783, 799]
    for id in ids:
        my_process = chain(task1.s(id), task2.s())
        my_process()

    end = datetime.now()
    return ['ENDED IN', (end - now).total_seconds()]

Q1: How can I tell how long this task takes to complete from beginning to end? The result I get (ENDED IN) doesn't reflect reality, because the chains run in parallel, so the reported time is only a fraction of a second.

Q2: Is there any way to set a termination timeout in the event that the entire background_task process takes longer than 25 minutes?
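
For what it's worth, here is a hedged sketch of one way to measure the true end-to-end time (assuming task1 and task2 are the registered tasks above; record_duration is a hypothetical callback): launch the chains inside a chord whose callback computes the elapsed time once every chain has finished. Note that time_limit only bounds background_task itself, which dispatches and returns quickly; bounding the whole pipeline at 25 minutes would need time limits on task1/task2 or an external revoke, so treat that part as an assumption to verify.

from datetime import datetime
from celery import chain, chord

@app.task
def record_duration(results, started_at):
    # runs once, after every chain in the chord has finished
    elapsed = (datetime.now() - datetime.fromisoformat(started_at)).total_seconds()
    return ['ENDED IN', elapsed]

@app.task(name='background', time_limit=25 * 60)  # hard limit on this dispatching task only
def background_task():
    started_at = datetime.now().isoformat()
    ids = [700, 701, 708, 722, 783, 799]
    header = [chain(task1.s(i), task2.s()) for i in ids]
    result = chord(header)(record_duration.s(started_at))
    return result.id  # the duration is reported by the callback, not here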



from how to time an entire process from beginning to completion and set up a termination execution time?

Python Sympy : Check equality or inequality between 2 projected matrices

I am reformulating this post more precisely. I am trying to check the equality or inequality between 2 matrices. Each of these 2 matrices is computed slightly differently.

Actually, this is the computation of a change of parameters between the initial parameters for each row/column and the final parameters of the final matrix. That's why, in both computations, I am using the Jacobian J containing the derivatives between initial and final parameters:

The formula is : F_final = J^T F_initial J

The first matrix has size 31x31 and the second one has size 32x32.

The Jacobian applied on the first matrix is 31x7 and on the second one is 32x8.

My issue is that I want to check whether the 8x8 projected matrix computed by the formula

F_final = J^T F_initial J

is equal to the 7x7 projected matrix, after removing the 3rd row/column of the final 8x8 matrix.

I have coded the following Python Sympy script to check that :

import os, sys
import numpy as np
from sympy import *
from sympy import symbols, Matrix, Symbol, Transpose, eye, solvers


# Big_31 Fisher : symmetric, by inverting (i,j) into min(i,j), max(i,j)
FISH_Big_1_SYM = Matrix([[Symbol(f'sp_{min(i,j)}_{max(j,i)}') for i in range(1, 31+1)] for j in range(1, 31+1)])

# Big_32 Fisher : symmetric, by inverting (i,j) into min(i,j), max(i,j)
FISH_Big_2_SYM = Matrix([[Symbol(f'sp_{min(i,j)}_{max(j,i)}') for i in range(1, 32+1)] for j in range(1, 32+1)])

# Introduce neutrino
NEU_row_SYM = Matrix([[Symbol(f'neu_{min(i,j)}_{max(i,j)}') for i in range(1,2)] for j in range(1, 32+1)])
NEU_col_SYM = Matrix([[Symbol(f'neu_{min(i,j)}_{max(i,j)}') for i in range(1, 32+1)] for j in range(1, 2)])

# Introduce neutrino
J_NEU_row_SYM = Matrix([[Symbol(f'j_neu_{min(i,j)}_{max(i,j)}') for i in range(1, 8+1)] for j in range(1, 32+1)])
J_NEU_col_SYM = Matrix([[Symbol(f'j_neu_{min(i,j)}_{max(i,j)}') for i in range(1, 32+1)] for j in range(1, 8+1)])

# Temporary matrix to build FISH_Big_2_SYM 
mat_temp = Matrix([[Symbol(f'sp_{min(i,j)}_{max(j,i)}') for i in range(1,32+1)] for j in range(1, 32+1)])

mat_temp[0:2,:] = FISH_Big_2_SYM[0:2,:]
mat_temp[:,0:2] = FISH_Big_2_SYM[:,0:2]
mat_temp[2,:] = NEU_col_SYM
mat_temp[:,2] = NEU_row_SYM
mat_temp[3:32,3:32] = FISH_Big_2_SYM[2:31,2:31]

# Copy built FISH_Big_2_SYM
FISH_Big_2_SYM = np.copy(mat_temp)

# Jacobian 1 : not symmetric
J_1_SYM = Matrix([[Symbol(f'j_{min(i,j)}_{max(i,j)}') for i in range(1, 7+1)] for j in range(1, 31+1)])
print('shape_1', np.shape(J_1_SYM))

# Jacobian 2 : not symmetric
J_2_SYM = Matrix([[Symbol(f'j_{min(i,j)}_{max(i,j)}') for i in range(1, 8+1)] for j in range(1, 32+1)])
print('shape_2', np.shape(J_2_SYM))

# Jacobian 2 : row and column supplementary
J_2_row_SYM = Matrix([[Symbol(f'j_{min(i,j)}_{max(i,j)}') for i in range(1,8+1)] for j in range(1, 32+1)])
J_2_col_SYM = Matrix([[Symbol(f'j_{min(i,j)}_{max(i,j)}') for i in range(1, 32+1)] for j in range(1, 8+1)])

# Temporary matrix to build J_2_SYM 
j_temp = np.copy(J_2_SYM)

# Add row/col into J_2_SYM
j_temp = np.insert(j_temp, 2, J_NEU_row_SYM[0,:], axis=0)
j_temp = np.insert(j_temp, 2, J_NEU_col_SYM[:,0], axis=1)

# Copy built J_2_SYM
J_2_SYM = np.copy(j_temp)

print('Big_shape_1', np.shape(FISH_Big_1_SYM))
print('Big_shape_2', np.shape(FISH_Big_2_SYM))

# Projection 31x31
FISH_proj_1 = np.dot(np.dot(J_1_SYM.T,FISH_Big_1_SYM),J_1_SYM)

# Projection 32x32
FISH_proj_2 = np.dot(np.dot(J_2_SYM.T,FISH_Big_2_SYM),J_2_SYM)

# Test equality between 2 matrices
print('compare = ', FISH_proj_1.compare(FISH_proj_2))

But while the first matrix product (FISH_proj_1 = np.dot(np.dot(J_1_SYM.T, FISH_Big_1_SYM), J_1_SYM)) seems to be valid, the second one, FISH_proj_2 = np.dot(np.dot(J_2_SYM.T, FISH_Big_2_SYM), J_2_SYM), generates an error:

shape_1 (31, 7)
shape_2 (32, 8)
Big_shape_1 (31, 31)
Big_shape_2 (32, 32)
Traceback (most recent call last):
  File "demo_projection_sympy.py", line 62, in <module>
    FISH_proj_2 = np.dot(np.dot(J_2_SYM.T,FISH_Big_2_SYM),J_2_SYM)
  File "<__array_function__ internals>", line 6, in dot
ValueError: shapes (9,33) and (32,32) not aligned: 33 (dim 1) != 32 (dim 0)

I have tried many tests but cannot find the error.

Can anyone see at first sight what is wrong? The dimensions seem to be correct.
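
For what it's worth, a minimal sketch (with placeholder zeros standing in for the neutrino row/column) reproduces the shape change: each np.insert call grows the Jacobian by one row or one column, so J_2_SYM ends up 33x9 instead of 32x8, and its transpose (9, 33) cannot multiply the 32x32 matrix, which is exactly what the traceback reports.

import numpy as np
from sympy import Matrix, Symbol

# 32 x 8 Jacobian, as built in the question
J_2_SYM = Matrix([[Symbol(f'j_{min(i, j)}_{max(i, j)}') for i in range(1, 8 + 1)]
                  for j in range(1, 32 + 1)])

j_temp = np.copy(J_2_SYM)                    # (32, 8)
j_temp = np.insert(j_temp, 2, 0, axis=0)     # adds a row    -> (33, 8)
j_temp = np.insert(j_temp, 2, 0, axis=1)     # adds a column -> (33, 9)
print(np.shape(j_temp), np.shape(j_temp.T))  # (33, 9) (9, 33)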



from Python Sympy : Check equality or inequality between 2 projected matrices

How to save items with AsyncStorage in one screen and then display it in a different screen

First screen is made to show the history of the saved items from the second screen.

Second screen is a QR CODE Scanner. Once the QR CODE has been scanned it shows a Modal with the information it got from the QR CODE. It also has a Button to save the information. The information that is saved is retrieved in the first screen to a FlatList.

The problem I have is that the saved information doesn't show up on the first screen. I only get this in the console.log: [Error: [AsyncStorage] Passing null/undefined as value is not supported. If you want to remove value, Use .remove method instead. Passed value: undefined Passed key: @QR]

EDIT: I forgot to write that the AsyncStorage Key is exported on the top of the file so it can be imported in the FIRST SCREEN.

CODE: SECOND SCREEN WHERE I SAVE THE INFORMATION.

const [Link, setLink] = useState([]);

const onSuccess = e => {
    setModalVisible(true);
    console.log(e);
    const QRSave = setLink(e);
    storeQRCode(QRSave);
  };

const storeQRCode = QRSave => {
    const stringifiedQR = JSON.stringify(QRSave);

    AsyncStorage.setItem(asyncStorage, stringifiedQR).catch(err => {
      console.log(err);
    });
  };

<Button
   title="Save QR"
   onPress={() => {
   scanner.reactivate();
   storeQRCode();
   showToast();
 }}
/>

CODE: FIRST SCREEN WHERE THE HISTORY IS SHOWN OF THE SAVED INFORMATION.

const [Data, setData] = useState({});

  useEffect(() => {
    restoreQRCode();
  }, []);

  const restoreQRCode = () => {
    AsyncStorage.getItem(asyncStorage)
      .then(stringifiedQR => {
        const parsedQR = JSON.parse(stringifiedQR);
        console.log(stringifiedQR);
        if (!parsedQR || typeof parsedQR !== 'object') return;

        setData(parsedQR);
      })
      .catch(err => {
        console.log(err);
      });
  };

<FlatList
  data={Data}
  keyExtractor={(_, index) => index.toString()}
  renderItem={renderItem}
 />


from How to save items with AsyncStorage in one screen and then display it in a different screen

A-frame play specific animations from sketch fab

I am creating a scene using A-frame (https://aframe.io).

I am trying to put a gltf model of a crow in my scene from sketchfab.

The model of the crow from Sketchfab has two different animations, however: a moving pose and a static pose. Since the gltf has two different animations built into the model, when I put it into my scene the model isn't animating, because it's in its default static pose.

How can I get the crow gltf model into my scene animated so that it plays the TakeOff animation?

Just for clarification, I am looking for a way to specifically reference TakeOff animation on the gltf model so that instead of the model not animating it should animate the TakeOff animation. The link to the crow gltf model: https://sketchfab.com/3d-models/crow-d5a9b0df4da3493688b63ce42c8a83e2

Code to get the gltf model into my scene:

<script src="https://aframe.io/releases/1.2.0/aframe.min.js"></script>
<a-scene>
  <a-entity gltf-model="https://cdn.glitch.com/a9b3accf-725d-4891-aa13-0786dd661cab%2Fscene%20-%202021-06-25T145033.362.glb?v=1624658038392" position="20 0 -35" rotation="0 90 0" scale="1 1 1" animation-mixer="clip:Take 001; loop:10000000000000000000; timeScale: 1; crossFadeDuration: 1"></a-entity>
</a-scene>


from A-frame play specific animations from sketch fab

What exactly is a "webpack module" in webpack's terminology?

I am a newbie to webpack and am currently trying to understand the basic concepts. On the Concepts page of the official docs, it uses the term "module" and gives a link to read more about modules on the Modules page.

So on this page we have the question "What is a module?", but no explicit answer is given. Rather, it describes modules by how they "express their dependencies":

What is a webpack Module

In contrast to Node.js modules, webpack modules can express their dependencies in a variety of ways. A few examples are:

  • An ES2015 import statement

  • A CommonJS require() statement

  • An AMD define and require statement

  • An @import statement inside of a css/sass/less file.

  • An image url in a stylesheet url(...) or HTML file.

So it doesn't explicitly define what exactly a module is, and I am confused now.

Is a module just a JavaScript file? Or is it any type of file, like .css or images? Or is a module some logical concept not related to physical files at all?



from What exactly is a "webpack module" in webpack's terminology?

Gtag - basic 'purchase' event not firing

I have the following event setup to fire whenever a user on my site makes a successful transaction:

window.gtag("event", "purchase", {
    id: new Date().getTime().toString(),
    value: 59.97, 
     currency: "USD",
});

I picked the event up straight from Google's enhanced ecommerce documentation:

https://developers.google.com/analytics/devguides/collection/gtagjs/enhanced-ecommerce#measure_purchases

I have no idea why it's not picking up my recent sales.

Note: I have enabled enhanced ecommerce in my GA admin.



from Gtag - basic 'purchase' event not firing

cvxpy is solving to produce empty answer

I am working with the following code:

import sys, numpy as np
import cvxpy as cvx

if __name__ == '__main__':
    sims = np.random.randint(20, 30, size=500)
    center = 30
    n = [500, 1]

    # minimize     p'*log(p)
    # subject to
    #              sum(p) = 1
    #              sum(p'*a) = target1

    A = np.mat(np.vstack([np.ones(n[0]), sims]))
    b = np.mat([1.0, center]).T

    x = cvx.Variable(n)
    obj = cvx.Maximize(cvx.sum(cvx.entr(x)))
    constraints = [A @ x == b]
    prob = cvx.Problem(obj, constraints)
    prob.solve()
    weights = np.array(x.value)

Here x.value is empty. I am not sure how to modify my setup above. I am trying to readjust the mean of sims to a different value, defined by the variable center.
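
A hedged diagnostic sketch, not a definitive fix: when CVXPY cannot solve a problem, solve() leaves x.value as None, so prob.status is the first thing to check. Note also that randint(20, 30) draws values of at most 29 and entr restricts the weights to be non-negative with sum 1, so a weighted mean of exactly 30 is unattainable and the equality constraint is infeasible. The sketch below uses a 1-D variable and a target inside the attainable range; both are assumptions to adapt.

import numpy as np
import cvxpy as cvx

sims = np.random.randint(20, 30, size=500)   # values lie in 20..29
center = 25                                  # target inside the attainable range

A = np.vstack([np.ones(sims.size), sims])
b = np.array([1.0, center])

x = cvx.Variable(sims.size)
prob = cvx.Problem(cvx.Maximize(cvx.sum(cvx.entr(x))), [A @ x == b])
prob.solve()

print(prob.status)                           # 'infeasible' would leave x.value as None
weights = np.array(x.value) if x.value is not None else None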



from cvxpy is solving to produce empty answer

Can Mask R-CNN be used to detect specific objects in an EER diagram?

Currently I am training a model to detect binary relationships in an EER diagram using Mask R-CNN, but I am not sure the method which I am following is correct or not.

So what I basically need to do is I need to detect the binary relationships in a given EER diagram like below.

Please note news --> read --> User should also be detected.

Expected output


So, to achieve this, I am currently annotating a dataset like the one below using the VGG Image Annotator. I have trained a model with about 60 annotated images like this; I know 60 is definitely not enough, but to see whether it works even to some extent, I then tried my trained model on some sample images. The output I got is not acceptable, because even though the model detects some of the relationships, it sometimes detects a bunch of entities at once as a single binary relationship. Please refer to the output screenshot. So what I need to know is: is my approach correct or wrong? Can I get my expected results by annotating maybe around 2000 images like the one I annotated below?

Sample annotated image


Actual output DETECTION_MIN_CONFIDENCE = 0.5

[actual output screenshot]



from Can Mask R-CNN be used to detect specific objects in an EER diagram?

Android - Transaction - Task is not yet complete

I found a few examples of "task not yet complete," but have not found any examples for transactions. I am using a transaction because in my application I need the operation to be able to fail if there is no internet connection. I can detect this with a transaction.

I have a Collection with Documents, and I am trying to obtain the names of the documents. Sometimes the code works fine, but the majority of the time I get the "task not yet complete" error. The frustrating thing is that I have a callback for "onComplete", so it's weird that the transaction isn't complete when the callback is... called.

I get the "task not yet complete exception in the onCompleteListener(). What's frustrating is that I even check to ensure if (task.isSuccessful() && task.isComplete()). Do I need to use a continuation? If so, please provide an example - I just don't quite understand it yet.

// Note: states is an ArrayList<String>
//       snapshot is a QuerySnapshot

public void getStatesList(){

    states.clear(); 
    states.add("Select A State");

    db.runTransaction(new Transaction.Function<Void>() {
        @Nullable
        @Override
        public Void apply(@NonNull Transaction transaction) {
            // Collect Snapshot data
            snapshot = db.collection("DATA").get();
            return null;
        }
    }).addOnCompleteListener(new OnCompleteListener<Void>() {
        @Override
        public void onComplete(@NonNull Task<Void> task) {

            if(task.isSuccessful() && task.isComplete()){

                try{
                    for(QueryDocumentSnapshot document : snapshot.getResult()){
                        states.add(document.getId());
                    }
                    sendResponseToActivity("Success", RESULT_OK);
                } catch (Exception e){
                    e.printStackTrace(); // Transaction is not yet complete
                    sendResponseToActivity("Fail", RESULT_OK);
                }
            }
        }
    }).addOnFailureListener(new OnFailureListener() {
        @Override
        public void onFailure(@NonNull Exception e) {
            if(e.getMessage().contains("UNAVAILABLE"))
                sendResponseToActivity("NoInternet", RESULT_OK);
            else
                sendResponseToActivity("Fail", RESULT_OK);
        }
    });

} // End getStatesList()


from Android - Transaction - Task is not yet complete

Android Webview Crash on dropdown Click

The WebView loads but contains a drop-down; when I click on it, the app crashes.

The WebView was working fine until last month, but now it crashes when clicking the dropdown in the WebView, on all devices.
The log is attached below.

Dependencies used are

implementation 'androidx.appcompat:appcompat:1.0.2'
implementation 'androidx.constraintlayout:constraintlayout:1.1.3'
implementation('com.google.android.material:material:1.0.0')



--------- beginning of crash
2021-06-30 16:08:03.423 19438-19438/com.dogmasystems.myrentcarbooking A/libc: Fatal signal 5 (SIGTRAP), code -6 (SI_TKILL) in tid 19438 (yrentcarbooking), pid 19438 (yrentcarbooking)
2021-06-30 16:08:03.541 21456-21456/? E/crash_dump32: unknown process state: t
2021-06-30 16:08:03.611 21456-21456/? A/DEBUG: *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
2021-06-30 16:08:03.612 21456-21456/? A/DEBUG: Build fingerprint: 'samsung/gta3xlwifixx/gta3xlwifi:10/QP1A.190711.020/T510XXS5BUC4:user/release-keys'
2021-06-30 16:08:03.612 21456-21456/? A/DEBUG: Revision: '4'
2021-06-30 16:08:03.612 21456-21456/? A/DEBUG: ABI: 'arm'
2021-06-30 16:08:03.613 21456-21456/? A/DEBUG: Timestamp: 2021-06-30 16:08:03+0530
2021-06-30 16:08:03.613 21456-21456/? A/DEBUG: pid: 19438, tid: 19438, name: yrentcarbooking  >>> com.dogmasystems.myrentcarbooking <<<
2021-06-30 16:08:03.613 21456-21456/? A/DEBUG: uid: 10316
2021-06-30 16:08:03.613 21456-21456/? A/DEBUG: signal 5 (SIGTRAP), code -6 (SI_TKILL), fault addr --------
2021-06-30 16:08:03.614 21456-21456/? A/DEBUG: Abort message: '[FATAL:jni_android.cc(306)] Please include Java exception stack in crash report
    '
2021-06-30 16:08:03.614 21456-21456/? A/DEBUG:     r0  00000000  r1  00000000  r2  00000000  r3  c52df784
2021-06-30 16:08:03.614 21456-21456/? A/DEBUG:     r4  fff713a4  r5  c8934400  r6  fff70f5c  r7  fff70f78
2021-06-30 16:08:03.614 21456-21456/? A/DEBUG:     r8  eabea260  r9  0000004f  r10 fff713ac  r11 fff70f5c
2021-06-30 16:08:03.614 21456-21456/? A/DEBUG:     ip  c88d691c  sp  fff70f48  lr  c6d5c90f  pc  c69045a2
2021-06-30 16:08:03.616 21456-21456/? A/DEBUG: backtrace:
2021-06-30 16:08:03.616 21456-21456/? A/DEBUG:       #00 pc 017e55a2  /data/app/com.google.android.trichromelibrary_443021030-3FjkooxbSPI7iFUAICEe6A==/base.apk!libmonochrome.so (offset 0x645000) (BuildId: ea1d73db0ecf7ba0450e8051b6491bc520fd7df9)
2021-06-30 16:08:04.553 3687-3687/? E//system/bin/tombstoned: Tombstone written to: /data/tombstones/tombstone_08
2021-06-30 16:08:04.578 3534-3534/? E/audit: type=1701 audit(1625049484.575:48919): auid=4294967295 uid=10316 gid=10316 ses=4294967295 subj=u:r:untrusted_app:s0:c60,c257,c512,c768 pid=19438 comm="yrentcarbooking" exe="/system/bin/app_process32" sig=5
2021-06-30 16:08:04.624 21463-21463/? E/Zygote: isWhitelistProcess - Process is Whitelisted
2021-06-30 16:08:04.626 21463-21463/? E/Zygote: accessInfo : 1
2021-06-30 16:08:04.646 21463-21463/? E/ng.android.loo: Not starting debugger since process cannot load the jdwp agent.
2021-06-30 16:08:04.660 3993-4773/? E/InputDispatcher: channel '379536b com.dogmasystems.myrentcarbooking/com.dogmasystems.myrentcarbooking.ui.activities.WebViewActivity (server)' ~ Channel is unrecoverably broken and will be disposed!
2021-06-30 16:08:04.664 3993-4773/? E/InputDispatcher: channel 'b3e576b com.dogmasystems.myrentcarbooking/com.dogmasystems.myrentcarbooking.ui.activities.DashboardActivity (server)' ~ Channel is unrecoverably broken and will be disposed!
2021-06-30 16:08:04.725 3993-4017/? E/WindowManager: RemoteException occurs on reporting focusChanged, w=Window{379536b u0 com.dogmasystems.myrentcarbooking/com.dogmasystems.myrentcarbooking.ui.activities.WebViewActivity EXITING}
    android.os.DeadObjectException
        at android.os.BinderProxy.transactNative(Native Method)
        at android.os.BinderProxy.transact(BinderProxy.java:575)
        at android.view.IWindow$Stub$Proxy.windowFocusChanged(IWindow.java:829)
        at com.android.server.wm.WindowState.reportFocusChangedSerialized(WindowState.java:3691)
        at com.android.server.wm.WindowManagerService$H.handleMessage(WindowManagerService.java:5262)
        at android.os.Handler.dispatchMessage(Handler.java:107)
        at android.os.Looper.loop(Looper.java:237)
        at android.os.HandlerThread.run(HandlerThread.java:67)
        at com.android.server.ServiceThread.run(ServiceThread.java:44)


from Android Webview Crash on dropdown Click

Setting up Detox with Expo on Android

I'm trying to set up Detox with Expo on an Android emulator (Genymotion), but I get an error that I can't get past.

I've installed the necessary packages :

  • Detox
  • detox-expo-helpers
  • expo-detox-hook

Downloaded the Exponent.apk on the official expo site

set up my package.json :

"detox": {
    "test-runner": "jest",
    "configurations": {
      "android": {
        "binaryPath": "bin/Exponent.apk",
        "build": "npm run android",
        "type": "android.attached",
        "device": {
          "adbName": "192.168.58.101:5555"
        }
      }
    }
  }

Set up the config.json on the e2e folder :

{
    "setupFilesAfterEnv": ["./init.ts"],
    "testEnvironment": "node",
    "reporters": ["detox/runners/jest/streamlineReporter"],
    "verbose": true
}

Set up my init.ts file :

import {cleanup, init} from "detox";
import * as adapter from "detox/runners/jest/adapter";

const config = require("../package.json").detox;

jest.setTimeout(120000);
jasmine.getEnv().addReporter(adapter);

beforeAll(async () => {
    await init(config);
});

beforeEach(async () => {
    await adapter.beforeEach();
});

afterAll(async () => {
    await adapter.afterAll();
    await cleanup();
});

When I run the tests with detox test, I get the following error:

Error: '.../androidTest/Exponent/Exponent-androidTest.apk' could not be found, did you run './gradlew assembleAndroidTest' ?

How is this androidTest file generated with Expo? Did I do something wrong?

EDIT :

I've also tried to use the .sh script to fetch the Exponent.apk file:

#!/bin/bash -e

# query expo.io to find most recent ipaUrl
IPA_URL=`curl https://expo.io/--/api/v2/versions |  python -c 'import sys, json; print json.load(sys.stdin)["androidUrl"]'`

# download tar.gz
TMP_PATH=bin/Exponent.apk
wget -O $TMP_PATH $IPA_URL


from Setting up Detox with Expo on Android

Proguard causing runtime exception with Android Navigation Component

I'm experiencing this crash when using ProGuard after integrating the Navigation Component (android.arch.navigation:navigation-fragment-ktx:1.0.0-alpha01) into my project, with a target and compile SDK of 27.

    2018-05-16 12:13:14.044 24573-24573/com.mypackage.myapp.x E/AndroidRuntime: FATAL EXCEPTION: main
    Process: com.mypackage.myapp.x, PID: 24573
    java.lang.RuntimeException: Unable to start activity ComponentInfo{com.mypackage.myapp.x/com.mypackage.myapp.MainActivity}: android.view.InflateException: Binary XML file line #16: Binary XML file line #16: Error inflating class fragment
        at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2925)
        at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:3060)
        at android.app.servertransaction.LaunchActivityItem.execute(LaunchActivityItem.java:78)
        at android.app.servertransaction.TransactionExecutor.executeCallbacks(TransactionExecutor.java:110)
        at android.app.servertransaction.TransactionExecutor.execute(TransactionExecutor.java:70)
        at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1800)
        at android.os.Handler.dispatchMessage(Handler.java:106)
        at android.os.Looper.loop(Looper.java:164)
        at android.app.ActivityThread.main(ActivityThread.java:6649)
        at java.lang.reflect.Method.invoke(Native Method)
        at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:493)
        at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:826)
     Caused by: android.view.InflateException: Binary XML file line #16: Binary XML file line #16: Error inflating class fragment
     Caused by: android.view.InflateException: Binary XML file line #16: Error inflating class fragment
     Caused by: java.lang.RuntimeException: Exception inflating com.mypackage.myapp.x:navigation/nav_graph line 7
        at androidx.navigation.j.a(Unknown Source:124)
        at androidx.navigation.d.a(Unknown Source:4)
        at androidx.navigation.fragment.NavHostFragment.a(Unknown Source:88)
        at android.support.v4.app.Fragment.l(Unknown Source:15)
        at android.support.v4.app.m.a(Unknown Source:369)
        at android.support.v4.app.m.b(Unknown Source:7)
        at android.support.v4.app.m.a(Unknown Source:74)
        at android.support.v4.app.m.onCreateView(Unknown Source:216)
        at android.support.v4.app.j.a(Unknown Source:4)
        at android.support.v4.app.h.a(Unknown Source:2)
        at android.support.v4.app.d.onCreateView(Unknown Source:0)
        at android.support.v4.app.h.onCreateView(Unknown Source:0)
        at android.view.LayoutInflater.createViewFromTag(LayoutInflater.java:780)
        at android.view.LayoutInflater.createViewFromTag(LayoutInflater.java:730)
        at android.view.LayoutInflater.rInflate(LayoutInflater.java:863)
        at android.view.LayoutInflater.rInflateChildren(LayoutInflater.java:824)
        at android.view.LayoutInflater.rInflate(LayoutInflater.java:866)
        at android.view.LayoutInflater.rInflateChildren(LayoutInflater.java:824)
        at android.view.LayoutInflater.inflate(LayoutInflater.java:515)
        at android.view.LayoutInflater.inflate(LayoutInflater.java:423)
        at android.view.LayoutInflater.inflate(LayoutInflater.java:374)
        at android.support.v7.app.AppCompatDelegateImplV9.b(Unknown Source:23)
        at android.support.v7.app.d.setContentView(Unknown Source:4)
        at com.mypackage.myapp.MainActivity.onCreate(Unknown Source:12)
        at android.app.Activity.performCreate(Activity.java:7130)
        at android.app.Activity.performCreate(Activity.java:7121)
        at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1262)
        at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2905)
        at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:3060)
        at android.app.servertransaction.LaunchActivityItem.execute(LaunchActivityItem.java:78)
        at android.app.servertransaction.TransactionExecutor.executeCallbacks(TransactionExecutor.java:110)
        at android.app.servertransaction.TransactionExecutor.execute(TransactionExecutor.java:70)
        at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1800)
        at android.os.Handler.dispatchMessage(Handler.java:106)
        at android.os.Looper.loop(Looper.java:164)
        at android.app.ActivityThread.main(ActivityThread.java:6649)
        at java.lang.reflect.Method.invoke(Native Method)
    2018-05-16 12:13:14.044 24573-24573/com.mypackage.myapp.x E/AndroidRuntime:     at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:493)
        at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:826)
     Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: com.mypackage.myapp.fragments.MainFragment
        at androidx.navigation.fragment.b$a.a(Unknown Source:58)
        at androidx.navigation.fragment.b$a.a(Unknown Source:19)
        at androidx.navigation.j.a(Unknown Source:16)
        at androidx.navigation.j.a(Unknown Source:133)
        at androidx.navigation.j.a(Unknown Source:31)
            ... 38 more
     Caused by: java.lang.ClassNotFoundException: com.mypackage.myapp.fragments.MainFragment
        at java.lang.Class.classForName(Native Method)
        at java.lang.Class.forName(Class.java:453)
        at androidx.navigation.fragment.b$a.a(Unknown Source:45)
            ... 42 more
     Caused by: java.lang.ClassNotFoundException: Didn't find class "com.mypackage.myapp.fragments.MainFragment" on path: DexPathList[[zip file "/system/framework/org.apache.http.legacy.boot.jar", zip file "/data/app/com.mypackage.myapp.x-ysts055HQTtJTv5J2uej3g==/base.apk"],nativeLibraryDirectories=[/data/app/com.mypackage.myapp.x-ysts055HQTtJTv5J2uej3g==/lib/x86, /system/lib]]
        at dalvik.system.BaseDexClassLoader.findClass(BaseDexClassLoader.java:125)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:379)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:312)
            ... 45 more

It might be because AAPT is not yet producing keep rules for the navigation component?



from Proguard causing runtime exception with Android Navigation Component

How to update HTML in inactive chrome tab with extension

I'm currently trying to make a Chrome extension, and I've run into a problem where I don't know how to update HTML from an inactive tab. I'm trying to periodically access the HTML of a Google Meet in an inactive tab, to detect when people leave or join the call. However, document.querySelector only works when the tab is focused; if it's not, it will just keep giving the same info from the last time it was focused, even if people have left or joined the call since then. Is there any way to detect these changes without having to focus the tab? Here is what I've tried in my code:

background.js

meetTab = []
// check for google meet tabs
function query()
{
    meetTab = []
    chrome.tabs.query({url: "https://meet.google.com/*-*"},function(tabs)
    {
        tabs.forEach(function(tab)
        {
            meetTab.push(tab)
        });
    })
}
chrome.tabs.onCreated.addListener(query)
chrome.tabs.onRemoved.addListener(query)
chrome.runtime.onStartup.addListener(query)
setInterval(send, 3000)
// execute script for each google meet tab every 3 sec
function send()
{

    meetTab.forEach(function(tab)
    {
        chrome.tabs.executeScript(tab.id, {file: "check.js"})
    })
}
// this part only prints updated info when meet tab is active
chrome.runtime.onMessage.addListener(function(message, sender, sendResponse) {
    console.log(message)
});

check.js

// element to be tracked
thing = document.querySelector("#ow3 > div.T4LgNb > div > div:nth-child(9) > div.crqnQb > div.rG0ybd.xPh1xb.P9KVBf.LCXT6 > div.TqwH9c > div.SZfyod > div > div > div:nth-child(2) > div > div")
chrome.runtime.sendMessage({msg:thing.innerText})

manifest.json

{
  "manifest_version": 2,
  "name": "Meet Kicker",
  "version": "1.0",
  "icons": {"128": "bigicon.png"},
  "browser_action": {
    "default_icon": "smallicon.png",
    "default_popup": "popup.html"
  },
  "permissions": ["storage", "tabs", "<all_urls>"],

  "background":
  {
    "scripts": [
      "background.js"
    ]
  }
}

Updated check.js trying to use MutationObserver

thing = document.querySelector("#ow3 > div.T4LgNb > div > div:nth-child(9) > div.crqnQb > div.rG0ybd.xPh1xb.P9KVBf.LCXT6 > div.TqwH9c > div.SZfyod > div > div > div:nth-child(2) > div > div")

var observer = new MutationObserver(function(mutations) {
  mutations.forEach(function(mutation) {
            chrome.runtime.sendMessage({msg:thing.innerText})

      });
    });

observer.observe(thing, { characterData: true, attributes: false, childList: false, subtree: true });


from How to update HTML in inactive chrome tab with extension

How to split the Cora dataset to train a GCN model only on training part?

I am training a GCN (Graph Convolutional Network) on Cora dataset.

The Cora dataset has the following attributes:

Number of graphs: 1
Number of features: 1433
Number of classes: 7
Number of nodes: 2708
Number of edges: 10556
Number of training nodes: 140
Training node label rate: 0.05
Is undirected: True

Data(edge_index=[2, 10556], test_mask=[2708], train_mask=[2708], val_mask=[2708], x=[2708, 1433], y=[2708])

Since my code is very long, I only put the relevant parts of my code here. First, I split the Cora dataset as follows:

def to_mask(index, size):
    mask = torch.zeros(size, dtype=torch.bool)
    mask[index] = 1
    return mask

def cora_splits(data, num_classes):
    indices = []

    for i in range(num_classes):
        # returns all indices of the elements = i from data.y tensor
        index = (data.y == i).nonzero().view(-1)

        # returns a random permutation of integers from 0 to index.size(0).
        index = index[torch.randperm(index.size(0))]

        # indices is a list of tensors and it has a length of 7
        indices.append(index)

    # select 20 nodes from each class for training
    train_index = torch.cat([i[:20] for i in indices], dim=0)

    rest_index = torch.cat([i[20:] for i in indices], dim=0)
    rest_index = rest_index[torch.randperm(len(rest_index))]

    data.train_mask = to_mask(train_index, size=data.num_nodes)
    data.val_mask = to_mask(rest_index[:500], size=data.num_nodes)
    data.test_mask = to_mask(rest_index[500:], size=data.num_nodes)

    return data

The training code is as follows (taken from here with a few modifications):


def train(model, optimizer, data, epoch):
    t = time.time()
    model.train()
    optimizer.zero_grad()
    output = model(data)
    loss_train = F.nll_loss(output[data.train_mask], data.y[data.train_mask])
    acc_train = accuracy(output[data.train_mask], data.y[data.train_mask])
    loss_train.backward()
    optimizer.step()

    loss_val = F.nll_loss(output[data.val_mask], data.y[data.val_mask])
    acc_val = accuracy(output[data.val_mask], data.y[data.val_mask])

def accuracy(output, labels):
    preds = output.max(1)[1].type_as(labels)
    correct = preds.eq(labels).double()
    correct = correct.sum()
    return correct / len(labels)

When I ran my code for 200 epochs in each of 10 runs, I got:

tensor([0.7690, 0.8030, 0.8530, 0.8760, 0.8600, 0.8550, 0.8850, 0.8580, 0.8940, 0.8830])

Val Loss: 0.5974, Test Accuracy: 0.854 ± 0.039

where each value in the tensor is the accuracy of one run, and the mean accuracy over all 10 runs is 0.854 with a standard deviation of ± 0.039.

As can be observed, the accuracy increases substantially from the first run to the 10th one. Therefore, I think the model is overfitting. One reason for overfitting is that the test data has been seen by the model at training time: in the train function there is a line output = model(data), so the model is run over the whole data. What I intend to do is to train my model only on a part of the data (something similar to data[data.train_mask]), but the problem is that I cannot pass data[data.train_mask], due to the forward function of the GCN model (from this repository):

def forward(self, data):
        x, edge_index = data.x, data.edge_index
        x = F.relu(self.conv1(x, edge_index))
        for conv in self.convs:
            x = F.relu(conv(x, edge_index))
        x = F.relu(self.lin1(x))
        x = F.dropout(x, p=0.5, training=self.training)
        x = self.lin2(x)
        return F.log_softmax(x, dim=-1)

If I pass data[data.train_mask] to the GCN model, then in the forward function above, at the line x, edge_index = data.x, data.edge_index, x and edge_index cannot be retrieved from data[data.train_mask]. Therefore, I need to find a way to split the Cora dataset so that I can pass a specific part of it, with its nodes, edge index and other attributes, to the model. My question is how to do this.

Also, any suggestion about k-fold cross validation is much appreciated.
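
One possible direction, sketched under the assumption that data is a torch_geometric.data.Data object: torch_geometric.utils.subgraph can restrict edge_index to the training nodes (relabelling them), so a smaller Data object containing only the training part can be passed to forward. Whether training only on this subgraph is actually desirable is a separate modelling question, since it discards edges between training and non-training nodes.

import torch
from torch_geometric.data import Data
from torch_geometric.utils import subgraph

def training_subgraph(data):
    # indices of the training nodes
    train_idx = data.train_mask.nonzero(as_tuple=False).view(-1)
    # keep only edges whose two endpoints are training nodes, relabelled to 0..N_train-1
    edge_index, _ = subgraph(train_idx, data.edge_index,
                             relabel_nodes=True, num_nodes=data.num_nodes)
    return Data(x=data.x[train_idx], y=data.y[train_idx], edge_index=edge_index)

# usage sketch: output = model(training_subgraph(data))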



from How to split the Cora dataset to train a GCN model only on training part?

read all data from two specific column in Realm Database (Android)

I am using the code below to return all of the columns from the Realm database, but I want to fetch only the First Name & Age column data from the same table. Kindly help me achieve this. Thanks in advance.

RealmResults<Data>data1=realm2.where(Data.class).findAll();
        for (Data DatamNew :data1)
        {
            arrayList.add(DatamNew);
        }


from read all data from two specific column in Realm Database (Android)

Problem using Flask 2.0.0 with jython gradle plugin

I am new to Android development, but not to programming. I am an expert with Python, but I can't manage to use Java the way I want, so I used the jython gradle plugin. In my main.py, I have the following code:

from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "hello world"

if __name__ == "__main__":
    app.run(debug=False, port=8118)

Then in my Java code, I create a WebView and load the URL "http://localhost:8118", but it throws the error net::ERR_CONNECTION_REFUSED. I think the jython plugin fails to create the Flask app, run it on the Android phone, and make it accessible to the WebView. It could be some other problem too. In my build.gradle (Project):

task testJython(type: jython.JythonTask) {
    jython {
        pypackage 'Flask:2.0.0'
    }
    script file("C:/Users/username/AndroidStudioProjects/my-app-name/app/src/main/assets/main.py")
}

My main.py is in the assets folder. How to solve this problem?



from Problem using Flask 2.0.0 with jython gradle plugin

jQuery: access binary AJAX response inside complete function or access XHR object outside of callback functions?

There are many questions on handling binary AJAX responses with jQuery like this, this, and this. None help.

Goals: (1) dynamically determine if a response contains binary data, and handle it differently from text; (2) analyze response headers; and (3) do both from a single function.

Ideally, this can be done from the complete function, but the complete function doesn't have access to the XHR object. jqXHR is supposed to be a superset of the XHR object, but XHR.response is blank.

The return value of $.ajax(settings) contains the binary data, but the XHR object is no longer available -- so it seems not possible to analyze the response headers.

Is it possible to access the binary AJAX response inside the complete callback, or to access the XHR object outside of the callback functions?

// Assume @data contains body and URL values.

let settings = {
    url: data.url,
    method: "post",
    timeout: 0,
    contentType: false,
    processData: false,
    data: data.body,
    xhr: function() {
                // Create XMLHttpRequest object.
                let xhr = new XMLHttpRequest();

                // Handle event for when response headers arrive.
                xhr.onreadystatechange = function() {
                    if (xhr.readyState == 2) {
                        if (xhr.status == 200) {
                            xhr.responseType = 'blob';
                        } else {
                            xhr.responseType = 'text';
                        }
                    }
                };

                return xhr;
    },
    complete: function(xhr, status, error) {
        // Can access response headers but not binary response here.
    }
};

let response = await $.ajax(settings);
// @response is blob but cannot access XHR object here.


from jQuery: access binary AJAX response inside complete function or access XHR object outside of callback functions?

Remove random parts of an object (Chaos Monkey Style)

I have a JavaScript object e.g.:

const testFixture = {
  a: [
    {b:1},
    {b:2},
    {b:3},
  ],
  b: {c: {d: 44, e: "foo", f: [1,2,3]}},
  c: 3,
  d: false,
  f: "Blah",
}

I'd like to have a function I could pass this object to that would mutate it to remove random properties from it, so that I can test whether the thing that uses this object displays an error state, rather than silently erroring.


Edit:

To be clear, I mean any deeply nested property. e.g. it might remove a.b.c.d.e.f[1] or a[2].b


Edit 2:

Here's a buggy solution I'm working on based on ideas from Eureka and mkaatman's answers.

It seems to be changing key names to "undefined" which I wasn't expecting. It's also changing numbers to {} which I wasn't expecting. Not sure why.

var testFixture2 = {
  a: [{
      b: 1, c: 2
    },
    {
      b: 2, c: 2
    },
    {
      b: 3, c: 2, d: "bar"
    },
  ],
  b: {
    c: {
      d: 44,
      e: "foo",
      f: [1, 2, 3]
    }
  },
  c: 3,
  d: false,
  f: "Blah"
};


function getRandomIndex(max) {
  return Math.floor(Math.random() * max);
}

function chaosMonkey(thing) {
  if (typeof thing === "object") {
    console.log("object", Object.keys(thing).length, thing);
    const newlyDeformedObject = { ...thing};
    // Make a list of all the keys
    const keys = Object.keys(thing);
    // Choose one at random
    const iKey = getRandomIndex(keys.length);
    let target = newlyDeformedObject[keys[iKey]];
  
    const shouldDelete = getRandomIndex(3) === 0;
    if (shouldDelete) {
      delete target;
      console.log("Object deleted", keys[iKey]);
    } else {
     console.log("+++ Going deeper", thing);
      newlyDeformedObject[keys[iKey]] = chaosMonkey({ ...newlyDeformedObject[keys[iKey]] });
    }
    return newlyDeformedObject;
  } else if (typeof thing === "array") {
    console.log(array);
    const iKey = getRandomIndex(thing.length);
    const shouldDelete = getRandomIndex(3) === 0;
    if (shouldDelete) {
      delete array[iKey];
      console.log("Array deleted", iKey);
    } else {
      array[iKey] = chaosMonkey(array[iKey]);
      return array;
    }
  } else {
    //@todo do something bad based on type e.g. number -> NaN, string -> '', but these are less likely to break something
    delete thing;
    return;
  }
}

console.log(JSON.stringify(chaosMonkey(testFixture2), null, 2));

NB: the chances of any object key or array item being recursed into are equal, in order to make modifications equally likely anywhere in the object.


Edit 3:

Additional Requirement:

  • It MUST always remove at least one thing.

Bonus points for:

  1. ways to control the number of things that get deleted

  2. any way to limit which properties get deleted or recursed into. i.e. allow/deny lists, where:

    • allowRemovalList = properties that it's ok to remove
    • denyRemovalList = properties that it's not ok to remove

(It could be that you have some properties that it's ok to remove entirely, but they should not be recursed into and inner parts of them removed.)

NB: Originally I asked for whitelist/blacklist but this caused confusion (and I wouldn't want anyone copying this code to be surprised when they use it) and some answers have implemented it so that blacklist = properties to always remove. I won't penalise any answer for that (and it's trivial to change anyway).



from Remove random parts of an object (Chaos Monkey Style)

Django ReactJS: Pass array from JavaScript frontend to Django Python backend

I've made a website with Django and React JS. I'm trying to pass an array pict from my JavaScript frontend to Django, the Python backend.

let pict = [];

pictureEl.addEventListener(`click`, function () {
  console.log(`Take Pic`);
  pict += webcam.snap();
  console.log(pict);
});

pict is an array of images taken by the camera. How would I pass it to the backend?
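
A minimal sketch of one common approach, with hypothetical names (the /api/pictures/ URL and the save_pictures view are assumptions, not existing code): POST the array as JSON from the frontend, e.g. fetch('/api/pictures/', {method: 'POST', body: JSON.stringify({pict})}), and read it in a Django view with json.loads(request.body).

import json
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt

@csrf_exempt  # for a quick test only; wire up proper CSRF handling in production
def save_pictures(request):
    # expects JSON like {"pict": ["data:image/png;base64,...", ...]}
    payload = json.loads(request.body)
    pict = payload.get("pict", [])
    # ... store or process the images here ...
    return JsonResponse({"received": len(pict)})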



from Django ReactJS: Pass array from JavaScript frontend to Django Python backend

How to implement text wrapping in svgwrite?

I'm using svgwrite in Python to generate output based on my TensorFlow model, to produce simulated handwritten text. My current setup requires an array of strings to represent line breaks; however, the generated text size isn't consistent and it sometimes renders awkward spacing after the last word in a line, such as this

Is it possible to add text wrapping to a single long line, so that line breaks are added automatically when the current line reaches the given maximum width? A Google search brought me to the svgwrite page, which suggested using TextArea, but the examples given are HTML-based.

def _draw(self, strokes, lines, filename, stroke_colors=None,
          stroke_widths=None, background_color='white'):

    lines = [
        "Lorem ipsum dolor sit amet, consectetur adipiscing elit,",
        "sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.",
        "Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris",
        "nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in",
        "reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur."
    ]

    stroke_colors = stroke_colors or ['black']*len(lines)
    stroke_widths = stroke_widths or [2]*len(lines)

    line_height = 35
    view_width = 152.4
    view_height = 101.6

    dwg = svgwrite.Drawing(filename=filename)
    dwg.viewbox(width=view_width, height=view_height)
    dwg.add(dwg.rect(insert=(0, 0), size=('153mm', '102mm'), fill=background_color))

    for i in range(3):
            
        
        initial_coord = np.array([30,-((i*450)+25)])
        strokesc = self._sample(lines, [1 for i in lines], [7 for i in lines]);
        
        for offsets, line, color, width in zip(strokesc, lines, stroke_colors, stroke_widths):

            if not line:
                initial_coord[1] -= line_height
                continue
            offsets[:, :2] *= random.randint(150, 190)/100
            strokesc = drawing.offsets_to_coords(offsets)
            strokesc = drawing.denoise(strokesc)
            strokesc[:, :2] = drawing.align(strokesc[:, :2])

            strokesc[:, 1] *= -1
            strokesc[:, :2] -= strokesc[:, :2].min() + initial_coord

            prev_eos = 1.0
            p = "M{},{} ".format(0, 0)
            for x, y, eos in zip(*strokesc.T):
                p += '{}{},{} '.format('M' if prev_eos == 1.0 else 'L', x, y)
                prev_eos = eos
            path = svgwrite.path.Path(p)
            path = path.stroke(color=color, width=width, linecap='round').fill("none")
            dwg.add(path)

            initial_coord[1] -= line_height

    dwg.save()

This is my current solution in Python, which outputs the example shown above.
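
One low-tech option, sketched under the assumption that an approximate characters-per-line budget is acceptable: pre-wrap a single long string with Python's textwrap module and pass the resulting list in place of the hard-coded lines. This wraps on character count rather than the true rendered stroke width, so max_chars would need tuning against the handwriting model's average glyph width.

import textwrap

def wrap_text(text, max_chars=60):
    # split one long string into lines of at most max_chars characters,
    # never breaking inside a word
    return textwrap.wrap(text, width=max_chars, break_long_words=False)

long_line = ("Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do "
             "eiusmod tempor incididunt ut labore et dolore magna aliqua.")
lines = wrap_text(long_line)   # feed this list to _draw instead of the fixed array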



from How to implement text wrapping in svgwrite?

Why does using merge function in two different dataframes results me more rows?

I have two dataframes with shapes (4000, 3) and (2000, 3), with the below info and columns:

df1:

imo  speed  length
1    1      4
1    2      4
2    10     10
2    12     10

df2:

imo  dwt  name
1    52   test1
2    62   test2
3    785  test3
4    353  test4

I would like to add the dwt column of df2 to df1 based on the same imo.

imo  speed  length  dwt
1    1      4       52
1    2      4       52
2    10     10      62
2    12     10      62

But when I try pd.merge(df1, df2, on='imo', how='inner'), the result has many more rows than the original shape of df1. How is that possible?
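
A hedged diagnostic sketch: a merge produces one output row per matching pair of keys, so if imo is duplicated in df2 (something worth checking, since the small sample shown has unique values), every matching row of df1 is repeated once per duplicate. Checking for duplicates and dropping them before merging preserves df1's row count.

import pandas as pd

df1 = pd.DataFrame({'imo': [1, 1, 2, 2], 'speed': [1, 2, 10, 12], 'length': [4, 4, 10, 10]})
df2 = pd.DataFrame({'imo': [1, 2, 3, 4], 'dwt': [52, 62, 785, 353],
                    'name': ['test1', 'test2', 'test3', 'test4']})

print(df2['imo'].duplicated().sum())            # > 0 would explain the extra rows
merged = df1.merge(df2[['imo', 'dwt']].drop_duplicates('imo'), on='imo', how='left')
print(merged.shape)                             # same number of rows as df1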



from Why does using merge function in two different dataframes results me more rows?

pandas- kernel restarting: the kernel for .ipynb appears to have died. it will restart automatically

Update

I ran a Docker container for a Jupyter notebook; however, when running a pandas-based block, after a few seconds the system returns:

kernel restarting: the kernel for .ipynb appears to have died. it will restart automatically.

With just the option of restarting the kernel.

Here's the block of code where the message arises:

import pandas as pd


def remove_typos(string):
    
    string=str(string)
    string=str(string).replace('≤', '')
    string=str(string).replace('+', '')
    
    # if "%" detected then convert to numeric format
    if "%" in string: 
        string=string.replace('%', '')
        string=float(string)/100
        
    else:
        pass
        
    return string


data = {k: v.replace([r'\+', '≤'], '', regex=True) for k, v in data.items()}
data = {k: v.applymap(remove_typos) for k, v in data.items()}

What I have already tried:

  1. Running pip install pandas in the container CLI, which returns the following message:

     [screenshot of the pip install output]

  2. Giving more local memory to the container:

     [screenshot of the container memory settings]

  3. Updating conda and reinstalling all packages from the Anaconda prompt:
# conda config --set quiet True
# conda update --force conda

#conda install pandas

In all cases, the outcome was the same.

Additional notes:

  • total processor utilization reaches 100%
  • function is applied over 10,000+ cells

Are there any other options to overcome this issue?

data demo

  • The original df keeps the same format but is much bigger in size.
data = {'dataframe_1':pd.DataFrame({'col1': ['John', 'Ashley'], 'col2': ['+10', '-1']}), 'dataframe_2':pd.DataFrame({'col3': ['Italy', 'Brazil', 'Japan'], 'col4': ['Milan', 'Rio do Jaineiro', 'Tokio'], 'percentage':['+95%', '≤0%', '80%+']})}

session info

{'commit_hash': '2486838d9',
 'commit_source': 'installation',
 'default_encoding': 'UTF-8',
 'ipython_path': '/usr/local/lib/python3.6/site-packages/IPython',
 'ipython_version': '7.16.1',
 'os_name': 'posix',
 'platform': 'Linux-5.10.25-linuxkit-x86_64-with-debian-10.9',
 'sys_executable': '/usr/local/bin/python',
 'sys_platform': 'linux',
 'sys_version': '3.6.13 (default, May 12 2021, 16:40:31) \n[GCC 8.3.0]'}


from pandas- kernel restarting: the kernel for .ipynb appears to have died. it will restart automatically

Tuesday, 29 June 2021

A-frame multiple animations with camera

I have some code for a camera using A-frame (https://aframe.io) and I'm wondering how I can add multiple sequential animations. I would like it so that when my first animation is finished, a second animation triggers and the camera moves 5 units to the left. How can this be done? My current code:

  <a-entity id="rig" position="0 1.6 0"  animation="property: position; delay: 2000; dur: 7000; easing: linear; to: 0 1.6 -25" >
  <a-entity id="camera" wasd-controls camera look-controls></a-entity>
</a-entity>


from A-frame multiple animations with camera

Error on android device when using firebase unity sdk firestore package

I'm trying to use the Firebase Unity SDK in my Android app, specifically the FirebaseFirestore.unitypackage. I can get everything working when running my app directly through Unity, but when I do an Android build and deploy to my actual device or an emulator, I get this error.

java.lang.ClassNotFoundException: Didn't find class "com/google/firebase/firestore/internal/cpp/QueryEventListener" on path: DexPathList[[zip file "/data/user/0/my.package.name/cache/firestore_resources_lib.jar"],nativeLibraryDirectories=[/vendor/lib, /system/lib]]

and then later down the log it says something similar...

E/firebase: Java class com/google/firebase/firestore/internal/cpp/QueryEventListener not found.  Please verify the AAR which contains the com/google/firebase/firestore/internal/cpp/QueryEventListener class is included in your app

I've attempted to "Build and Run" to an emulator and an actual device directly from unity. I've also attempted to export the project from unity and import it into android studio and create the apk that way, but it ends with same result.

I'm using unity 2020.3.12f1. I'm using the Android SDK Tools installed with Unity

C:\Program Files\Unity\2020.3.12f1\Editor\Data\PlaybackEngines\AndroidPlayer\SDK

I am targeting Android 9.0, API Level 28.

I'm really not sure what is going on. I'm using another unity firebase package (the auth package) without any problems. But the second I try to use Firestore I start getting this error.

Any help would be appreciated I've been staring at this for days.

Thank you



from Error on android device when using firebase unity sdk firestore package

Why does Tensorflow Bernoulli distribution always return 0?

I am working on classifying texts based on word occurrences. One of the steps is to estimate the probability of a particular text for each possible class. To do this, I am given NSAMPLES texts from a vocabulary of NFEATURES words, each labelled with one of NLABELS class labels. From this, I construct a binary occurrence matrix where entry (sample, feature) is 1 iff text "sample" contains the word encoded by "feature".

From the occurrence matrix, we can construct a matrix of conditional probabilities and then smooth it so the probabilities are neither 0.0 nor 1.0, using the following code (copied from a Coursera notebook):

def laplace_smoothing(labels, binary_data, n_classes):
    # Compute the parameter estimates (adjusted fraction of documents in class that contain word)
    n_words = binary_data.shape[1]
    alpha = 1 # parameters for Laplace smoothing
    theta = np.zeros([n_classes, n_words]) # stores parameter values - prob. word given class
    for c_k in range(n_classes): # 0, 1, ..., 19
        class_mask = (labels == c_k)
        N = class_mask.sum() # number of articles in class
        theta[c_k, :] = (binary_data[class_mask, :].sum(axis=0) + alpha)/(N + alpha*2)
    return theta

To see the problem, here is code to mock up inputs and call for the result:

import tensorflow_probability as tfp
tfd = tfp.distributions

NSAMPLES = 2000   # Size of corpus
NFEATURES = 10000 # Number of words in corpus
NLABELS = 10      # Number of classes
ONE_PROB = 0.02   # Probability that binary_datum will be 1

def mock_binary_data( nsamples, nfeatures, one_prob ):
    binary_data = ( np.random.uniform( 0, 1, ( nsamples, nfeatures ) ) < one_prob ).astype( 'int32' )
    return binary_data

def mock_labels( nsamples, nlabels ):
    labels = np.random.randint( 0, nlabels, nsamples )
    return labels

binary_data = mock_binary_data( NSAMPLES, NFEATURES, ONE_PROB )
labels = mock_labels( NSAMPLES, NLABELS )
smoothed_data = laplace_smoothing( labels, binary_data, NLABELS )

bernoulli = tfd.Independent( tfd.Bernoulli( probs = smoothed_data ), reinterpreted_batch_ndims = 1 )

test_random_data = mock_binary_data( 1, NFEATURES, ONE_PROB )[ 0 ]
bernoulli.prob( test_random_data )

When I execute this, I get:

<tf.Tensor: shape=(10,), dtype=float32, numpy=array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], dtype=float32)>

that is, all the probabilities are zero. Some step here is incorrect; can you please help me find it?
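
One thing worth checking (an observation, not necessarily the only issue): prob multiplies NFEATURES = 10000 per-word probabilities, each well below 1, so the product underflows float32 to exactly 0. Working in log space keeps the values finite; a minimal sketch continuing from the code above:

# log-probabilities stay finite where the raw product underflows to zero
log_probs = bernoulli.log_prob(test_random_data)
print(log_probs)  # one finite negative value per class; compare or argmax these instead of prob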



from Why does Tensorflow Bernoulli distribution always return 0?

Hidden import Tensorflow package not found when using Pyinstaller

I am trying to convert my object detector Python project into an executable file, but I always get these warnings and my executable file will not run.

64422 WARNING: Hidden import "tensorflow._api.v2.compat.v1.estimator" not found!
64425 WARNING: Hidden import "tensorflow._api.v2.compat.v2.compat.v2.keras.metrics" not found!
64843 WARNING: Hidden import "tensorflow._api.v2.compat.v1.compat.v1.keras.applications.resnet50" not found!
64844 WARNING: Hidden import "tensorflow._api.v2.compat.v1.compat.v1.keras.applications.resnet" not found!
64845 WARNING: Hidden import "tensorflow._api.v2.compat.v2.compat.v2.keras.backend" not found!
64857 WARNING: Hidden import "tensorflow._api.v2.compat.v1.compat.v1.estimator.tpu" not found!
64859 WARNING: Hidden import "tensorflow._api.v2.compat.v2.compat.v1.keras.applications.mobilenet" not found!
64892 WARNING: Hidden import "tensorflow._api.v2.compat.v1.compat.v1.keras.applications.vgg19" not found!
64894 WARNING: Hidden import "tensorflow._api.v2.compat.v2.compat.v2.keras.preprocessing.text" not found!
64896 WARNING: Hidden import "tensorflow._api.v2.compat.v1.compat.v1.estimator.tpu.experimental" not found!
64899 WARNING: Hidden import "tensorflow._api.v2.compat.v1.compat.v1.keras.applications.resnet_v2" not found!
64956 WARNING: Hidden import "tensorflow._api.v2.compat.v2.compat.v1.keras.wrappers.scikit_learn" not found!
64957 WARNING: Hidden import "tensorflow._api.v2.compat.v2.compat.v2.keras.applications.resnet50" not found!
64958 WARNING: Hidden import "tensorflow._api.v2.compat.v1.compat.v1.keras.wrappers.scikit_learn" not found!
65073 WARNING: Hidden import "tensorflow._api.v2.compat.v1.compat.v2.keras.applications.imagenet_utils" not found!
65073 WARNING: Hidden import "tensorflow._api.v2.compat.v1.compat.v2.keras.datasets.cifar100" not found!
65238 WARNING: Hidden import "tensorflow._api.v2.compat.v1.compat.v1.keras.optimizers" not found!

My project structure is

- project folder
  - venv
  - main.py
  - detect.py

Inside detect.py I have the following imports:

import tensorflow as tf
from tensorflow.python.saved_model import tag_constants
from tensorflow.compat.v1 import ConfigProto
from tensorflow.compat.v1 import InteractiveSession

The tensorflow module can be found in site-packages inside the venv folder.

One solution I have tried is adding the --hidden-import tensorflow flag, as suggested in this question:

pyinstaller --hidden-import tensorflow --onefile main.py

I have also tried this approach, creating a hooks directory with a hook-tensorflow.py file:

- project folder
   - venv
   - hooks
      - hook-tensorflow.py
   - main.py
   - detect.py

hook-tensorflow.py

from PyInstaller.utils.hooks import collect_all


def hook(hook_api):
    packages = [
        'tensorflow'
    ]
    for package in packages:
        datas, binaries, hiddenimports = collect_all(package)
        hook_api.add_datas(datas)
        hook_api.add_binaries(binaries)
        hook_api.add_imports(*hiddenimports)

And then issuing this terminal command

pyinstaller --additional-hooks-dir=hooks --onefile main.py

But the same warnings still persist and my executable file will not run.
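
For reference, the hook shape that PyInstaller's documentation describes for collect_all (a sketch on my side, not something I have confirmed fixes the warnings) assigns the collected names at module level rather than inside a hook() function:

# hooks/hook-tensorflow.py -- sketch only
from PyInstaller.utils.hooks import collect_all

# collect_all('tensorflow') returns (datas, binaries, hiddenimports);
# PyInstaller picks these module-level names up automatically.
datas, binaries, hiddenimports = collect_all('tensorflow')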



from Hidden import Tensorflow package not found when using Pyinstaller

HTML/JS form not importing/exporting txt correctly

Problem

Summary

HTML form designed for offline use is not exporting/importing data correctly. Is there a noob-friendly solution?

Details

This form was designed so that users could open the HTML form, fill it out, and export the data as a pipe-delimited .txt file.

It can import/export various fields like Name, Gender and City. However, it cannot correctly import/export the Snack Preferences and Dinner Preferences. In our dataset, this means that member 'Ravenous Kitty' will be served Soup instead of Bird, as well as cheese (unwanted).

The initial form is empty:

Form initial

You can fill the form out and click the button 'save data to file':

Form filled out

But when you use 'Choose File' button to import that same data, the entries become mixed up:

Form info loaded

Code

Code from EncodeDna has been repurposed to fit the problem. It consists of three sections:

  1. HTML form
  2. Data export section
  3. Data import section
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Offline Form 0.1</title>
</head>

<body>

    <div>
        <input type="file" name="inputfile" id="inputfile">
    </div>

<fieldset>
<legend>Elite Club Member</legend>

        <!-- Static -->
        <b>Name:</b>
        <div id="Name"></div>
        <b>DoB:</b>
        <input type="date" id="DoB"/><br>

        <!-- dropdown, prefilled -->
        <b>Gender:</b>
        <select id="Gender" name="Gender">
            <option value="Male">Male</option>
            <option value="Female">Female</option>
            <option value="Other">Other</option>
        </select>
        <br>

        <!-- prefilled, but editable-->
        <b>City:</b>
        <input type="text" id="City"/><br>
</fieldset>

<!-- PROBLEM -->
<fieldset>
<legend>Snack Preferences</legend>

    <input type="checkbox" id="cheeseLover" name="cheeseLover">Member loves cheese.
    <br>

    <input type="checkbox" id="milkLover" name="milkLover">Member loves milk.
    <br>
</fieldset>

<!-- PROBLEM -->
<fieldset>
<legend>Dinner Preference</legend>

    <ol>
        <li>
            <legend>Choose among the various dinner options</legend>
            <p><label> <input type="radio" id="dinnerPizza" name="dinnerOptions" value="dinnerPizza">Pizza</label></p>
            <p><label> <input type="radio" id="dinnerSalad" name="dinnerOptions" value="dinnerSalad">Salad</label></p>
            <p><label> <input type="radio" id="dinnerBird" name="dinnerOptions" value="dinnerBird">Bird</label></p>
            <p><label> <input type="radio" id="dinnerSoup" name="dinnerOptions" value="dinnerSoup">Soup</label></p>
        </li>
    </ol>
</fieldset>

<!-- Save button -->
<div>
  <input type="button" id="bt" value="Save data to file" onclick="saveFile()" />
</div>

    <script type="text/javascript">
        document.getElementById('inputfile').addEventListener('change', function() {

            var fr=new FileReader();
            fr.onload=function(){
                var output_data=fr.result;
                var output_data_lines = output_data.split('\n');

                for(var i = 0; i < output_data_lines.length; i++){

                        // Patient Info
                        if (output_data_lines[i].split('|')[0] == 'Name') {
                            document.getElementById('Name').textContent = output_data_lines[i].split('|')[1];
                        }
                        else if (output_data_lines[i].split('|')[0] == 'DoB') {
                            document.getElementById('DoB').value = output_data_lines[i].split('|')[1];
                        }
                        else if (output_data_lines[i].split('|')[0] == 'Gender') {
                            document.getElementById('Gender').value = output_data_lines[i].split('|')[1];
                        }

                        // Extra Info
                        else if (output_data_lines[i].split('|')[0] == 'City') {
                            document.getElementById('City').value = output_data_lines[i].split('|')[1];
                        }

                        // PROBLEM
                        // Snack Preference
                        else if (output_data_lines[i].split('|')[0] == 'cheeseLover') {
                            document.getElementById('cheeseLover').checked = output_data_lines[i].split('|')[1];
                        }
                        else if (output_data_lines[i].split('|')[0] == 'milkLover') {
                            document.getElementById('milkLover').checked = output_data_lines[i].split('|')[1];
                        }

                        // PROBLEM
                        // Dinner Preferences
                        else if (output_data_lines[i].split('|')[0] == 'dinnerPizza') {
                            document.getElementById('dinnerPizza').checked = output_data_lines[i].split('|')[1];
                        }
                        else if (output_data_lines[i].split('|')[0] == 'dinnerSalad') {
                            document.getElementById('dinnerSalad').checked = output_data_lines[i].split('|')[1];
                        }
                        else if (output_data_lines[i].split('|')[0] == 'dinnerBird') {
                            document.getElementById('dinnerBird').checked = output_data_lines[i].split('|')[1];
                        }
                        else if (output_data_lines[i].split('|')[0] == 'dinnerSoup') {
                            document.getElementById('dinnerSoup').checked = output_data_lines[i].split('|')[1];
                        }
                }
            }
            fr.readAsText(this.files[0]);
        })
    </script>


    <!-- export data -->
    <script>
        let saveFile = () => {

            // Get the data from each element on the form.

            // Elite Member Info
            const Name                                      = document.getElementById('Name').textContent;
            const DoB                                       = document.getElementById('DoB').value;
            const Gender                                    = document.getElementById('Gender').value;
            const City                                      = document.getElementById('City').value;

            // PROBLEM
            // Snack Preferences
            const cheeseLover                               = document.getElementById('cheeseLover').checked;
            const milkLover                                 = document.getElementById('milkLover').checked;

            // PROBLEM
            // Dinner Preferences
            const dinnerPizza                               = document.getElementById('dinnerPizza').checked;
            const dinnerSalad                               = document.getElementById('dinnerSalad').checked;
            const dinnerBird                                = document.getElementById('dinnerBird').checked;
            const dinnerSoup                                = document.getElementById('dinnerSoup').checked;

            // This variable stores all the data.
            let data =
                'Name|'                                     + Name + '\n' +
                'DoB|'                                      + DoB + '\n' +
                'Gender|'                                   + Gender + '\n' +
                'City|'                                     + City + '\n' +

                'cheeseLover|'                              + cheeseLover + '\n' +
                'milkLover|'                                + milkLover + '\n' +

                'dinnerPizza|'                              + dinnerPizza + '\n' +
                'dinnerSalad|'                              + dinnerSalad + '\n' +
                'dinnerBird|'                               + dinnerBird + '\n' +
                'dinnerSoup|'                               + dinnerSoup
                ;


            // Convert the text to BLOB.
            const textToBLOB = new Blob([data], { type: 'text/plain' });
            const sFileName = 'formData.txt';      // The file to save the data.

            let newLink = document.createElement("a");
            newLink.download = sFileName;

            if (window.webkitURL != null) {
                newLink.href = window.webkitURL.createObjectURL(textToBLOB);
            }
            else {
                newLink.href = window.URL.createObjectURL(textToBLOB);
                newLink.style.display = "none";
                document.body.appendChild(newLink);
            }
            newLink.click();
        }
    </script>

</body>
</html>
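
The direction I suspect the fix needs to take (a sketch of mine, not verified against the full form, reusing the output_data_lines loop variables from the script above): .checked expects a boolean, but split('|')[1] yields the strings 'true'/'false', and any non-empty string is truthy, so even 'false' checks the box. Comparing against the string first avoids that:

// Sketch only: convert the saved 'true'/'false' text back into a boolean
// before assigning it to .checked (same idea for the radio buttons).
var key   = output_data_lines[i].split('|')[0];
var value = (output_data_lines[i].split('|')[1] || '').trim();

if (key == 'cheeseLover') {
    document.getElementById('cheeseLover').checked = (value === 'true');
}
else if (key == 'dinnerBird') {
    document.getElementById('dinnerBird').checked = (value === 'true');
}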


from HTML/JS form not importing/exporting txt correctly

got error in Type 'string' is not assignable to type '"allName" | `allName.${number}.nestedArray`' in react hook form with typescript

I am working on a react hook form with TypeScript. My data structure is an array within an array, so I am trying to use useFieldArray:

allName: [
    {
      name: "useFieldArray1",
      nestedArray: [
        { name1: "field1", name2: "field2" },
        { name1: "field3", name2: "field4" }
      ]
    },
    {
      name: "useFieldArray2",
      nestedArray: [{ name1: "field1", name2: "field2" }]
    }
  ]

But when I try to set the name for the input like allName[${nestIndex}].nestedArray, I get the warning below:

Type 'string' is not assignable to type '"allName" | `allName.${number}.nestedArray`'

I have attached a CodeSandbox link to my code: https://codesandbox.io/s/gallant-buck-iyqoc?file=/src/nestedFieldArray.tsx:504-537 How can I fix this issue?
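
For reference, the shape that usually satisfies react-hook-form v7's typed paths (a self-contained sketch of mine, not the sandbox code; in the real app register/control would come from the parent) uses dot notation plus a const assertion so TypeScript keeps the template-literal type instead of widening it to string:

// Sketch only: the field names (allName, nestedArray, name1) match the data shape above.
import React from "react";
import { useForm, useFieldArray } from "react-hook-form";

type FormValues = {
  allName: {
    name: string;
    nestedArray: { name1: string; name2: string }[];
  }[];
};

const NestedFieldArray = ({ nestIndex }: { nestIndex: number }) => {
  const { register, control } = useForm<FormValues>();
  const { fields } = useFieldArray({
    control,
    // Dot notation plus `as const` keeps the literal type
    // '`allName.${number}.nestedArray`' instead of plain string.
    name: `allName.${nestIndex}.nestedArray` as const
  });

  return (
    <>
      {fields.map((field, k) => (
        <input
          key={field.id}
          {...register(`allName.${nestIndex}.nestedArray.${k}.name1` as const)}
        />
      ))}
    </>
  );
};

export default NestedFieldArray;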



from got error in Type 'string' is not assignable to type '"allName" | `allName.${number}.nestedArray`' in react hook form with typescript

Disable Chrome's gzip automatic decompression

Recently, I have been dealing with an issue in Chrome that made it impossible to correctly save a compressed gzip file.

The root problem is described in this post: Downloaded Gzip seems to be currupted (Chrome)

As described in the link, the file is downloaded correctly in Firefox, because the blob that the AJAX response receives is gzip encoded and is therefore correctly saved as a .gz file. But when the blob data is received in Chrome, it is automatically decompressed, yielding plain text (UTF-8 encoded) instead of the gzip encoding we are after. This corrupts the saved file, because a UTF-8 encoded blob is being saved into a file that is supposed to be gzipped.

After some research, I finally found the cause of the problem: apparently, when the content-encoding: gzip header is specified in the server response, Chrome automatically decompresses the file, on the assumption that gzip compression is only being used to save bandwidth. This problem is described more thoroughly in the following post: Chromium: prevent unpacking tar.gz

In that post the gzip compression wraps a .tar file; in my case, however, there is no file beneath the gzip compression (I write data directly to the gzip file on the server side, using Python's gzip module). Thus, when Chrome decompresses the gzip, there is only plain text. I have tried to send the response without specifying the content-encoding header explicitly, but it seems that Chrome automatically detects the encoding.

Is there any way I can disable Chrome's automatic gzip decompression?
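
For context, the kind of server-side response I have been experimenting with (a sketch with an assumed Flask route, not my actual server code) labels the payload as an opaque binary attachment instead of a transfer encoding, so the browser has no reason to decompress it:

# Sketch only: serve the gzipped bytes as application/gzip, without a
# Content-Encoding header, so Chrome treats them as an opaque download.
from flask import Flask, Response
import gzip, io

app = Flask(__name__)

@app.route('/download')
def download():
    buf = io.BytesIO()
    with gzip.GzipFile(fileobj=buf, mode='wb') as gz:
        gz.write(b'plain text payload')
    return Response(
        buf.getvalue(),
        mimetype='application/gzip',
        headers={'Content-Disposition': 'attachment; filename=data.gz'},
    )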



from Disable Chrome's gzip automatic decompression

How can i copy pouchdb 0000003.log file to ionic 5 and retrieve data?

My scenario is to use PouchDB data in Ionic. I successfully added the PouchDB package to Ionic, created a sample, and it worked fine. Now I have the file below (screenshot):

000003.log, which contains all the data. But in Ionic the data is stored in IndexedDB, so how can I use this 000003.log data and copy it into IndexedDB, or is there any other way to copy the contents?

Below is my app code:

import { Injectable } from '@angular/core';
import PouchDB from 'pouchdb';

@Injectable({
  providedIn: 'root'
})
export class DataService {

  private database: any;
    private myNotes: any;

  constructor() {
        this.database = new PouchDB('my-notes');
    }

  public addNote(theNote: string): Promise<string> {
        const promise = this.database
            .put({
                _id: ('note:' + (new Date()).getTime()),
                note: theNote
            })
            .then((result): string => (result.id));

        return (promise);
    }

  getMyNotes() {
        return new Promise(resolve => {
            let _self = this;
            this.database.allDocs({
                include_docs: true,
                attachments: true
            }).then(function (result) {
                // handle result
                _self.myNotes = result.rows;
                console.log("Results: " + JSON.stringify(_self.myNotes));
                resolve(_self.myNotes);

            }).catch(function (err) {
                console.log(err);
            });
        });
    }
}

How can I export/import the existing database in the Ionic app? Do I have to store it in the file system or in IndexedDB?
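
For reference, the direction I have been considering (a sketch of mine; the source URL is a placeholder, assuming the original 000003.log database can be served via CouchDB or pouchdb-server) is to replicate from wherever that database lives into the app's IndexedDB-backed database:

// Sketch only: replicate the existing database into the app's local database.
// 'http://localhost:5984/my-notes' is a placeholder source, not a real endpoint.
import PouchDB from 'pouchdb';

const source = new PouchDB('http://localhost:5984/my-notes');
const target = new PouchDB('my-notes');

source.replicate.to(target)
  .on('complete', () => console.log('replication complete'))
  .on('error', (err) => console.error(err));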



from How can i copy pouchdb 0000003.log file to ionic 5 and retrieve data?

Why pre/postfixes don't work and are shown as text on the webpage using Gulp4?

I'm trying to use the 'gulp-file-include' library to include partials (header, footer) in my main HTML file. I'm also trying to add i18n using 'gulp-html-i18n'. Both partials and i18n seem to be working ("file-include" throws an error when I put in a wrong file path, and i18n creates the lang directories). However, when I try to wrap them in the required pre/postfixes, they are shown as plain text on the webpage.

Here is my gulpfile.js: Codeshare

HTML:

<div>@@include('header.html')</div>

<div>$$</div>
</body>

Result:

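
Since the gulpfile itself is only linked above, here is roughly the pipeline I would expect for the include step (a sketch with assumed src/dist paths, not my actual gulpfile); the @@include markers only disappear from the page if the served HTML is the processed output of this pipe rather than the source file:

// Sketch only: 'src/*.html' and 'dist' are assumed paths.
const gulp = require('gulp');
const fileinclude = require('gulp-file-include');

function html() {
  return gulp.src('src/*.html')
    .pipe(fileinclude({ prefix: '@@', basepath: '@file' })) // resolves @@include('header.html')
    .pipe(gulp.dest('dist'));                               // serve the pages from dist, not src
}

exports.html = html;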



from Why pre/postfixes don't work and are shown as text on the webpage using Gulp4?

Where to place form related JS function in React for my lambda contact form?

I am trying to use a Lambda function to send contact form submissions to an email address. The site is built on React, and the Lambda was built following the guide they suggest: Building a serverless contact form with AWS Lambda and AWS SES

However, the issue is that the guide uses vanilla JS instead of something more suitable for React, and I can't figure out where to place the JS code or whether I need to do something else to make it work.

The Lambda function works and I can send emails using a curl command such as:

curl --header "Content-Type: application/json" \
  --request POST \
  --data '{"email":"john.doe@email.com","name":"John Doe","content":"Hey!"}' \
  https://{id}.execute-api.{region}.amazonaws.com/{stage}/email/send

But it does not work from the form directly, because the function expects a "Content-Type: application/json" header, so I need to add this snippet of JS code; I don't know where to put it, and the site is failing because of that.

const form = document.getElementById('contactForm')
const url = 'https://{id}.execute-api.{region}.amazonaws.com/{stage}/email/send'
const toast = document.getElementById('toast')
const submit = document.getElementById('submit')

function post(url, body, callback) {
  var req = new XMLHttpRequest();
  req.open("POST", url, true);
  req.setRequestHeader("Content-Type", "application/json");
  req.addEventListener("load", function () {
    if (req.status < 400) {
      callback(null, JSON.parse(req.responseText));
    } else {
      callback(new Error("Request failed: " + req.statusText));
    }
  });
  req.send(JSON.stringify(body));
}
function success () {
  toast.innerHTML = 'Thanks for sending me a message! I\'ll get in touch with you ASAP. :)'
  submit.disabled = false
  submit.blur()
  form.name.focus()
  form.name.value = ''
  form.email.value = ''
  form.content.value = ''
}
function error (err) {
  toast.innerHTML = 'There was an error with sending your message, hold up until I fix it. Thanks for waiting.'
  submit.disabled = false
  console.log(err)
}

form.addEventListener('submit', function (e) {
  e.preventDefault()
  toast.innerHTML = 'Sending'
  submit.disabled = true

  const payload = {
    name: form.name.value,
    email: form.email.value,
    content: form.content.value
  }
  post(url, payload, function (err, res) {
    if (err) { return error(err) }
    success()
  })
})
<form id="contactForm">
  <input type="text" name="name" required placeholder="Your name" />
  <input type="email" name="email" required placeholder="Your email address" />
  <input type="text" name="phone_number" required placeholder="Your phone number" />
  <textarea name="message" required placeholder="Write your message..." >
  </textarea>
  <button id="submit" type="submit">Send
    <i className="flaticon-tick"></i> 
  </button>
</form>

And the form is located inside a React functional component:

const ContactForm = () => {
    return (
        <form>
            {/* Above form is here */}
        </form>
    )
}
export default ContactForm

I am trying to embed it directly into the component, but that does not seem to work. Where should this JS code live within my React project?

I am not sure if I need to use something like a hook in React. (I am pretty new to React and don't really understand how all that works yet.)

This is the repository: Deep-Blue

and the site is located at deep-blue.io

I also asked the Gatsby community and did not get an answer. I have been looking at many different tutorials, like this one, but they all either use different technologies or fall short of explaining how to handle form submissions in ReactJS with GatsbyJS.
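
For what it's worth, the shape I would expect the component to take (a sketch of mine with the same placeholder URL; the toast element and field names are assumptions, not the real site) moves the submit logic into an onSubmit handler and React state instead of getElementById:

import React, { useState } from 'react';

// Sketch only: the endpoint is the same placeholder URL as above.
const url = 'https://{id}.execute-api.{region}.amazonaws.com/{stage}/email/send';

const ContactForm = () => {
  const [form, setForm] = useState({ name: '', email: '', content: '' });
  const [toast, setToast] = useState('');

  const handleChange = (e) =>
    setForm({ ...form, [e.target.name]: e.target.value });

  const handleSubmit = async (e) => {
    e.preventDefault();
    setToast('Sending');
    try {
      await fetch(url, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(form),
      });
      setToast("Thanks for sending me a message! I'll get in touch with you ASAP. :)");
      setForm({ name: '', email: '', content: '' });
    } catch (err) {
      setToast('There was an error with sending your message.');
      console.log(err);
    }
  };

  return (
    <form onSubmit={handleSubmit}>
      <input type="text" name="name" required placeholder="Your name" value={form.name} onChange={handleChange} />
      <input type="email" name="email" required placeholder="Your email address" value={form.email} onChange={handleChange} />
      <textarea name="content" required placeholder="Write your message..." value={form.content} onChange={handleChange} />
      <div>{toast}</div>
      <button type="submit">Send</button>
    </form>
  );
};

export default ContactForm;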



from Where to place form related JS function in React for my lambda contact form?

HTTP errors getting cached on Angular app on Android Chrome

We have an Angular app running with a service worker (mainly for notification support).

Recently we've been getting complaints from a small number of users that the web app doesn't load for them on Chrome for Android and they get a 502 HTTP error. It seems that error is being cached, as reloading does not do anything.

But the error goes away when we ask them to clear their cookies, and we aren't able to reproduce the error on our own devices. All complaints are coming from Android only - not desktop and not iOS.

Angular 12 with @angular/PWA (https://www.fantrax.com/ngsw-worker.js)

The app is available at https://fantrax.com/



from HTTP errors getting cached on Angular app on Android Chrome

Access nested elements in HTMLRewriter - Cloudflare Workers

I have to access a nested element using HTMLRewriter in a Cloudflare worker.

Example

<div data-code="ABC">
   <div class="title">Title</div>
   <div class="price">9,99</div>
</div>
<div data-code="XYZ">
   <div class="title">Title</div>
</div>

I was thinking about using multiple .on() calls, but the order is not preserved because some .price elements are missing, so I cannot correctly merge the results from a codeHandler and a priceHandler.

await new HTMLRewriter().on("[data-code]", codeHandler)
                        .on(".price", priceHandler)
                        .transform(response).arrayBuffer()

I also thought about running new HTMLRewriter() multiple times, but the readable stream is locked.

Current code

Worker

class codeHandler {
    constructor() {
        this.values = []
    }

    element(element) {
        let data = {
            code: element.getAttribute("data-code"),
            title: element.querySelector(".title").innerText, <--
            price: element.querySelector(".price").innerText, <--- HERE
        }
        this.values.push( data )
    }
}


const url = "https://www.example.com"

async function handleRequest() {

  const response = await fetch(url)

   const handler = new codeHandler()
   await new HTMLRewriter().on("[data-code]", handler).transform(response).arrayBuffer()


   console.log(handler.values)

    const json = JSON.stringify(handler.values, null, 2)


    return new Response(json, {
        headers: {
        "content-type": "application/json;charset=UTF-8"
        }
    })  

}

addEventListener("fetch", event => {
  return event.respondWith(handleRequest())
})
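
For reference, the pattern I have been sketching (my own assumption, since element handlers have no querySelector/innerText): keep per-item state in one collector and register text handlers for the nested selectors, appending text chunks onto the most recently opened item:

// Sketch only (inside handleRequest, reusing `response` from above):
// nested text is collected by sharing state between handlers.
class ItemCollector {
  constructor() {
    this.values = []
  }
  element(element) {
    // Each "[data-code]" start tag opens a new item.
    this.values.push({ code: element.getAttribute("data-code"), title: "", price: "" })
  }
  appendTo(field, chunk) {
    // Text arrives in chunks, so append onto the current (last) item.
    this.values[this.values.length - 1][field] += chunk.text
  }
}

const collector = new ItemCollector()
await new HTMLRewriter()
  .on("[data-code]", collector)
  .on("[data-code] .title", { text: chunk => collector.appendTo("title", chunk) })
  .on("[data-code] .price", { text: chunk => collector.appendTo("price", chunk) })
  .transform(response).arrayBuffer()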


from Access nested elements in HTMLRewriter - Cloudflare Workers

How can I set the file descriptors for a new Process in Haxe to use it with a socket?

I am translating some code to Haxe from Python so that I can target more platforms. But I'm having trouble with the following snippet.

import socket
from subprocess import Popen

host='127.0.0.1'
port=8080
file='handle.sh'

handler = socket.socket()
handler.bind((host, port))
handler.listen(5)


conn, address = handler.accept() # Wait for something to connect to the socket
proc = Popen(['bash', file], stdout=conn.makefile('wb'), stdin=conn.makefile('rb'))
proc.wait()
conn.shutdown(socket.SHUT_RDWR)
conn.close()

In Python, I can set stdin and stdout to the relevant file descriptors of the socket. But by the time I call shutdown, all the data to be sent is in the right buffer and nothing blocks me.

But as far as I can tell I can't do this in Haxe, because input and output on the socket, and stdin and stdout on the process, are all read-only.

I seem to get a deadlock with whatever I try. Currently I'm trying with a thread, but it still gets stuck reading from the socket.

#!/usr/bin/haxe --interp
import sys.net.Host;
import sys.net.Socket;
import sys.io.Process;
import sys.thread.Thread;

class HaxeServer {
    static function main() {
        var socket = new Socket();

        var fname = 'handle.sh';
        var host = '127.0.0.1';
        var port = 8080;

        socket.bind(new Host(host), port);
        socket.listen(5);
        while (true) {
            var conn = socket.accept();
            var proc = new Process('bash', [fname]);
            exchange(conn, proc);
            conn.output.write(proc.stdout.readAll());
            proc.close();
            conn.shutdown(true, true);
            conn.close();
        }
    }
    static function exchange(conn:Socket, proc:Process):Void {
        #if (target.threaded)
        Thread.create(() -> {
            while (true) {
                var drip = conn.input.readByte();
                proc.stdin.writeByte(drip);
            }
        });
        #end
    }
}


from How can I set the file descriptors for a new Process in Haxe to use it with a socket?

Monday, 28 June 2021

How to remove the gap between fragments in ViewPager when applying a Cube Page Transformer

I want to achieve a cube animation effect when swiping between ViewPager fragments, like this:

(screenshot)

I'm using this code to achieve that:

class CubeOutTransformer : ViewPager2.PageTransformer {
    override fun transformPage(page: View, position: Float) {
        val deltaY = 0.5F

        page.pivotX = if (position < 0F) page.width.toFloat() else 0F
        page.pivotY = page.height * deltaY
        page.rotationY = 45F * position
    }
}

But my current effect is like this:

(screenshot)

As you can see, there's a huge gap between fragments when swiping, and it's not like the cube animation I'm looking for. How can I remove this gap? Thanks.



from How to remove gap between fragmetns in ViewPager when applying a Cube Page Transformer

React multiple refs getting div height not accurate

I have a React setup that tries to keep re-renders to a minimum, and I'm trying to imperatively animate some big divs on scroll. For this I need to know the height of the child elements, but I'm struggling to get an accurate measurement using getBoundingClientRect().height.

I've logged exactly where the inaccuracies happen and where the values are logged correctly in the full repro here: https://codesandbox.io/s/stupefied-wilson-qzb1i?file=/src/components/CaseWrapper/index.js:756-836 (reload a few times to see the discrepancy in the console).

(screenshot)

Some notes

  • I know the wrong child height stems from elements such as images loading at different speeds, which causes the height to be measured before the children have finished rendering
  • I tried using React.useCallback to get the mutated values but couldn't get it to work without causing re-renders
  • I tried getting the values using a ResizeObserver but the initial numbers were still wrong.

The setup is fairly straightforward:

It starts with some data:

let cases = new Map([
  [
    "home",
    {
      name: "Home",
      slug: "home",
      component: Home,
      bg: "#eee",
      color: "#111"
    }
  ],
  [...],
  [...]

The data is used to create refs

  const myRefs = React.useRef([]);
  myRefs.current = [...cases].map(
    (i) => myRefs.current[i] ?? React.createRef()
  );
  <CaseWrapper>
    {[...cases].map((v, i) => {
      return (
        <Case
          key={v[1].slug}
          data={v[1]}
          ref={myRefs.current[i]}
          index={i}
        ></Case>
      );
    })}
  </CaseWrapper>

Inside Case.js I'm using a forwardRef to be able to use the values in the parent

export const Case = React.forwardRef(({ index, data }, ref) => {

In my parent component CaseWrapper.js I take the ref and call getBoundingClientRect() on it, in an effect that only runs once, and it logs the wrong number.

const CaseWrapper = ({ children }) => {

  React.useEffect(() => {
    const childSum = children.reduce((acc, child) => {
      console.log(child.key, child.ref.current.getBoundingClientRect().height);
        ...

At the end of the day the goal is to have a performant way to measure the height of the children inside <CaseWrapper/>, ideally updating on browser resize too. So far I've found pretty much nothing that solves this problem somewhat elegantly without relying on third-party hooks.

Any help is much appreciated. Thanks!
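
For reference, the direction I have been sketching (my own assumption, not working code from the repro): observe each child with a single ResizeObserver inside a layout effect, so late-loading images trigger a re-measurement without causing re-renders:

import React from 'react';

// Sketch only: measure the children after mount and re-measure whenever they
// resize (e.g. when images finish loading). Heights live in a ref, not state,
// so re-measuring does not trigger re-renders.
const CaseWrapper = ({ children }) => {
  const heights = React.useRef([]);

  React.useLayoutEffect(() => {
    const nodes = children.map((child) => child.ref.current).filter(Boolean);

    const measure = () => {
      heights.current = nodes.map((node) => node.getBoundingClientRect().height);
      console.log(heights.current);
    };

    const observer = new ResizeObserver(measure);
    nodes.forEach((node) => observer.observe(node));
    measure();

    return () => observer.disconnect();
  }, [children]);

  return <div>{children}</div>;
};

export default CaseWrapper;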



from React multiple refs getting div height not accurate