Wednesday, 31 May 2023

Extract wildcard string (* asterisk only) from a set of strings/texts?

Is there an algorithm or code/library (Node.js preferably) to extract a wildcard string from a set of strings or texts?

For example, the following is the set of strings (forgive my English):

apple pie is not too so good
apple pie is not so good
apple pie is too good

I should be able to extract a wildcard string using only the asterisk *, like this:

apple pie is * good

There are other special characters in wildcard syntax, but I want to use only the * asterisk.

Please let me know if more information is required.
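In case a sketch of the idea helps (my own illustration, shown in Python for brevity; the same token-alignment approach ports directly to Node.js): keep the tokens that every string shares at each aligned position from the left and from the right, and collapse the differing middle into a single *.

def extract_wildcard(strings):
    # split every string into word tokens
    token_lists = [s.split() for s in strings]

    # longest run of tokens shared by all strings from the left
    prefix = []
    for column in zip(*token_lists):
        if all(tok == column[0] for tok in column):
            prefix.append(column[0])
        else:
            break

    # longest run of tokens shared by all strings from the right
    suffix = []
    for column in zip(*(reversed(toks) for toks in token_lists)):
        if all(tok == column[0] for tok in column):
            suffix.append(column[0])
        else:
            break
    suffix.reverse()

    # keep prefix and suffix from overlapping in the shortest string
    shortest = min(len(toks) for toks in token_lists)
    while len(prefix) + len(suffix) > shortest:
        suffix.pop(0)

    # add '*' only if some string has tokens between prefix and suffix
    longest = max(len(toks) for toks in token_lists)
    middle = ['*'] if len(prefix) + len(suffix) < longest else []
    return ' '.join(prefix + middle + suffix)

print(extract_wildcard([
    "apple pie is not too so good",
    "apple pie is not so good",
    "apple pie is too good",
]))  # -> apple pie is * good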



from Extract wildcard string (* asterisk only) from a set of strings/texts?

Tuesday, 30 May 2023

Move Camera to the User-Clicked Location Using Tween for Three.js Panorama

My goal is to move the camera to the user-clicked location so that the clicked point ends up centered in my equirectangular panorama. I am using tween.js to achieve this, and the camera does pan, but I cannot get the new coordinates centered correctly.

I tried calculating the difference between the screen center and the clicked location, but it is not behaving correctly:

   var handlePanoramaClick = function (event) {
      var center={x: window.innerWidth / 2, y: window.innerHeight / 2}  
      var click={x: event.clientX, y: event.clientY}

      var diffx = click.x - center.x;
      var diffy = click.y - center.y;

      var x = coords.latLon.current.x + diffy;
      var y = coords.latLon.current.y + diffx;
      
      var tween = new TWEEN.Tween(coords.latLon.current)
      .to({x:x,y:y}, 1000)
      .start();
   };

   var update = function update() {
     var distX = coords.latLon.current.x - coords.latLon.delta.x;
     var distY = coords.latLon.current.y - coords.latLon.delta.y;

     coords.latLon.delta.x += distX / 7;
     coords.latLon.delta.y += distY / 7;

     coords.phi = THREE.Math.degToRad(90 - coords.latLon.delta.x);
     coords.theta = THREE.Math.degToRad(coords.latLon.delta.y);
 
     coords.target.x = 500 * Math.sin(coords.phi) * Math.cos(coords.theta);
     coords.target.y = 500 * Math.cos(coords.phi);
     coords.target.z = 500 * Math.sin(coords.phi) * Math.sin(coords.theta);

     Cam.camera.lookAt(coords.target);
     renderer.render(Scene, Camera);
   };
 
   var animate = function animate() {
     update();
     requestAnimationFrame(animate);
     TWEEN.update();
   };


from Move Camera to the User-Clicked Location Using Tween for Three.js Panorama

Plotly colourbar legend not showing max or min value

I have a Plotly chart with a colourbar legend.

I'd like this to go from 0 to 100 with the ticks set every 10.

No matter what I try, the chart always starts the ticks at 10 and ends at 90, removing my top and bottom tick.

How can I ensure that the top and bottom tick are shown?

MWE:

import numpy as np
import plotly.graph_objects as go


x = np.linspace(0, 1, 50)
y = np.linspace(0, 1, 50)
xx, yy = np.meshgrid(x, y)

z = 100*xx*yy

figure = go.Figure()
figure.add_trace(
        go.Contour(
            z=z.flatten(),
            x=xx.flatten(),
            y=yy.flatten(),
            zmin=0,
            zmax=100,
            colorbar_tickvals=np.arange(0, 101, 10),
            colorbar_tickmode='array',
        )
)
figure.update_layout(
    template="simple_white",
)

figure.show()  

[screenshot: contour plot from the MWE, with colorbar ticks running only from 10 to 90]
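One thing that may be worth trying (an untested sketch, on the assumption that the missing end ticks come from go.Contour's default contour start/end, which are computed from the data and need not reach exactly 0 and 100): pin the contour levels explicitly so the colorbar really spans 0 to 100.

import numpy as np
import plotly.graph_objects as go

x = np.linspace(0, 1, 50)
y = np.linspace(0, 1, 50)
xx, yy = np.meshgrid(x, y)
z = 100 * xx * yy

figure = go.Figure()
figure.add_trace(
    go.Contour(
        z=z.flatten(),
        x=xx.flatten(),
        y=yy.flatten(),
        zmin=0,
        zmax=100,
        # pin the contour levels so the color scale covers 0..100 exactly
        contours=dict(start=0, end=100, size=10),
        colorbar=dict(tickmode='array', tickvals=np.arange(0, 101, 10)),
    )
)
figure.update_layout(template="simple_white")
figure.show()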



from Plotly colourbar legend not showing max or min value

How to make a scrollable bucket_list using shiny and sortable

Assuming following app from here.

How to

  1. fix the height of the container (aka "bucket-mother", aka rank_list_1) to a specific size (SOLVED using style, i.e. max-height: 700px)
  2. and more importantly, how to make the list items scrollable?

Thus, a scrollbar should appear on the right side of the container if the list of items is too long. There is this AutoScroll plugin, but I have not figured out how to use it correctly: https://github.com/SortableJS/Sortable/tree/master/plugins/AutoScroll#options

Demo: https://jsbin.com/dosilir/edit?js,output

library(shiny)
library(sortable)


ui <- fluidPage(
  tags$head(
    tags$style(HTML(".bucket-list-container {min-height: 350px;}"))
  ),
  fluidRow(
    column(
      width = 12,
      #choose list of variable names to send to bucket list
      radioButtons(inputId="variableList",
                   label="Choose your variable list",
                   choices = c("names(mtcars)"="names(mtcars)","state.name"="state.name")),
      #input text to subset variable names
      textInput(
        inputId = "subsetChooseListText",
        label = "Input text to subset list of states to choose from",
        value = "c"
      ),
      div(
        # class value is current default class value for container
        class = "bucket-list-container default-sortable",
        "Drag the items in any desired bucket",
        div(
          # class value is current default class value for list
          class = "default-sortable bucket-list bucket-list-horizontal",
          # need to make sure the outer div size is respected
          # use the current default flex value
          uiOutput("selection_list", style="flex:1 0 200px;"),
          rank_list(
            text = "to here",
            labels = list(),
            input_id = "rank_list_2",
            options = sortable_options(group = "mygroup")
          ),
          rank_list(
            text = "and also here",
            labels = list(),
            input_id = "rank_list_3",
            options = sortable_options(group = "mygroup")
          )
        )
      )
    )
  ),
  fluidRow(
    column(
      width = 12,
      tags$b("Result"),
      column(
        width = 12,
        
        tags$p("input$rank_list_1"),
        verbatimTextOutput("results_1"),
        
        tags$p("input$rank_list_2"),
        verbatimTextOutput("results_2"),
        
        tags$p("input$rank_list_3"),
        verbatimTextOutput("results_3")
        
      )
    )
  )
)

server <- function(input,output) {
  
  #initialize reactive values
  varList <- reactive({
    req(input$variableList)
    if (input$variableList == "state.name") {
      state.name
    } else {
      paste0(rep(names(mtcars), 20),"_", 1:220)
    }
  })
  
  subsetChooseList <- reactive({
    items <- varList()
    pattern <- input$subsetChooseListText
    if (nchar(pattern) < 1) {
      return(items)
    }
    items[
      grepl(
        x = items,
        pattern = input$subsetChooseListText,
        ignore.case = TRUE
      )
    ]
  })
  
  output$selection_list <- renderUI({
    labels <- subsetChooseList()
    
    # remove already chosen items
    labels <- labels[!(
      labels %in% input$rank_list_2 |
        labels %in% input$rank_list_3
    )]
    rank_list(
      text = "Drag from here",
      labels = labels,
      input_id = "rank_list_1",
      options = sortable_options(group = "mygroup")
    )
  })
  
  #visual output for debugging
  output$results_1 <- renderPrint(input$rank_list_1)
  output$results_2 <- renderPrint(input$rank_list_2)
  output$results_3 <- renderPrint(input$rank_list_3)
  
}


shinyApp(ui, server)


from How to make a scrollable bucket_list using shiny and sortable

D3 animation morphing polygon shape per frame - chronological order

I want to create a d3.js visualization that showcases different stages of a polygon shape, animating/morphing it as it changes shape and scale.

When the user clicks on the little rectangles, they essentially scroll through an array of "dimensions/coordinates" data, each entry showcasing a "frame" of a shape (like a film frame).

So imagine we have these shapes in an array, plus a way of morphing/animating one into the other, and/or using the rectangles to click and switch to that shape.


So the data would be an array of shapes/positions

[
  {"coordinateX": 20, "coordinateY": 30, "shape": [{"x":0.0, "y":25.0},
        {"x":8.5,"y":23.4}, {"x":13.0,"y":21.0}, {"x":19.0,"y":15.5}], "frame": 0, "date": "2023-05-23 10:00"},
  {"coordinateX": 19, "coordinateY": 29, "shape": [{"x":0.3, "y":23.0},
        {"x":8.5,"y":23.4}, {"x":33.0,"y":21.0}, {"x":19.0,"y":45.5}], "frame": 1, "date": "2023-05-24 11:00"},
  {"coordinateX": 19, "coordinateY": 30, "shape": [{"x":0.0, "y":25.0},
        {"x":5.5,"y":23.7}, {"x":11.0,"y":21.0}, {"x":12.0,"y":13.5}], "frame": 2, "date": "2023-05-25 11:00"}
]

Dimensions and coordinates - scale and adjustment of shape per frame - chronological order

So: morph the existing shape into the others.

Here are some code samples to draw a static polygon.

// example of drawing a polygon: "Proper format for drawing polygon data in D3" http://jsfiddle.net/4xXQT/

var vis = d3.select("body").append("svg")
         .attr("width", 1000)
         .attr("height", 667),

scaleX = d3.scale.linear()
        .domain([-30,30])
        .range([0,600]),

scaleY = d3.scale.linear()
        .domain([0,50])
        .range([500,0]),

poly = [{"x":0.0, "y":25.0},
        {"x":8.5,"y":23.4},
        {"x":13.0,"y":21.0},
        {"x":19.0,"y":15.5}];

vis.selectAll("polygon")
    .data([poly])
  .enter().append("polygon")
    .attr("points",function(d) { 
        return d.map(function(d) { return [scaleX(d.x),scaleY(d.y)].join(","); }).join(" ");})
    .attr("stroke","black")
    .attr("stroke-width",2);

With the data array:

http://jsfiddle.net/5k9z30vh/

I want to figure out a way of animating/morphing polygons.

How would I go about animating/morphing through the polygons? See https://www.youtube.com/watch?v=K1zHa1sAno0 and "Loop D3 animation of polygon vertex".

var vis = d3.select("body").append("svg")
         .attr("width", 1000)
         .attr("height", 667),

scaleX = d3.scale.linear()
        .domain([-30,30])
        .range([0,600]),

scaleY = d3.scale.linear()
        .domain([0,50])
        .range([500,0]),

poly = [{"x":0.0, "y":25.0},
        {"x":8.5,"y":23.4},
        {"x":13.0,"y":21.0},
        {"x":19.0,"y":15.5}];
        
        var data = [{
        "coordinateX": 20,
        "coordinateY": 30,
        "shape": [{
                "x": 0.0,
                "y": 25.0
            },
            {
                "x": 8.5,
                "y": 23.4
            }, {
                "x": 13.0,
                "y": 21.0
            }, {
                "x": 19.0,
                "y": 15.5
            }
        ],
        "frame": 0,
        "date": "2023-05-23 10:00"
    },
    {
        "coordinateX": 19,
        "coordinateY": 29,
        "shape": [{
                "x": 0.3,
                "y": 23.0
            },
            {
                "x": 8.5,
                "y": 23.4
            }, {
                "x": 33.0,
                "y": 21.0
            }, {
                "x": 19.0,
                "y": 45.5
            }
        ],
        "frame": 1,
        "date": "2023-05-24 10:00"
    },
    {
        "coordinateX": 19,
        "coordinateY": 30,
        "shape": [{
                "x": 0.0,
                "y": 25.0
            },
            {
                "x": 5.5,
                "y": 23.7
            }, {
                "x": 11.0,
                "y": 21.0
            }, {
                "x": 12.0,
                "y": 13.5
            }
        ],
        "frame": 2,
        "date": "2023-05-25 10:00"
    }
]
        

vis.selectAll("polygon")
    .data([data[0].shape])
  .enter().append("polygon")
    .attr("points",function(d) { 
        return d.map(function(d) { return [scaleX(d.x),scaleY(d.y)].join(","); }).join(" ");})
    .attr("stroke","black")
    .attr("stroke-width",2);

hand drawing a polygon https://jsfiddle.net/7pwaLfth/

https://lucidar.me/en/d3.js/part-06-basic-shapes/ https://gist.github.com/RiseupDev/b07f7ccc1c499efc24e9

/// 30th May 2023 update

I have managed to create a demo that shows the concept working: the polygon morphs into the next one. This would be useful to showcase a "wound" healing from day 1 to day x; essentially it would shrink into eventual non-existence.

I would like to fix/add these features/issues:

  • there is a bug if you click on autoplay too much, like event bubbling; I'd like a way to avoid a malfunction if the user clicks too quickly
  • I'd like to add different speed/fps options (watch one frame per second at 1 fps, or two frames per second at 2 fps) to make watching a wound heal feel more fluid, almost like a timelapse; fix the overall structure of the visualization; create a skeleton of g placeholders; fix the ability to scale the visualization cleanly
  • the polygon holder has no transformation to really align things properly; the polygons are really big and I don't know how to make them smaller, and if I reduce the canvas size the polygons become clipped; I also don't know how to clean up the attachment of the little rectangle bar controller, which was created/appended in a clunky manner

jsfiddle http://jsfiddle.net/5kqLftzy/

code

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>D3 Drawing</title>
  <script src="https://cdn.jsdelivr.net/d3js/3.5.9/d3.min.js"></script>
</head>
<body>
  <h3>Draw a polygon 2</h3>

    <div>
      <button onclick="beginAutoPlay()">Start Autoplay</button>
      <button onclick="stopAutoPlay()">Stop Autoplay</button>
    </div>  

  <script>


let polygons = [
  {
    "id": "1",
    "plot": "469,59.5625,246,256.5625,348,499.5625,527,482.5625,653,457.5625,693,406.5625,732,371.5625,743,260.5625,690,257.5625,650,172.5625,622,143.5625,589,98.5625,544,90.5625,511,55.5625"
  }, 
  {
    "id": "2",
    "plot": "462,79.5625,258,258.5625,361,470.5625,527,482.5625,653,457.5625,693,406.5625,718,363.5625,705,284.5625,684,241.5625,650,172.5625,622,143.5625,587,108.5625,544,90.5625,513,95.5625"
  },
  {
    "id": "3",
    "plot": "410,173.5625,324,259.5625,361,470.5625,527,435.5625,604,379.5625,672,374.5625,718,363.5625,692,281.5625,662,231.5625,650,172.5625,622,143.5625,582,132.5625,544,90.5625,513,95.5625"
  },
  {
    "id": "4",
    "plot": "422,186.5625,359,267.5625,374,430.5625,527,435.5625,604,379.5625,672,374.5625,691,325.5625,692,281.5625,662,231.5625,650,172.5625,622,143.5625,582,132.5625,543,111.5625,503,124.5625"
  },
  {
    "id": "5",
    "plot": "486,200.5625,359,267.5625,394,393.5625,527,435.5625,604,379.5625,617,332.5625,634,292.5625,624,252.5625,602,216.5625,606,192.5625,596,171.5625,585,148.5625,573,175.5625,525,175.5625"
  },
  {
    "id": "6",
    "plot": "486,200.5625,440,264.5625,419,370.5625,527,435.5625,604,379.5625,580,359.5625,589,330.5625,607,306.5625,610,277.5625,604,252.5625,588,246.5625,577,238.5625,549,220.5625,514,219.5625"
  },
  {
    "id": "7",
    "plot": "485,272.5625,469,294.5625,456,359.5625,491,396.5625,604,379.5625,580,359.5625,589,330.5625,607,306.5625,610,277.5625,604,252.5625,588,246.5625,571,265.5625,548,270.5625,521,277.5625"
  },
  {
    "id": "8",
    "plot": "499,297.5625,483,310.5625,488,333.5625,492,356.5625,534,366.5625,566,349.5625,589,330.5625,607,306.5625,610,277.5625,593,273.5625,578,275.5625,571,265.5625,548,270.5625,521,277.5625"
  }
]


let width = 1000;
let height = 600;


let polyMaxY = 300



var vis = d3.select("body").append("svg")
  .attr("width", width)
  .attr("height", height),

  scaleX = d3.scale.linear()
  .domain([-30, 30])
  .range([0, 600]),

  scaleY = d3.scale.linear()
  .domain([0, 50])
  .range([polyMaxY, 0])




let polyStage = vis.append("g")
  .attr("class", "polyStage")
  .attr("transform", "0 0")

let thePolygon = polyStage.append("polygon")
  .attr("id", "mainPoly")
  .attr("stroke", "black")
  .attr("stroke-width", 2)
  .attr("points", polygons[0])



let paddingBottom = 50



let chooserX = 100
let chooserY = polyMaxY + paddingBottom

let buttonWidth = 7
let buttonHeight = 50
let buttonRightPadding = 3

let selectedColor = "aqua"
let rectColor = "black"

let currentPoly = 0

let animationDuration = 2000
let goRightDelay = 1000
let autoplay = true;



let framePicker = vis.append("g")
  .attr("class", "framePicker")


  let buttonGroups = framePicker.selectAll("g")
    .data(polygons)
    .enter().append("g")
    .attr("transform", buttonGroupTranslate)

    buttonGroups.append("rect")
      .attr("id", (d, i) => "chooserRect" + i)
      .attr("width", buttonWidth)
      .attr("height", buttonHeight)
      .style("fill", rectColor)

    buttonGroups.on("click", navigateTo)

    // Listen to left and right arrow
    d3.select("body").on("keydown", function() {
      let keyCode = d3.event.keyCode
      d3.event.stopPropagation()
      if (keyCode == 37) { //  37 is left arrow
        goLeft()
      }
      if (keyCode == 39) {
        goRight()
      }
    })





    function goLeft() {
      let nextIndex = currentPoly - 1
      if (nextIndex < 0) {
        nextIndex = polygons.length - 1
      }
      navigateTo("", nextIndex)
    }

    function goRight() {
      let nextIndex = currentPoly + 1
      if (nextIndex >= polygons.length) {
        nextIndex = 0
      }
      navigateTo("", nextIndex)
    }


    function buttonGroupTranslate(d, i) {
      let groupX = chooserX + (i * (buttonWidth + buttonRightPadding))
      return "translate(" + groupX + "," + chooserY + ")"
    }

    function navigateTo(d, i) {
      buttonGroups.selectAll("rect").style("fill", rectColor) // reset all colors
      d3.select("#chooserRect" + i).style("fill", selectedColor) // set color for the current one
      d3.select("#mainPoly").transition()
        .duration(animationDuration)
        .attr("points", polygons[i]["plot"])
      currentPoly = i
    }

    let timerCallBack = function() {
      return function() {
        if (autoplay) {
          goRight()
          d3.timer(timerCallBack(), animationDuration + goRightDelay) // Set up a new timer
        }
        return true; // Cancel the current timer
      }
    }

    function beginAutoPlay() {
      autoplay = true
      d3.timer(timerCallBack(), animationDuration + goRightDelay)
    }

    function stopAutoPlay() {
      autoplay = false
    }



    navigateTo("", currentPoly)

  </script>
</body>
</html>


from D3 animation morphing polygon shape per frame - chronological order

Streaming of data not working in React client

I am currently working on a React application that expects a response stream. To simplify the scenario, let's say I have an endpoint that returns a number every second. I am using Flask for this purpose:

def stream():
    def generate():
        import time
        for i in range(10):
            yield str(i)  # yield the number itself; a bare yield sends nothing
            time.sleep(1)

    return generate(), {"Content-Type": "text/plain"}
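For reference, a minimal self-contained version of such an endpoint (a sketch assuming plain chunked text is acceptable); note that proxies or compression middleware sitting between Flask and the browser can buffer the chunks and produce exactly the all-at-once behaviour described below:

import time

from flask import Flask, Response, stream_with_context

app = Flask(__name__)

@app.route("/stream")
def stream():
    def generate():
        for i in range(10):
            # each yielded chunk is sent to the client as it is produced
            yield f"{i}\n"
            time.sleep(1)

    # stream_with_context keeps the request context alive while generating
    return Response(stream_with_context(generate()), mimetype="text/plain")

if __name__ == "__main__":
    app.run(threaded=True)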

In my React component, a button click will trigger the following function:

const readStream = () => {
  const xhr = new XMLHttpRequest();
  xhr.seenBytes = 0;
  xhr.open('GET', '/stream', true);
  xhr.onreadystatechange = function(){
    if (this.readyState === 3){
      const data = xhr.response.substr(xhr.seenBytes)
      console.log(data)
      setNumber(data)  // set state value that will be displayed elsewhere
      xhr.seenBytes = xhr.responseText.length;
    }
  }
  xhr.send();
}

For some reason, the React code fails to get the chunks one at a time; it just dumps the data all at once. I've verified that the same ajax request works in vanilla JS. I've tried different approaches as well, including fetch and axios, but none have worked so far.

Though I am using GET here, my actual use case requires POST and I have to get my data from that response.

Is there anything I am missing?



from Streaming of data not working in React client

How to send logs from a python application to Datadog using ddtrace?

Let's say that I have a Python routine that runs periodically using cron. Now, let's say that I want to send logs from it to Datadog. I thought that the simplest way to do it would be via Datadog's agent, e.g. using ddtrace...

import ddtrace

ddtrace.patch_all()

import logging

logger = logging.getLogger(__name__)
logger.warning("Dummy log")

...but this is not working. I've tried with both DD_LOGS_INJECTION=true and DD_LOGS_ENABLED=true, but looking at the docs it seems that I have to configure something so the Agent will tail the log files. However, looking at type: file, I'd guess that I could send logs without having to worry about creating those configuration files.

What would you say is the simplest way to send logs to Datadog, and how would I do that from a Python application?
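If the Agent's file-tailing configuration feels like overkill for a cron job, one alternative is to skip the Agent entirely and ship log lines straight to Datadog's HTTP logs intake (a sketch, assuming the documented v2 intake endpoint and DD-API-KEY header; the service name is a placeholder):

import logging

import requests

DD_API_KEY = "..."  # placeholder; read it from the environment in practice
DD_LOGS_URL = "https://http-intake.logs.datadoghq.com/api/v2/logs"

class DatadogHandler(logging.Handler):
    """Minimal handler that POSTs each record to the Datadog logs intake."""

    def emit(self, record: logging.LogRecord) -> None:
        payload = [{
            "message": self.format(record),
            "status": record.levelname.lower(),
            "service": "my-cron-routine",  # hypothetical service name
            "ddsource": "python",
        }]
        requests.post(DD_LOGS_URL, json=payload, headers={"DD-API-KEY": DD_API_KEY})

logger = logging.getLogger(__name__)
logger.addHandler(DatadogHandler())
logger.warning("Dummy log")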



from How to send logs from a python application to Datadog using ddtrace?

Monday, 29 May 2023

Implementing a waiting mechanism in an application similar to Omegle using LSH algorithm and Minhash

I'm developing an application similar to Omegle, where users are matched with strangers based on their common interests. To achieve this, I'm combining the LSH (Locality Sensitive Hashing) algorithm with the Minhash technique. However, I'm facing difficulties in implementing a waiting mechanism for users who don't immediately find a matching pair when they call the API.

Currently, I'm using the sleep function to introduce a waiting period before returning the status "Failed". However, it seems that the sleep function is blocking other API calls and causing delays for other users. I'm curious to know how websites like Omegle handle this scenario and what would be the correct procedure to implement an efficient waiting mechanism.

Here's the code snippet:

from fastapi import FastAPI, Body
from typing import Annotated
from pydantic import BaseModel
from sonyflake import SonyFlake
import redis
import time
from datasketch import MinHash, MinHashLSH

app = FastAPI()
sf = SonyFlake()
r = redis.Redis(host='localhost', port=6379, decode_responses=True)
lsh = MinHashLSH(num_perm=128, threshold=0.5, storage_config={
    'type': 'redis',
    'redis': {'host': '127.0.0.1', 'port': 6379}
}, prepickle=True)


class Partner(BaseModel):
    client_id: int
    partner_id: str
    status: str = 'Failed'


@app.post("/start", response_model=Partner)
async def start(interests: Annotated[list[str] | None, Body()] = None) -> Partner:
    client_id=sf.next_id()
    partner_id = ''

    minhash = MinHash()
    if not interests:
        return Partner(client_id = client_id, partner_id = partner_id)

    client_hash = f"user:{client_id}:interests:hash"
    minhash.update_batch([*(map(lambda item: item.encode('utf-8'), interests))])
    lsh.insert(client_hash, minhash)

    matches = lsh.query(minhash)
    matches.remove(client_hash)

    if not matches:
        time.sleep(5)

    matches = lsh.query(minhash)
    matches.remove(client_hash)

    if not matches:
        lsh.remove(client_hash)
        return Partner(client_id = client_id, partner_id = partner_id)

    lsh.remove(client_hash)
    lsh.remove(matches[0])
    return Partner(client_id = client_id, partner_id = matches[0], status="Success")

I would appreciate any insights or suggestions on how to properly implement the waiting mechanism, ensuring that it doesn't negatively impact the performance and responsiveness of the application. Is there a recommended approach or best practice to achieve this functionality while maintaining the responsiveness of the API for other users?

  • Please share any insights or best practices on implementing an efficient waiting mechanism in this scenario.
  • Any suggestions on optimizing the code or improving its responsiveness would be greatly appreciated.
  • Please provide resources or links to read more about it.

Thank you.
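One concrete detail worth flagging: time.sleep blocks the event loop, so inside an async def endpoint it stalls every other request for those 5 seconds. A minimal sketch of a non-blocking wait (polling with await asyncio.sleep, everything else as in the snippet above):

import asyncio

MAX_WAIT_SECONDS = 5
POLL_INTERVAL = 0.5

async def wait_for_match(lsh, minhash, client_hash):
    """Poll the LSH index without blocking the event loop.

    Other requests keep being served while this coroutine is suspended in
    asyncio.sleep; time.sleep would freeze the whole worker instead.
    """
    waited = 0.0
    while waited < MAX_WAIT_SECONDS:
        matches = lsh.query(minhash)
        matches.remove(client_hash)
        if matches:
            return matches
        await asyncio.sleep(POLL_INTERVAL)  # yields control to the event loop
        waited += POLL_INTERVAL
    return []

Beyond that, a production matcher would usually not poll at all; the common pattern is a shared queue plus a notification channel (e.g. Redis pub/sub), so a waiting request is woken up as soon as a partner arrives.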



from Implementing a waiting mechanism in an application similar to Omegle using LSH algorithm and Minhash

TensorFlowJS Mask RCNN - ERROR provided in model.execute(dict) must be int32, but was float32

I have trained an object detection model using transfer learning from Mask R-CNN Inception ResNet V2 1024x1024, and after converting the model to JS I get the error: ERROR provided in model.execute(dict) must be int32, but was float32. Here are the steps I took to create the model.

1- Created the training.json, validation.json, testing.json annotation files along with the label_map.txt files from my images. I have also pre-processed the images to fit the 1024 * 1024 size.

2- Used the create_coco_tf_record.py provided by TensorFlow to generate tfrecord files. The only alteration I made to the create_coco_tf_record.py file was changing include_masks to True:

tf.flags.DEFINE_boolean(
    'include_masks', True,  # was False
    'Whether to include instance segmentations masks '

Then I ran the command below using conda:

python create_coco_tf_record.py ^
--logtostderr ^
--train_image_dir=C:/model/ai_container/training ^
--val_image_dir=C:/model/ai_container/validation ^
--test_image_dir=C:/model/ai_container/testing ^
--train_annotations_file=C:/model/ai_container/training/training.json ^
--val_annotations_file=C:/model/ai_container/validation/coco_validation.json ^
--testdev_annotations_file=C:/model/ai_container/testing/coco_testing.json ^
--output_dir=C:/model/ai_container/tfrecord

3- I then trained the model. Below is the modified portion of my config file, based on the base Mask R-CNN config file. The batch size and num_steps are set to 1 just so I could quickly train the model to test the results.

train_config: {
  batch_size: 1
  num_steps: 1
  optimizer {
    momentum_optimizer: {
      learning_rate: {
        cosine_decay_learning_rate {
          learning_rate_base: 0.008
          total_steps: 200000
          warmup_learning_rate: 0.0
          warmup_steps: 5000
        }
      }
      momentum_optimizer_value: 0.9
    }
    use_moving_average: false
  }
  gradient_clipping_by_norm: 10.0
  fine_tune_checkpoint_version: V2
  fine_tune_checkpoint: "C:/ObjectDetectionAPI/mask_rcnn_inception_resnet_v2_1024x1024_coco17_gpu-8/checkpoint/ckpt-0"
  fine_tune_checkpoint_type: "detection"
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
}

train_input_reader: {
  label_map_path: "C:/model/ai_container/label_map.txt"
  tf_record_input_reader {
    input_path: "C:/model/ai_container/tfrecord/coco_train.record*"
  }
  load_instance_masks: true
  mask_type: PNG_MASKS
}

eval_config: {
  metrics_set: "coco_detection_metrics"
  metrics_set: "coco_mask_metrics"
  eval_instance_masks: true
  use_moving_averages: false
  batch_size: 1
  include_metrics_per_category: false
}

eval_input_reader: {
  label_map_path: "C:/model/ai_container/label_map.txt"
  shuffle: false
  num_epochs: 1
  tf_record_input_reader {
    input_path: "C:/model/ai_container/tfrecord/coco_val.record*"
  }
  load_instance_masks: true
  mask_type: PNG_MASKS
}

Then I ran the training command:

python object_detection/model_main_tf2.py ^
--pipeline_config_path=C:/ObjectDetectionAPI/mask_rcnn_inception_resnet_v2_1024x1024_coco17_gpu-8/mask_rcnn_inception_resnet_v2_1024x1024_coco17_gpu-8.config ^
--model_dir=C:/TensoFlow/training_process_2 ^
--alsologtostderr

4- Ran the validation command (might be doing this wrong):

python object_detection/model_main_tf2.py ^
--pipeline_config_path=C:/ObjectDetectionAPI/mask_rcnn_inception_resnet_v2_1024x1024_coco17_gpu-8/mask_rcnn_inception_resnet_v2_1024x1024_coco17_gpu-8.config ^
--model_dir=C:/TensoFlow/training_process_2 ^
--checkpoint_dir=C:/TensoFlow/training_process_2 ^
--sample_1_of_n_eval_examples=1 ^
--alsologtostderr

5- Exported the model:

python object_detection/exporter_main_v2.py ^
--input_type="image_tensor" ^
--pipeline_config_path=C:/ObjectDetectionAPI/mask_rcnn_inception_resnet_v2_1024x1024_coco17_gpu-8/mask_rcnn_inception_resnet_v2_1024x1024_coco17_gpu-8.config ^
--trained_checkpoint_dir=C:/TensoFlow/training_process_2 ^
--output_directory=C:/TensoFlow/training_process_2/generatedModel

6- Converted the model to TensorFlow.js:

tensorflowjs_converter ^
--input_format=tf_saved_model ^
--output_format=tfjs_graph_model  ^
--signature_name=serving_default  ^
--saved_model_tags=serve ^
C:/TensoFlow/training_process_2/generatedModel/saved_model C:/TensoFlow/training_process_2/generatedModel/jsmodel

7- Then attempted to load the model into my Angular project. I placed the converted model bin and json files in my assets folder.

npm install @tensorflow/tfjs 

ngAfterViewInit() {
   tf.loadGraphModel('/assets/tfmodel/model1/model.json').then((model) => {
     this.model = model;
     this.model.executeAsync(tf.zeros([1, 256, 256, 3])).then((result) => {
       this.loadeModel = true;
     });
   });
}

I then get the error:

    tf.min.js:17 ERROR Error: Uncaught (in promise): Error: The dtype of dict['input_tensor'] provided in model.execute(dict) must be int32, but was float32
    Error: The dtype of dict['input_tensor'] provided in model.execute(dict) must be int32, but was float32
    at F$ (util_base.js:153:11)
    at graph_executor.js:721:9
    at Array.forEach (<anonymous>)
    at e.value (graph_executor.js:705:25)
    at e.<anonymous> (graph_executor.js:467:12)
    at h (tf.min.js:17:2100)
    at Generator.<anonymous> (tf.min.js:17:3441)
    at Generator.next (tf.min.js:17:2463)
    at u (tf.min.js:17:8324)
    at o (tf.min.js:17:8527)
    at resolvePromise (zone.js:1211:31)
    at resolvePromise (zone.js:1165:17)
    at zone.js:1278:17
    at _ZoneDelegate.invokeTask (zone.js:406:31)
    at Object.onInvokeTask (core.mjs:26343:33)
    at _ZoneDelegate.invokeTask (zone.js:405:60)
    at Zone.runTask (zone.js:178:47)
    at drainMicroTaskQueue (zone.js:585:35)

I'm using Angular. I have also tried a few online solutions with no success. If anyone could give me any information on how to possibly solve this issue, I would be grateful. Thanks.



from TensorFlowJS Mask RCNN - ERROR provided in model.execute(dict) must be int32, but was float32

How can I update LinkedIn Basic profile in Python

I am trying to update my LinkedIn profile using this Python code:

import requests
access_token = "xxx"
profile_id = "me"  # "me" refers to the currently authenticated user's profile
new_headline = "New Headline Text"
new_summary = "New Summary Text"


def update_profile():
    
    endpoint_url = f"https://api.linkedin.com/v2/me"

    headers = {
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "application/json"
    }
    payload = {
        "headline": new_headline,
        "summary": new_summary
    }
    response = requests.patch(endpoint_url, headers=headers, json=payload)
    if response.status_code == 200:
        print("Profile updated successfully.")
    else:
        print("Error updating profile.")
        print(response.text)



if __name__ == '__main__':
    update_profile()

The authorisations I have are:

[screenshot: the authorisations granted to the app]

But I get this error:

"message": "java.lang.IllegalArgumentException: No enum constant com.linkedin.restli.common.HttpMethod.PATCH",

How can I fix this error?

This is my Python environment:

[screenshot: the Python environment]
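On the error itself: it comes from LinkedIn's Rest.li framework, which rejects the literal HTTP PATCH verb. Rest.li's documented convention is to tunnel a partial update through POST with the X-RestLi-Method header and a patch document. A sketch of that request shape follows (hedged: whether /v2/me accepts profile edits at all depends on the permissions granted to your app, and the field names may need adjusting):

import requests

access_token = "xxx"

def update_profile():
    endpoint_url = "https://api.linkedin.com/v2/me"
    headers = {
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "application/json",
        # Rest.li method tunneling: send PARTIAL_UPDATE via POST
        "X-RestLi-Method": "PARTIAL_UPDATE",
    }
    payload = {
        "patch": {
            "$set": {
                "headline": "New Headline Text",
            }
        }
    }
    response = requests.post(endpoint_url, headers=headers, json=payload)
    print(response.status_code, response.text)

if __name__ == "__main__":
    update_profile()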



from How can I update LinkedIn Basic profile in Python

Expo Firebase Auth Persistence Not Working As Expected

I have a firebaseConfig.js that looks like this:

import { initializeApp } from "firebase/app";
import { initializeAuth } from "firebase/auth";
import { getReactNativePersistence } from "firebase/auth/react-native";
import { AsyncStorage } from "@react-native-async-storage/async-storage";
import { getFirestore } from "firebase/firestore";

const firebaseConfig = {...};

export const app = initializeApp(firebaseConfig);

const authState = initializeAuth(app, {
  persistence: getReactNativePersistence(AsyncStorage)
});

export const auth = authState;

export const db = getFirestore(app);

Then, when I sign in the user, it looks like this:

import { auth } from "../../firebaseConfig";
...
 signInWithEmailAndPassword(auth, email.trim(), password)
  .then(() => {
     // Handle success.
  })

Then in App.js, I have this:

import { onAuthStateChanged } from "firebase/auth";
import { auth } from "./firebaseConfig";

... 

 onAuthStateChanged(auth, user => {
  if (user) {
    ...
  } else {
    console.log("No user");
  }
});

Each time I refresh the app, I'm getting no user returned from onAuthStateChanged. It successfully detects the user after logging in or registering, but not after refreshing the app.

What am I doing wrong?



from Expo Firebase Auth Persistence Not Working As Expected

How to return streams from node js with openai

I am trying to set up a Node/React project that streams results from OpenAI. I found an example project that does this, but it is using Next.js. I am successfully making the call and the results are returning as they should; the issue, however, is how to return the stream to the client. Here is the code that works in Next.js:

import {
  GetServerSidePropsContext,
} from 'next';
import { DEFAULT_SYSTEM_PROMPT, DEFAULT_TEMPERATURE } from '@/utils/app/const';
import { OpenAIError, OpenAIStream } from '@/utils/server';
import { ChatBody, Message } from '@/types/chat';
// @ts-expect-error
import wasm from '../../node_modules/@dqbd/tiktoken/lite/tiktoken_bg.wasm?module';
import tiktokenModel from '@dqbd/tiktoken/encoders/cl100k_base.json';
import { Tiktoken, init } from '@dqbd/tiktoken/lite/init';

const handler = async (
  req: GetServerSidePropsContext['req'],
  res: GetServerSidePropsContext['res'],
): Promise<Response> => {
  try {
    const { model, messages, key, prompt, temperature } = (await (
      req as unknown as Request
    ).json()) as ChatBody;
    await init((imports) => WebAssembly.instantiate(wasm, imports));
    console.log({ model, messages, key, prompt, temperature })
    const encoding = new Tiktoken(
      tiktokenModel.bpe_ranks,
      tiktokenModel.special_tokens,
      tiktokenModel.pat_str,
    );

    let promptToSend = prompt;
    if (!promptToSend) {
      promptToSend = DEFAULT_SYSTEM_PROMPT;
    }

    let temperatureToUse = temperature;
    if (temperatureToUse == null) {
      temperatureToUse = DEFAULT_TEMPERATURE;
    }

    const prompt_tokens = encoding.encode(promptToSend);

    let tokenCount = prompt_tokens.length;
    let messagesToSend: Message[] = [];

    for (let i = messages.length - 1; i >= 0; i--) {
      const message = messages[i];
      const tokens = encoding.encode(message.content);

      if (tokenCount + tokens.length + 1000 > model.tokenLimit) {
        break;
      }
      tokenCount += tokens.length;
      messagesToSend = [message, ...messagesToSend];
    }

    encoding.free();

    const stream = await OpenAIStream(
      model,
      promptToSend,
      temperatureToUse,
      key,
      messagesToSend,
    );

    return new Response(stream);
  } catch (error) {
    console.error(error);
    if (error instanceof OpenAIError) {
      return new Response('Error', { status: 500, statusText: error.message });
    } else {
      return new Response('Error', { status: 500 });
    }
  }
};

export default handler;

OpenAIStream.ts

 const res = await fetch(url, {...});

  const encoder = new TextEncoder();
  const decoder = new TextDecoder();

  const stream = new ReadableStream({
    async start(controller) {
      const onParse = (event: ParsedEvent | ReconnectInterval) => {
        if (event.type === 'event') {
          const data = event.data;

          try {
            const json = JSON.parse(data);
            if (json.choices[0].finish_reason != null) {
              controller.close();
              return;
            }
            const text = json.choices[0].delta.content;
            const queue = encoder.encode(text);
            controller.enqueue(queue);
          } catch (e) {
            controller.error(e);
          }
        }
      };

      const parser = createParser(onParse);

      for await (const chunk of res.body as any) {
        parser.feed(decoder.decode(chunk));
      }
    },
  });

  return stream;

When trying to set this up in Node, the first issue I ran into was that ReadableStream was undefined. I solved it using a polyfill:

import { ReadableStream } from 'web-streams-polyfill/ponyfill/es2018';

When I log

const text = json.choices[0].delta.content;

It shows that the multiple responses from the API are being returned correctly.

Instead of returning the data using new Response, I am using:

import { toJSON } from 'flatted';

export const fetchChatOpenAI = async (
  req: AuthenticatedRequest,
  res: Response
) => {
  try {
    const stream = await OpenAIStream(
      model,
      promptToSend,
      temperatureToUse,
      key,
      messagesToSend
    );

    res.status(200).send(toJSON(stream));
  } catch (error) {
    if (error instanceof OpenAIError) {
      console.error(error);
      res.status(500).json({ statusText: error.message });
    } else {
      res.status(500).json({ statusText: 'ERROR' });
    }
  }
};

On the client, here is how the response is being handled:

 const controller = new AbortController();
    const response = await fetch(endpoint, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      signal: controller.signal,
      body: JSON.stringify(chatBody),
    });
    if (!response.ok) {
      console.log(response.statusText);
    } else {
      const data = response.body;

      if (data) {
        const reader = data.getReader();
        const decoder = new TextDecoder();
        let done = false;
        let text = '';
        while (!done) {
          const { value, done: doneReading } = await reader.read();
          console.log(value);
          done = doneReading;
          const chunkValue = decoder.decode(value);
          console.log(chunkValue);
          text += chunkValue;
        }
      }
    }

When running the Next.js project, here is a sample output from those logs: WORKING_DEMO

In my Node version, here is a screenshot of what those logs look like:

NOT_WORKING_APP



from How to return streams from node js with openai

Sunday, 28 May 2023

How to push items inside an array at the rate at which the screen is being getting captured

I am using the getDisplayMedia API to record the current tab of the user. Now I want to capture the user's mouse-move positions at the rate the screen is being captured (so that each frame can have one object of x and y coordinates).


const constraints = {
  audio: false,
  video: true,
  videoConstraints:{
    mandatory:
     {minFrameRate:60,maxFrameRate:90,maxWidth:1920,maxHeight:1080}} 
};

navigator.mediaDevices.getDisplayMedia(constraints)
  .then(stream => {

    const mediaStream = stream; // 30 frames per second
    const mediaRecorder = new MediaRecorder(mediaStream, {
    mimeType: 'video/webm',
    videoBitsPerSecond: 3000000
    });
   

In short: I record the screen and send the recorded video to the backend, where the frames are decoded. For each frame there should be the mouse x and y coordinates, just as it happened in real time; then I would stitch the frames back together into a video, since I want to do some editing with the recording.

I don't want to draw the cursor onto the video in the frontend JS, but rather save the mouse coordinates and the recorded video separately and send both to the backend.

I tried using requestAnimationFrame, but its call count is not equal to the number of frames in the video; I tested it, and the recorded video had around 570 frames while the array contained only 194 items.

 function tests() {
      testf.push('test');
      window.requestAnimationFrame(tests)
    }
    

Thank you so much for reading, any advice would be greatly appreciated :)



from How to push items inside an array at the rate at which the screen is being getting captured

PyAudio distorted recording when while loop too busy

I have a Python script that monitors an audio stream in real time and uses a moving average to determine when there is audio and set start and stop points based on when it is above or below a given threshold. Because the script runs 24/7, I avoid excessive memory usage by removing part of the audio stream after about 4 hours of audio.

def record(Jn):
  global current_levels
  global current_lens
  device_name = j_lookup[Jn]['device']
  device_index = get_index_by_name(device_name)
  audio = pyaudio.PyAudio()
  stream = audio.open(format=FORMAT, channels=CHANNELS, rate=RATE, input=True, input_device_index=device_index, frames_per_buffer=CHUNK)

  recorded_frames = []
  quantized_history = []
  long_window = int(LONG_MOV_AVG_SECS*RATE/CHUNK) # converting seconds to while loop counter

  avg_counter_to_activate_long_threshold = LONG_THRESH*long_window
  safety_window = 1.5*avg_counter_to_activate_long_threshold
  long_thresh_met = 0

  long_start_selection = 0
  while True:
    data = stream.read(CHUNK, exception_on_overflow=False)
    recorded_frames.append(data)
    frame_data = struct.unpack(str(CHUNK) + 'h', data)
    frame_data = np.array(frame_data)
    sum_abs_frame = np.sum(np.abs(frame_data))
    quantized_history.append(0 if sum_abs_frame < j_lookup[Jn]['NOISE_FLOOR'] else 1)
    current_levels[Jn] = sum_abs_frame

    counter = len(recorded_frames)
    current_lens[Jn] = counter
    if counter >= long_window:
      long_movavg = sum(quantized_history[counter-long_window:counter])/long_window
      if long_movavg >= LONG_THRESH and long_thresh_met != 1:
        long_start_selection = int(max(counter - safety_window, 0))
        long_thresh_met = 1
      if long_movavg < LONG_THRESH and long_thresh_met == 1:
        long_end = int(counter)
        long_thresh_met = 2
        save_to_disk(recorded_frames[long_start_selection:long_end], audio, Jn)

    if counter > MAX_LOOKBACK_PERIOD: # don't keep endless audio history to avoid excessive memory usage
      del recorded_frames[0]
      del quantized_history[0]
      long_start_selection = max(0, long_start_selection - 1) # since you deleted first element, the recording start index is now one less

What I have above works, but what I noticed is that once I hit the four hour mark (the if counter > MAX_LOOKBACK_PERIOD statement at the very end becomes true), any audio saved after that point starts to sound distorted. For example, before the four hour point, the audio looks like:

[screenshot: spectrogram of a clean recording from before the four-hour mark]

after the four hour mark, it looks like:

[screenshot: spectrogram from after the four-hour mark, with vertical spikes of distortion]

You can see the distortion appearing as these vertical spikes on the spectrogram. I assume the del statement is taking so long that the while loop can't keep up with the audio stream, and this is somehow causing the distortion, but I'm not sure. It has to be related to del somehow, because the distortion only appears once the if counter > MAX_LOOKBACK_PERIOD check becomes true.

Any idea how to address this?
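For what it's worth, del recorded_frames[0] on a plain list shifts every remaining element (O(n) per call, and with roughly four hours of frames that is a lot of copying), and once the loop can no longer keep up, PyAudio silently drops input because of exception_on_overflow=False, which would show up as exactly this kind of periodic glitching. A sketch of one common mitigation, a fixed-size ring buffer via collections.deque, where evicting the oldest element is O(1):

from collections import deque
from itertools import islice

MAX_LOOKBACK_PERIOD = 550_000  # hypothetical frame count, roughly four hours

# a deque with maxlen evicts its oldest element automatically in O(1),
# replacing the O(n) shift that `del recorded_frames[0]` costs on a list
recorded_frames = deque(maxlen=MAX_LOOKBACK_PERIOD)
quantized_history = deque(maxlen=MAX_LOOKBACK_PERIOD)

def window_sum(history, start, end):
    """sum(history[start:end]) for a deque; deques do not support slicing."""
    return sum(islice(history, start, end))

def window_frames(frames, start, end):
    """list(frames[start:end]) for a deque, again via islice."""
    return list(islice(frames, start, end))

The index bookkeeping stays the same: once the deque is full, every append implicitly drops the oldest frame, so long_start_selection still needs to be decremented each iteration.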



from PyAudio distorted recording when while loop too busy

Saturday, 27 May 2023

Meme Generator: Canvas is not defined

I'm trying to make a meme generator with Javascript.

This meme generator basically gets three inputs from the user and adds the text into the image.

This is my script.js file.

Now it works perfectly and adds the text to the image.

But I also need to make the added texts draggable so the user can drag them around with the mouse.

Now in order to do that, I changed the script to this one.

But this one does not load the image and shows the message Uncaught TypeError: canvas is undefined in the console.

So if you know what's going wrong here, please let me know...

I would really appreciate any idea or suggestion from you guys.

This is also the html part:

<div class="box">
  <div>
    <div id="canvasWrapper">
    </div>
  </div>
  
  
  
  <div>
    <h3><i class="fa fa-picture-o fa-fw" aria-hidden="true"></i>Source Image</h3>
    <div class="box">
      <div>
        <p>From URL</p>
        <input id="imgURL" class="block" type="text" placeholder="Link to image" />
        <a href="http://memeful.com/" target="_blank">Memeful.com</a>
      </div>
      <div>
        <p>From Local Disk</p>
        <input id="imgFile" type="file" accept="image/*"/>
        <label for="imgFile" class="btn"><i class="fa fa-upload fa-fw"></i></label>
      </div>
    </div>


    
    <h3><i class="fa fa-commenting-o fa-fw" aria-hidden="true"></i>Meme Text</h3>
    <div class="box">
      <div>
        <p>Top Text</p>
        <input id="textTop" type="text" class="block" placeholder="Top text" />
      </div>
      <div>
        <p>Bottom Text</p>
        <input id="textBottom" type="text" class="block" placeholder="Bottom text" />
      </div>
    </div>
    <div class="box">
      <div>
        <p>Middle Text</p>
        <input id="textMiddle" type="text" class="block" placeholder="Middle text" />
      </div>
      <div>
        <p>Middle Text Size: <span id="textSizeMiddleOut">10</span></p>
        <input id="textSizeMiddle" type="range" min="1" max="100" value="10" class="slider" />
        </div>
    </div>
    
    
    <h3><i class="fa fa-text-height fa-fw" aria-hidden="true"></i>Text Size</h3>
    <div class="box">
      <div>
        <p>Top Text: <span id="textSizeTopOut">10</span></p>
        <input id="textSizeTop" type="range" min="2" max="50" step="2" />
      </div>
      <div>
        <p>Bottom Text: <span id="textSizeBottomOut">10</span></p>
        <input id="textSizeBottom" type="range" min="2" max="50" step="2" />
      </div>
    </div>

    
    
    <div class="box">
      <div>
        <h3><i class="fa fa-eye fa-fw" aria-hidden="true"></i>Preview Size</h3>
        <input id="trueSize" type="checkbox"/>
        <label for="trueSize"><span>Show true size</span></label>
      </div>
      
      
      
      <div>
        <h3><i class="fa fa-download fa-fw" aria-hidden="true"></i>Export</h3>
        <p>If the button doesn't work, right-click the image and save it</p>
        <p>If you are on mobile, download the source image and upload it directly</p>
        <button id="export">Export!</button>
      </div>

    </div>
  </div>
</div>


from Meme Generator: Canvas is not defined

electron webpack Uncaught ReferenceError: require is not defined "querystring"

I'm trying to run an old Electron app, but I can't figure out which Node version to use, nor which parts of the config/dependencies to update.

I added Electron webPreferences to the windows, but it didn't help:

    webPreferences: {
      nodeIntegration: true,
      nodeIntegrationInWorker: true,
      contextIsolation: false,
      enableRemoteModule: true,
    },

Does someone have any idea what to check first? Should I try to fix warnings in the current state of the app, or should I update dependencies and then fix the major breakages?

yarn dev

yarn run v1.22.19
$ webpack-dev-server --hot --watch --config webpack.config.dev.js
i 「wds」: Project is running at http://localhost:8080/
i 「wds」: webpack output is served from http://localhost:8080/
i 「atl」: Using typescript@3.4.4 from typescript
i 「atl」: Using tsconfig.json from ./tsconfig.json
i 「atl」: Using typescript@3.4.4 from typescript
i 「atl」: Using tsconfig.json from ./tsconfig.json
i 「atl」: Checking started in a separate process...
i 「atl」: Time: 3514ms
i 「atl」: Checking started in a separate process...
i 「atl」: Time: 2355ms
‼ 「wdm」: Hash: 4697290ef706a2a99ad5e1b5fab076995682e95f
Version: webpack 4.30.0
Child
    Hash: 4697290ef706a2a99ad5
    Time: 17242ms
    Built at: 2023-05-24 15:55:15
        Asset      Size  Chunks             Chunk Names
    bundle.js  6.53 MiB    main  [emitted]  main
    Entrypoint main = bundle.js
    [0] multi (webpack)-dev-server/client?http://localhost:8080 (webpack)/hot/dev-server.js ./src/index.tsx 52 bytes {main} [built]
    [./node_modules/loglevel/lib/loglevel.js] 7.68 KiB {main} [built]
    [./node_modules/mousetrap/mousetrap.js] 33.1 KiB {main} [built]
    [./node_modules/react-dom/index.js] 1.33 KiB {main} [built]
    [./node_modules/react-redux/es/index.js] 416 bytes {main} [built]
    [./node_modules/react/index.js] 190 bytes {main} [built]
    [./node_modules/strip-ansi/index.js] 161 bytes {main} [built]
    [./node_modules/webpack-dev-server/client/index.js?http://localhost:8080] (webpack)-dev-server/client?http://localhost:8080 8.26 KiB {main} [built]
    [./node_modules/webpack-dev-server/client/overlay.js] (webpack)-dev-server/client/overlay.js 3.59 KiB {main} [built]
    [./node_modules/webpack-dev-server/client/socket.js] (webpack)-dev-server/client/socket.js 1.05 KiB {main} [built]
    [./node_modules/webpack/hot sync ^\.\/log$] (webpack)/hot sync nonrecursive ^\.\/log$ 170 bytes {main} [built]
    [./node_modules/webpack/hot/dev-server.js] (webpack)/hot/dev-server.js 1.61 KiB {main} [built]
    [./node_modules/webpack/hot/emitter.js] (webpack)/hot/emitter.js 75 bytes {main} [built]
    [./node_modules/webpack/hot/log-apply-result.js] (webpack)/hot/log-apply-result.js 1.27 KiB {main} [built]
    [./src/index.tsx] 7.99 KiB {main} [built]
        + 695 hidden modules

    WARNING in ./node_modules/fluent-ffmpeg/lib/options/misc.js 27:21-40
    Critical dependency: the request of a dependency is an expression
     @ ./node_modules/fluent-ffmpeg/lib/fluent-ffmpeg.js
     @ ./node_modules/fluent-ffmpeg/index.js
     @ ./src/common/Util.js
     @ ./src/views/Duplicates.tsx
     @ ./src/index.tsx
Child
    Hash: e1b5fab076995682e95f
    Time: 13857ms
    Built at: 2023-05-24 15:55:11
            Asset      Size  Chunks             Chunk Names
    background.js  4.33 MiB    main  [emitted]  main
    Entrypoint main = background.js
    [0] multi (webpack)-dev-server/client?http://localhost:8080 (webpack)/hot/dev-server.js ./background.ts 52 bytes {main} [built]
    [./background.ts] 1010 bytes {main} [built]
    [./node_modules/loglevel/lib/loglevel.js] 7.68 KiB {main} [built]
    [./node_modules/strip-ansi/index.js] 161 bytes {main} [built]
    [./node_modules/webpack-dev-server/client/index.js?http://localhost:8080] (webpack)-dev-server/client?http://localhost:8080 8.26 KiB {main} [built]
    [./node_modules/webpack-dev-server/client/overlay.js] (webpack)-dev-server/client/overlay.js 3.59 KiB {main} [built]
    [./node_modules/webpack-dev-server/client/socket.js] (webpack)-dev-server/client/socket.js 1.05 KiB {main} [built]
    [./node_modules/webpack/hot sync ^\.\/log$] (webpack)/hot sync nonrecursive ^\.\/log$ 170 bytes {main} [built]
    [./node_modules/webpack/hot/dev-server.js] (webpack)/hot/dev-server.js 1.61 KiB {main} [built]
    [./node_modules/webpack/hot/emitter.js] (webpack)/hot/emitter.js 75 bytes {main} [built]
    [./node_modules/webpack/hot/log-apply-result.js] (webpack)/hot/log-apply-result.js 1.27 KiB {main} [built]
    [./node_modules/webpack/hot/log.js] (webpack)/hot/log.js 1.11 KiB {main} [built]
    [./src/background/BackgroundWindow.ts] 10.1 KiB {main} [built]
    [./src/main/Logger.js] 1010 bytes {main} [built]
    [querystring] external "querystring" 42 bytes {main} [built]
        + 586 hidden modules

    WARNING in ./node_modules/fluent-ffmpeg/lib/options/misc.js 27:21-40
    Critical dependency: the request of a dependency is an expression
     @ ./node_modules/fluent-ffmpeg/lib/fluent-ffmpeg.js
     @ ./node_modules/fluent-ffmpeg/index.js
     @ ./src/background/ScreenshotEngine.ts
     @ ./src/background/BackgroundWindow.ts
     @ ./background.ts

    WARNING in ./node_modules/chokidar/lib/fsevents-handler.js
    Module not found: Error: Can't resolve 'fsevents' in '.\node_modules\chokidar\lib'
     @ ./node_modules/chokidar/lib/fsevents-handler.js
     @ ./node_modules/chokidar/index.js
     @ ./src/background/Watcher.ts
     @ ./src/background/BackgroundWindow.ts
     @ ./background.ts
i 「wdm」: Compiled with warnings.

Console :

Uncaught ReferenceError: require is not defined
    at eval (external_"querystring":1:18)
    at Object.querystring (bundle.js:8870:1)
    at __webpack_require__ (bundle.js:703:30)
    at fn (bundle.js:77:20)
    at Object.eval (webpack:///(:8080/webpack)-dev-server/client?:6:19)
    at eval (webpack:///(:8080/webpack)-dev-server/client?:299:30)
    at ./node_modules/webpack-dev-server/client/index.js?http://localhost:8080 (bundle.js:7798:1)
    at __webpack_require__ (bundle.js:703:30)
    at fn (bundle.js:77:20)
    at eval (webpack:///multi_(:8080/webpack)-dev-server/client?:1:1)

yarn start

yarn run v1.22.19
$ cross-env NODE_ENV=development electron .

process.env.NODE_ENV: development
Done in 72.06s.

Console:

bundle.js:1 Failed to load resource: net::ERR_CONNECTION_REFUSED
.\node_modules\electron\dist\resources\electron.asar\renderer\security-warnings.js:170 Electron Security Warning (Insecure Content-Security-Policy) This renderer process has either no Content Security
    Policy set or a policy with "unsafe-eval" enabled. This exposes users of
    this app to unnecessary security risks.
 
For more information and help, consult
https://electronjs.org/docs/tutorial/security.
 This warning will not show up
once the app is packaged.

Node

$ nvm list

    16.10.0
    14.21.3
    14.17.0
    12.4.0
  * 10.15.3 (Currently using 64-bit executable)
    10.8.0
    10.3.0
    9.11.2
    8.9.4

Edit:

I used Node 10.15.3 because that's the version ChatGPT recommended based on the dependency lists, and the readme indicates using Node > 8. But I have no clue which to use, nor which dependency to update first. Should I start with Electron, then webpack & TypeScript? Or maybe the dependencies with warnings, like fluent-ffmpeg, chokidar & fsevents?

Also, is there any centralized reference for JavaScript dependency compatibility, or any tool that would help me identify conflicts and resolve them?



from electron webpack Uncaught ReferenceError: require is not defined "querystring"

ebpf kprobe argument not matching the syscall

I'm learning eBPF and playing with it in order to understand it better while following the docs, but there's something that's not working and I don't understand why...

I have this very simple program that exits and returns 5:

int main() {
   exit(5);
   return 0;
}

The exit function in the code above calls the exit_group syscall, as we can see by using strace (image below). Yet within my Python code that uses eBPF through bcc, the output I get from bpf_trace_printk is the value 208682672, and not the value 5 that the exit_group syscall is called with, as I was expecting...

[screenshot: strace output showing the exit_group(5) call]

from bcc import BPF

def main():
    bpftext = """
    #include <uapi/linux/ptrace.h>

    void my_exit(struct pt_regs *ctx, int status){
        bpf_trace_printk("%d", status);
    }
    """

    bpf = BPF(text=bpftext)
    fname = bpf.get_syscall_fnname('exit_group')
    bpf.attach_kprobe(event=fname, fn_name='my_exit')

    while True:
        print(bpf.trace_fields())


if __name__ == '__main__':
    main()

I've looked into whatever I could find online, but I couldn't find a solution, and I've been investigating this problem for a few days now...

I truly appreciate any help available and thank you!
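One possibility worth checking (hedged, as it depends on the kernel being built with ARCH_HAS_SYSCALL_WRAPPER, which modern x86-64 kernels are): on such kernels, the function that get_syscall_fnname resolves to (e.g. __x64_sys_exit_group) receives a single struct pt_regs * argument rather than the syscall arguments directly, so declaring int status as a second C parameter reads an unrelated register, which would explain the garbage value. A sketch of reading the real first argument through the inner pt_regs:

from bcc import BPF

bpftext = """
#include <uapi/linux/ptrace.h>

void my_exit(struct pt_regs *ctx) {
    // with syscall wrappers, the wrapper's only argument is a pointer
    // to the pt_regs struct holding the actual syscall arguments
    struct pt_regs *real_regs = (struct pt_regs *)PT_REGS_PARM1(ctx);
    int status = 0;
    bpf_probe_read(&status, sizeof(status), &PT_REGS_PARM1(real_regs));
    bpf_trace_printk("%d", status);
}
"""

bpf = BPF(text=bpftext)
bpf.attach_kprobe(event=bpf.get_syscall_fnname('exit_group'), fn_name='my_exit')

while True:
    print(bpf.trace_fields())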



from ebpf kprobe argument not matching the syscall

Short circuit Array.forEach like calling break

[1,2,3].forEach(function(el) {
    if(el === 1) break;
});

How can I do this using the new forEach method in JavaScript? I've tried return;, return false; and break. break crashes and return does nothing but continue iteration.



from Short circuit Array.forEach like calling break

Problem with accessing Shadow DOM Tree with Python Selenium

I am trying to access a deeply nested Shadow DOM on the page: https://express.adobe.com/tools/remove-background

Within one of the shadow DOMs is the element I need to access (the file input element).

I am currently trying this:

sptheme = driver.find_element(By.TAG_NAME, "sp-theme")
container = sptheme.find_element(By.ID, "quick-task-container")
shadow_root = container.find_element(By.TAG_NAME, "cclqt-remove-background").shadow_root
sptheme2 = shadow_root.find_element(By.TAG_NAME, "sp-theme")

I get this error due to the 4th line above:

selenium.common.exceptions.InvalidArgumentException: Message: invalid argument: invalid locator

Element hierarchy (I believe) down to the element I wish to access:

  -tag sp-theme
    -id quick-task-container
      -tag cclqt-remove-background
         -SHADOW DOM
           -tag sp-theme
             -tag cclqt-workspace
               -tag cclqt-image-upload
                 -SHADOW DOM
                   -class cclqt-file-upload__container 
                     -this should be where the element is, with the ID: 'file-input'
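A sketch that may help (on the assumption that the error comes from Chromium's shadow-root element lookup, which only accepts CSS selectors; By.TAG_NAME and the other strategies raise exactly this invalid locator error when used on a ShadowRoot). The path below simply follows the hierarchy listed above:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://express.adobe.com/tools/remove-background")

container = driver.find_element(By.ID, "quick-task-container")
shadow_root = container.find_element(By.TAG_NAME, "cclqt-remove-background").shadow_root

# inside a shadow root, search with CSS selectors only
sptheme2 = shadow_root.find_element(By.CSS_SELECTOR, "sp-theme")
upload = sptheme2.find_element(By.CSS_SELECTOR, "cclqt-workspace cclqt-image-upload")
file_input = upload.shadow_root.find_element(By.CSS_SELECTOR, "#file-input")

If an element sits inside a nested shadow root, each hop needs its own .shadow_root before the next CSS lookup.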


from Problem with accessing Shadow DOM Tree with Python Selenium

Friday, 26 May 2023

NameError: name 'mp_image' is not defined with MediaPipe gesture recognition

I am trying to utilize MediaPipe for real-time gesture recognition over a webcam. However, I want to use the gesture_recognizer.task model for inference. Here's my code:

import cv2
import mediapipe as mp
from mediapipe.tasks import python
from mediapipe.tasks.python import vision

model_path = "gesture_recognizer.task"
base_options = python.BaseOptions(model_asset_path=model_path)
GestureRecognizer = mp.tasks.vision.GestureRecognizer
GestureRecognizerOptions = mp.tasks.vision.GestureRecognizerOptions
GestureRecognizerResult = mp.tasks.vision.GestureRecognizerResult
VisionRunningMode = mp.tasks.vision.RunningMode

def print_result(result: GestureRecognizerResult, output_image: mp.Image, timestamp_ms: int):
    print('gesture recognition result: {}'.format(result))

options = GestureRecognizerOptions(
    base_options=python.BaseOptions(model_asset_path=model_path),
    running_mode=VisionRunningMode.LIVE_STREAM,
    result_callback=print_result)
recognizer = GestureRecognizer.create_from_options(options)

mp_drawing = mp.solutions.drawing_utils
mp_hands = mp.solutions.hands
hands = mp_hands.Hands(
        static_image_mode=False,
        max_num_hands=2,
        min_detection_confidence=0.65,
        min_tracking_confidence=0.65)

cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    if not ret:
        break
        
    i = 1  # left or right hand
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    results = hands.process(frame)
    frame = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
    np_array = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    
    if results.multi_hand_landmarks:
        for hand_landmarks in results.multi_hand_landmarks:
            h, w, c = frame.shape
            mp_drawing.draw_landmarks(frame, hand_landmarks, mp_hands.HAND_CONNECTIONS)
            mp_image = mp.Image(image_format=mp.ImageFormat.SRGB, data=np_array)
            results = recognizer.recognize_async(mp_image)
    
    # show the prediction on the frame
    cv2.putText(mp_image, results, (10, 50), cv2.FONT_HERSHEY_SIMPLEX, 
                   1, (0,0,255), 2, cv2.LINE_AA)
    cv2.imshow('MediaPipe Hands', frame)

    if cv2.waitKey(1) & 0xFF == 27:
        break

cap.release()

I am getting NameError: name 'mp_image' is not defined on the line cv2.putText(mp_image, results, (10, 50), cv2.FONT_HERSHEY_SIMPLEX, 1, (0,0,255), 2, cv2.LINE_AA). By now I am really confused and not sure what I am doing, let alone what I am doing wrong. Please help!
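
For what it's worth, a sketch of the fixes that address the immediate errors (assuming the rest of the setup stays as above): mp_image only exists when a hand was detected; in LIVE_STREAM mode recognize_async returns nothing because results arrive through print_result; recognize_async also needs a monotonically increasing timestamp; and cv2.putText needs an image array plus a string, and should draw on frame:

import time

# replacement for the landmark / putText section of the loop above
if results.multi_hand_landmarks:
    for hand_landmarks in results.multi_hand_landmarks:
        mp_drawing.draw_landmarks(frame, hand_landmarks, mp_hands.HAND_CONNECTIONS)
    mp_image = mp.Image(image_format=mp.ImageFormat.SRGB, data=np_array)
    # results are delivered asynchronously to print_result, not returned here
    recognizer.recognize_async(mp_image, int(time.time() * 1000))

# draw on the camera frame; the recognised gesture itself has to be stashed
# by print_result (e.g. in a module-level variable) before it can be shown here
cv2.putText(frame, 'see print_result output', (10, 50),
            cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2, cv2.LINE_AA)
cv2.imshow('MediaPipe Hands', frame)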



from NameError: name 'mp_image' is not defined with MediaPipe gesture recognition

HTML iframe with dash output

I have 2 pretty simple dashboards and I would like to run these two dashboards with Flask, using main.py for routing.

app1.py

import dash
from dash import html, dcc

app = dash.Dash(__name__)

app.layout = html.Div(
    children=[
        html.H1('App 1'),
        dcc.Graph(
            id='graph1',
            figure={
                'data': [{'x': [1, 2, 3], 'y': [4, 1, 2], 'type': 'bar', 'name': 'App 1'}],
                'layout': {
                    'title': 'App 1 Graph'
                }
            }
        )
    ]
)

and

app2.py

import dash
from dash import html, dcc

app = dash.Dash(__name__)

app.layout = html.Div(
    children=[
        html.H1('App 2'),
        dcc.Graph(
            id='graph2',
            figure={
                'data': [{'x': [1, 2, 3], 'y': [2, 4, 1], 'type': 'bar', 'name': 'App 2'}],
                'layout': {
                    'title': 'App 2 Graph'
                }
            }
        )
    ]
)

main.py

# main_app.py
from flask import Flask, render_template
import app1
import app2

app = Flask(__name__)

@app.route('/')
def index():
    return 'Main App'

@app.route('/app1')
def render_dashboard1():
    return render_template('dashboard1.html')

@app.route('/app2')
def render_dashboard2():
    return render_template('dashboard2.html')

if __name__ == '__main__':
    app.run(debug=True)

dashboard1.html

<!-- dashboard1.html -->
<!DOCTYPE html>
<html>
<head>
    <title>Dashboard 1</title>
</head>
<body>
    <h1>Dashboard 1</h1>
    <iframe src="/app1" width="1000" height="800"></iframe>
</body>
</html>

dashboard2.html

<!-- dashboard2.html -->
<!DOCTYPE html>
<html>
<head>
    <title>Dashboard 2</title>
</head>
<body>
    <h1>Dashboard 2</h1>
    <iframe src="/app2" width="1000" height="800"></iframe>
</body>
</html>

structure

/
app1.py
app2.py
main.py
/templates
dashboard1.html
dashboard2.html

but when I run my main.py and route to /app1 I can see the frame for app1, but there is no graph. Could someone please explain how to use an iframe so that I can see the Dash output?
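
A sketch of one way to wire this up (the /dash1/ path is an assumption): importing app1/app2 never starts them, because each module creates its own standalone dash.Dash server. Instead, mount the Dash apps on the Flask server object and point the iframes at the Dash mount points rather than back at the Flask routes (as written, /app1 embeds itself recursively):

# main.py - mount both Dash apps on the one Flask server
from flask import Flask, render_template
import dash
from dash import html, dcc

server = Flask(__name__)

dash_app1 = dash.Dash(__name__, server=server, url_base_pathname='/dash1/')
dash_app1.layout = html.Div([
    html.H1('App 1'),
    dcc.Graph(id='graph1', figure={
        'data': [{'x': [1, 2, 3], 'y': [4, 1, 2], 'type': 'bar', 'name': 'App 1'}],
        'layout': {'title': 'App 1 Graph'}
    })
])
# dash_app2 is set up the same way with url_base_pathname='/dash2/'

@server.route('/app1')
def render_dashboard1():
    # dashboard1.html's iframe must point at the Dash mount point:
    # <iframe src="/dash1/" width="1000" height="800"></iframe>
    return render_template('dashboard1.html')

if __name__ == '__main__':
    server.run(debug=True)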



from HTML iframe with dash output

Comments Adapter not handling CKEditorError errors when Promise is rejected

https://ckeditor.com/docs/ckeditor5/latest/support/error-codes.html

Are these error codes integrated into the editor or should I handle them manually? If I add a comment using the CommentsAdapter and the server throws an error, will the CKEditor UI display the error if I return the promise as rejected?

It throws a CKEditorError:

bundle.js:48916 Uncaught (in promise) CKEditorError: commentsrepository-add-comment-internal-error
Read more: https://ckeditor.com/docs/ckeditor5/latest/support/error-codes.html#error-commentsrepository-add-comment-internal-error
    at bundle.js:48916:41

but I'm not really sure whether it has to be handled manually or whether the UI will respond to this event; the docs and samples don't demonstrate this feature.

Anyway, after the CKEditorError is thrown, the watchdog seems to be doing something, but it's not really clear what, because the editor's Comments feature crashes and is not restarted.

This is the React-CKEditor declaration:

<CKEditorContext context={ClassicEditor.Context}>
     <CKEditor
                    editor={ClassicEditor}
                    onReady={async editor => {
                      console.log(
                        'onReady is called after CKEditorError',
                        editor,
                      );
                    ... // no watchdogConfig
      ...
...

According to the docs, the watchdog already comes enabled:

The React integration comes with the watchdog feature already integrated into the core.

  • Should I expect the Comments feature to display the error somehow?
  • How do I know the watchdog is watching and doing its work?
  • Why does returning Promise.reject() from the adapter's addComment crash the entire comments functionality? According to the error it looks like I'm missing some handling, but it's not clear where.
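
For reference, a sketch of keeping the failure inside the adapter (the /api/comments endpoint and the showErrorBanner helper are hypothetical; the adapter shape follows the comments adapter concept): a bare Promise.reject() only surfaces as the internal CKEditorError above and does not restart the feature, so any user-facing message has to be produced before rethrowing:

const commentsRepository = editor.plugins.get( 'CommentsRepository' );

commentsRepository.adapter = {
    async addComment( data ) {
        const response = await fetch( '/api/comments', {  // hypothetical endpoint
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify( data )
        } );

        if ( !response.ok ) {
            // Surface the failure yourself before rethrowing; the comments UI
            // does not render this error for you.
            showErrorBanner( 'Saving the comment failed' );  // hypothetical helper
            throw new Error( 'comment-save-failed' );
        }

        return { createdAt: new Date() };
    }
    // ...the remaining adapter methods
};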


from Comments Adapter not handling CKEditorError errors when Promise is rejected

How to edit an already created Python Script in PowerBI?

In my Power BI dashboard, I created a Python Script that accesses an API and generates a Pandas data frame.

It works fine, but how can I edit the Python code?

I thought it would be something simple, but I can't really find it in the interface. If I send the .pbix file to someone, they receive an alert that a Python script is executing, and that alert displays the code nicely formatted.

I can find the code if I go to "Model Exhibition -> Edit query -> Advanced Editor" (I'm translating the options from another language, so they may differ slightly). It is M language code, and the Python script is displayed as one long line, as in the image below:

How python code is displayed in power bi

I believe it is possible to open a text box to edit the Python script, but I can't really find it.



from How to edit an already created Python Script in PowerBI?

Wednesday, 24 May 2023

Auto ARIMA in Python results in poor fitting prediction of trend

New to ARIMA and attempting to model a dataset in Python using auto-ARIMA. I'm using auto-ARIMA as I believe it will be better at defining the values of p, d and q; however, the results are poor and I need some guidance. Please see my reproducible attempt below.

Attempt as follows:

# DEPENDENCIES
import pandas as pd 
import numpy as np 
import matplotlib.pyplot as plt
import pmdarima as pm 
from pmdarima.model_selection import train_test_split 
from statsmodels.tsa.stattools import adfuller
from pmdarima.arima import ADFTest
from sklearn.metrics import r2_score 

# CREATE DATA
data_plot = pd.DataFrame({
    'date': ['2013-11', '2013-12',
             '2014-01', '2014-02', '2014-03', '2014-04', '2014-05', '2014-06',
             '2014-07', '2014-08', '2014-09', '2014-10', '2014-11', '2014-12',
             '2015-01', '2015-02', '2015-03', '2015-04', '2015-05', '2015-06',
             '2015-07', '2015-08', '2015-09', '2015-10', '2015-11', '2015-12',
             '2016-01', '2016-02', '2016-03', '2016-04', '2016-05', '2016-06',
             '2016-07', '2016-08', '2016-09', '2016-10', '2016-11', '2016-12',
             '2017-01', '2017-02', '2017-03', '2017-04', '2017-05', '2017-06',
             '2017-07', '2017-08', '2017-09', '2017-10', '2017-11', '2017-12',
             '2018-01', '2018-02', '2018-03', '2018-04', '2018-05', '2018-06',
             '2018-07', '2018-08', '2018-09', '2018-10', '2018-11', '2018-12',
             '2019-01', '2019-02', '2019-03', '2019-04', '2019-05', '2019-06',
             '2019-07', '2019-08', '2019-09', '2019-10', '2019-11', '2019-12',
             '2020-01', '2020-02', '2020-03', '2020-04', '2020-05', '2020-06',
             '2020-07', '2020-08', '2020-09', '2020-10', '2020-11', '2020-12',
             '2021-01', '2021-02', '2021-03', '2021-04', '2021-05', '2021-06',
             '2021-07', '2021-08', '2021-09', '2021-10', '2021-11', '2021-12',
             '2022-01', '2022-02', '2022-03', '2022-04', '2022-05', '2022-06',
             '2022-07', '2022-08', '2022-09', '2022-10', '2022-11', '2022-12',
             '2023-01', '2023-02', '2023-03', '2023-04'],
    'value': [346, 21075, 82358, 91052, 95376, 100520, 107702, 116805, 124176,
              136239, 140815, 159714, 172733, 197447, 297687, 288239, 281170,
              277214, 278936, 279071, 288874, 293893, 299309, 319841, 333347,
              371546, 488903, 468856, 460260, 452446, 448224, 441182, 438710,
              437962, 441128, 455476, 462871, 517929, 627044, 601801, 579134,
              576604, 554526, 547522, 559668, 561200, 564239, 583039, 595483,
              656733, 750469, 719269, 720623, 712774, 699002, 692017, 695036,
              709596, 720238, 717761, 719457, 763163, 825152, 786148, 765526,
              752169, 740352, 724386, 708216, 709802, 691991, 698436, 697621,
              736228, 779327, 752493, 795272, 780834, 741754, 729164, 713566,
              676471, 646674, 656769, 651333, 664199, 644717, 604296, 591136,
              571178, 556116, 523501, 522527, 520842, 495804, 504137, 483927,
              516234, 491449, 461908, 441156, 437471, 416214, 395315, 390058,
              380449, 369834, 373706, 361396, 381941, 358167, 335394, 325213,
              312705]})

# SET INDEX
data_plot['date_index'] = pd.to_datetime(data_plot['date'])
data_plot.set_index('date_index', inplace=True)

# CREATE ARIMA DATASET
arima_data = data_plot[['value']]
arima_data

# PLOT DATA
arima_data['value'].plot(figsize=(7,4))

The above steps result in a dataset that should look like this (screenshot omitted).

# Dicky Fuller test for stationarity 
adf_test = ADFTest(alpha = 0.05)
adf_test.should_diff(arima_data)

Result = 0.9867, indicating non-stationary data, which should be handled by an appropriate order of differencing later in the auto-ARIMA process.

# Assign training and test subsets - 80:20 split 

print('Dataset dimensions;', arima_data.shape)
train_data = arima_data[:-24]
test_data = arima_data[-24:]
print('Training data dimension:', train_data.shape, round((len(train_data)/len(arima_data)*100),2),'% of dataset')
print('Test data dimension:', test_data.shape, round((len(test_data)/len(arima_data)*100),2),'% of dataset')

# Plot training & test data
plt.plot(train_data)
plt.plot(test_data)

(screenshot of the training and test split omitted)

# Run auto arima (available as pm.auto_arima from the pmdarima import above)
arima_model = pm.auto_arima(train_data, start_p=0, d=1, start_q=0,
                            max_p=5, max_d=5, max_q=5,
                            start_P=0, D=1, start_Q=0, max_P=5, max_D=5,
                            max_Q=5, m=12, seasonal=True,
                            stationary=False,
                            error_action='warn', trace=True,
                            suppress_warnings=True, stepwise=True,
                            random_state=20, n_fits=50)

print(arima_model.aic())

Output suggests the best model is 'ARIMA(1,1,1)(0,1,0)[12]' with AIC 1725.35484.

#Store predicted values and view resultant df

prediction = pd.DataFrame(arima_model.predict(n_periods=len(test_data)), index=test_data.index)  # must match the 24-row test set
prediction.columns = ['predicted_value']
prediction

# Plot prediction against test and training trends 

plt.figure(figsize=(7,4))
plt.plot(train_data, label="Training")
plt.plot(test_data, label="Test")
plt.plot(prediction, label="Predicted")
plt.legend(loc='upper right')
plt.show()

(screenshot of the prediction plot omitted)

# Finding r2 model score
test_data['predicted_value'] = prediction
r2_score(test_data['value'], test_data['predicted_value'])

Result: -6.985
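
One follow-up experiment worth sketching (an assumption, not a guaranteed fix): the series looks multiplicative, and fitting on the log scale often improves this kind of poor trend tracking:

# fit on log(value) and back-transform the forecast
log_train = np.log(train_data['value'])

log_model = pm.auto_arima(log_train, seasonal=True, m=12, d=1, D=1,
                          stepwise=True, suppress_warnings=True, trace=True)

prediction_log = pd.DataFrame(np.exp(log_model.predict(n_periods=len(test_data))),
                              index=test_data.index, columns=['predicted_value'])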



from Auto ARIMA in Python results in poor fitting prediction of trend

iOS Safari overriding 100vh on input focus: Injecting empty space below the DOM

When an input field is focused on iOS Safari, roughly 150px of empty space is injected, outside of the DOM, below the HTML tag.

(screenshot omitted)

In the screenshot above, a green border has been applied to the HTML element, and the black scrollable area beneath it is only accessible on iOS Safari when an input field has been focused, while the HTML element retains its 100vh/svh height.

You can see the <HTML> tag turn blue when I hover over the element in the console... I've done this to help show that the empty space is not related to the webpage itself

The expected functionality is for the keyboard not to inject this spacing below the page, as is the case for other browsers on iOS.

(screenshot omitted)

I've spent some time experimenting with different height values (svh is preferred in my case) and overflow style settings, using JS and CSS, but none of that has an impact, because the space added to the webpage appears to be independent of the DOM and inaccessible via the console.

Specifically, I've experimented with calculating the actual viewport height when the page loads, and then using this fixed value throughout the lifetime of the page - by applying this fixed value to the body/html element to ensure that no extra space is added at the bottom when the keyboard is displayed

CSS

body {
  --vh: 100vh; /* Fallback for browsers that do not support Custom Properties */
  height: calc(var(--vh, 1vh) * 100);
  overflow: auto;
}

JS

useEffect(() => {
  const setVh = () => {
    let vh = window.innerHeight * 0.01;
    document.documentElement.style.setProperty('--vh', `${vh}px`);
  };

  setVh();
  window.addEventListener('resize', setVh);

  // The cleanup must pass the same function reference that was registered;
  // removing a freshly created anonymous function removes nothing.
  return () => {
    window.removeEventListener('resize', setVh);
  };
}, []);

But since the space injected does not appear to be a part of the DOM - updating the height of the <HTML> or <body> tags does not actually resolve the issue.

Has anyone found a solution for this?
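
Not a confirmed fix, but a workaround sketch that targets the same symptom (assumptions: the dead space tracks the visual viewport, and pinning the layout viewport back to the top after focus changes hides it; pinViewport is a name made up here):

function pinViewport() {
  // scroll the layout viewport back and clamp the page to the visible height
  window.scrollTo(0, 0);
  if (window.visualViewport) {
    document.documentElement.style.height = `${window.visualViewport.height}px`;
  }
}

if (window.visualViewport) {
  window.visualViewport.addEventListener('resize', pinViewport);
}
document.addEventListener('focusout', pinViewport);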



from iOS Safari overriding 100vh on input focus: Injecting empty space below the DOM

How to add an information display button to the interactive plot toolbar

The matplotlib plot toolbar has some support for customization. This example is provided in the official documentation:

import matplotlib.pyplot as plt
from matplotlib.backend_tools import ToolBase, ToolToggleBase


plt.rcParams['toolbar'] = 'toolmanager'


class ListTools(ToolBase):
    """List all the tools controlled by the `ToolManager`."""
    default_keymap = 'm'  # keyboard shortcut
    description = 'List Tools'

    def trigger(self, *args, **kwargs):
        print('_' * 80)
        fmt_tool = "{:12} {:45} {}".format
        print(fmt_tool('Name (id)', 'Tool description', 'Keymap'))
        print('-' * 80)
        tools = self.toolmanager.tools
        for name in sorted(tools):
            if not tools[name].description:
                continue
            keys = ', '.join(sorted(self.toolmanager.get_tool_keymap(name)))
            print(fmt_tool(name, tools[name].description, keys))
        print('_' * 80)
        fmt_active_toggle = "{0!s:12} {1!s:45}".format
        print("Active Toggle tools")
        print(fmt_active_toggle("Group", "Active"))
        print('-' * 80)
        for group, active in self.toolmanager.active_toggle.items():
            print(fmt_active_toggle(group, active))


class GroupHideTool(ToolToggleBase):
    """Show lines with a given gid."""
    default_keymap = 'S'
    description = 'Show by gid'
    default_toggled = True

    def __init__(self, *args, gid, **kwargs):
        self.gid = gid
        super().__init__(*args, **kwargs)

    def enable(self, *args):
        self.set_lines_visibility(True)

    def disable(self, *args):
        self.set_lines_visibility(False)

    def set_lines_visibility(self, state):
        for ax in self.figure.get_axes():
            for line in ax.get_lines():
                if line.get_gid() == self.gid:
                    line.set_visible(state)
        self.figure.canvas.draw()


fig = plt.figure()
plt.plot([1, 2, 3], gid='mygroup')
plt.plot([2, 3, 4], gid='unknown')
plt.plot([3, 2, 1], gid='mygroup')

# Add the custom tools that we created
fig.canvas.manager.toolmanager.add_tool('List', ListTools)
fig.canvas.manager.toolmanager.add_tool('Show', GroupHideTool, gid='mygroup')

# Add an existing tool to new group `foo`.
# It can be added as many times as we want
fig.canvas.manager.toolbar.add_tool('zoom', 'foo')

# Remove the forward button
fig.canvas.manager.toolmanager.remove_tool('forward')

# To add a custom tool to the toolbar at specific location inside
# the navigation group
fig.canvas.manager.toolbar.add_tool('Show', 'navigation', 1)

plt.show()

Which opens this plot where you can hide/show some data:

(screenshot of the plot with the customized toolbar omitted)

How can I add such a button to display some text (regarding the plot data) on a new window?
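
One way (a sketch appended to the example above; the displayed text is arbitrary): reuse the same ToolBase mechanism and let trigger open a second pyplot figure as the "new window". A toolkit dialog (tkinter.messagebox, Qt) would work too, but this keeps it backend-agnostic:

class InfoTool(ToolBase):
    """Open a small window with information about the plot."""
    default_keymap = 'i'
    description = 'Show plot info'

    def trigger(self, *args, **kwargs):
        # a second pyplot figure acts as the "new window"
        n_lines = sum(len(ax.get_lines()) for ax in self.figure.get_axes())
        info_fig = plt.figure(figsize=(4, 2))
        info_fig.text(0.5, 0.5, 'This plot contains {} lines.'.format(n_lines),
                      ha='center', va='center')
        info_fig.show()

fig.canvas.manager.toolmanager.add_tool('Info', InfoTool)
fig.canvas.manager.toolbar.add_tool('Info', 'navigation', 2)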



from How to add an information display button to the interactive plot toolbar

Discord.py bot joins voice channel but when using voicechannel.play i get error:Command raised an exception: ClientException: Not connected to voice

So I've managed to get my Discord bot to join a voice channel, but when I use the play command it gives me an error that it's not connected to voice: Command raised an exception: ClientException: Not connected to voice.

Here is my code:

import discord
import random
import glob
from discord.ext import commands

##discord intents 
intents = discord.Intents()
intents.members = True
intents.messages = True
intents.guilds = True
intents.voice_states = True

##the commands below reference `bot`, so it is created somewhere like this
##(the command prefix is an assumption; only the relevant code was posted)
bot = commands.Bot(command_prefix='!', intents=intents)

Connect the bot to a voice channel:

##called when user wants bot to join voice channel
@bot.command(name ='join', help = 'Make the bot join a voice channel')
async def join(context):
    
    botVoice = context.message.guild.voice_client
    if context.guild.voice_client:
        botvoicechannel = context.message.guild.voice_client.channel
    else:
        botvoicechannel = None
    authorVoice = context.author.voice
    if context.author.voice:
        authorvoicechannel = context.author.voice.channel
    else:
        authorvoicechannel = None
    
    ##await context.reply('bot voice channel: {}\n\nbot voice:\n{}\n\nauthor voice channel: {}\n\nauthor voice voice:\n{}\n\n'.format(botvoicechannel, botVoice, authorvoicechannel, authorVoice))
    
    if not authorVoice and not botVoice:
        await context.reply('Connect to a voice channel first.')
        return
    elif authorVoice and not botVoice:
        await context.reply('Connecting to {}'.format(authorvoicechannel))
        await authorvoicechannel.connect()
        return
    elif not authorVoice and botVoice:
        await context.reply("You aren't in a channel, I'm in {}".format(botvoicechannel))
        return
    elif authorVoice and botVoice:
        if (botvoicechannel == authorvoicechannel):
            await context.reply("I'm already in here.")
            return
        else:
            await context.reply('Moving to you!')
            await botVoice.move_to(authorvoicechannel)
            return
        return

and have it play a url:

@bot.command(name ='play', help = 'Make the bot play a url')
async def play(context, url):
    botVoice = context.message.guild.voice_client
    audioSource = discord.FFmpegPCMAudio(url, executable="ffmpeg")
    botVoice.play(audioSource, after = None)

The code works and the bot joins the voice channel; I see it in the voice channel with me. However, I get an error when it reaches botVoice.play(audioSource, after = None).

the error message is: error: Command raised an exception: ClientException: Not connected to voice.

I changed botVoice = context.message.guild.voice_client from botVoice = context.guild.voice_client, but that didn't seem to change anything, and I'm not sure what to try next. It seems like it wants to play the url; the bot just doesn't realize it's in the voice channel with me.

Maybe a related error: if I kill my Python script, the bot remains in the channel even though the script isn't running. Then when I start it up again and do the !join command, it says it's joining even though it's already in the channel with me. It's weird, because the join command checks whether the bot is already in a voice channel, so it should know it's in there with me. I don't know what to try next; thanks for any suggestions. I only posted the relevant code, so let me know if you think I'm missing something else.

Thanks for the help.
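
A sketch of a more defensive play command (an assumption: the stale voice session left over from the killed process is what makes the cached voice_client unusable, so reconnecting on demand fixes both symptoms):

@bot.command(name='play', help='Make the bot play a url')
async def play(context, url):
    voice = context.voice_client
    if voice is None or not voice.is_connected():
        if context.author.voice is None:
            await context.reply('Connect to a voice channel first.')
            return
        # (re)connect; this also replaces a stale session from a previous run
        voice = await context.author.voice.channel.connect()
    voice.play(discord.FFmpegPCMAudio(url, executable='ffmpeg'))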



from Discord.py bot joins voice channel but when using voicechannel.play i get error:Command raised an exception: ClientException: Not connected to voice

Most efficient way to parse dataset generated using petastorm from parquet

Versions : Python3.7.13, Tensorflow-2.9.1, Petastorm-0.12.1

I'm trying to implement a data loading framework that creates a tf.data.Dataset from parquet files stored in S3 using petastorm.

Creating dataset as follows:

cols = [col1_nm, col2_nm, ...]
def parse(e):
    x_features = []
    for c in cols:
        x_features.append(getattr(e,c))
    X = tf.stack(x_features, axis=1)
    y = getattr(e, 'target')
    return X, y

with make_batch_reader(s3_paths, schema_fields=cols+['target']) as reader:
    dataset = make_petastorm_dataset(reader).map(parse)
    for e in dataset.take(3):
        print(e)

All is well, but I want to know whether there is an alternative (more efficient and maintainable) way.

Before parsing, the dataset is of type DatasetV1Adapter, and each element e in the dataset (obtained via dataset.take(1)) is of type inferred_schema_view, which consists of an EagerTensor for each feature.

I've tried using an index to split X and y; however, reading the last element via [-1] does not return the target's eager tensor.
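
For what it's worth, a small refactor sketch (assuming every non-target field is a feature, which holds for the schema_fields above): each row object is a namedtuple, so the feature list can be derived from it instead of being maintained separately in cols:

def parse(e):
    # the row is a namedtuple, so its column names are introspectable
    y = e.target
    X = tf.stack([getattr(e, name) for name in e._fields if name != 'target'],
                 axis=1)
    return X, y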



from Most efficient way to parse dataset generated using petastorm from parquet

Tuesday, 23 May 2023

Web Scraping Python / Error 506 Invalid Request

I am trying to scrape the website "https://ift.tt/tR6UTFx", but even though I can see the HTML elements in the inspector, when I request the webpage via Python I only get the error below.

Here is what I have in my script:

import requests

url_path = r'https://www.ticketweb.com/search?q='

HEADERS = {
    "Accept": "*/*",
    "Accept-Encoding": "utf-8",
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36"
}

response = requests.get(url_path, headers=HEADERS)

content = response.text

print(content)

Here is the response:

<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
 "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html>
  <head>
    <title>506 Invalid request</title>
  </head>
  <body>
    <h1>Error 506 Invalid request</h1>
    <p>Invalid request</p>
    <h3>Error 54113</h3>
    <p>Details: cache-dfw-kdfw8210093-DFW 1678372070 120734701</p>
    <hr>
    <p>Varnish cache server</p>
  </body>
</html>
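
One thing worth testing first (an assumption about the cause, not a certainty): "utf-8" is not a valid Accept-Encoding token, since that header lists compression schemes rather than charsets, and sites fronted by Varnish are known to reject requests they consider malformed. A sketch with more browser-like headers:

import requests

url_path = 'https://www.ticketweb.com/search?q='

HEADERS = {
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Encoding": "gzip, deflate, br",  # compression schemes, not charsets
    "Accept-Language": "en-US,en;q=0.9",
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36",
}

response = requests.get(url_path, headers=HEADERS)
print(response.status_code)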


from Web Scraping Python / Error 506 Invalid Request

PySide6: How to clear the values of all QPlainTextEdit and QComboBox elements after inserting data into a database?

Hi, I want to clear all QPlainTextEdit, QComboBox, etc. widgets when a product is inserted into the database, but I don't know how to do it. Right now this is my function:

def commit(self):

    self.inserebd()
    tree_data = self.dadostree()
    self.contarserie()

    sql = """INSERT INTO "Teste"("DataEntrada","Código de Barras","Numero de serie","Categoria","Marcasss","Modelo","Fornecedores","Estado","Projeto","Valor/Uni","Quantidade") VALUES (%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)"""

    for paren_data, children_data in tree_data:

        codigo_barras = paren_data
        if not children_data:

            dados = (self.data, codigo_barras, 'No', self.idC, self.id, self.modelo, self.idf, self.estado, self.projeto, self.valor, self.quantidade)
            cursor.execute(sql, dados)
            con.commit()
            QtWidgets.QMessageBox.information(self, "Sucesso", f" {self.total_serie} Produtos Inseridos")
                
   

Tree_data is this:

[('b', ['b']), ('c', ['c'])]

The problem is I don't know where to put this part where I clear everything:

self.ui.treeWidget.clear()
self.ui.textmodelo.clear()
self.ui.textEdit.clear()
self.ui.textEdit_Data.clear()
self.ui.textEdit_Preco.clear()
self.dodo = []
self.ui.comboBox.setCurrentIndex(self.default_item_index)
self.ui.comboBox_Marca.setCurrentIndex(self.default_item_index)

Because if I put it inside the for loop, it's going to cause problems when I insert more than one product; but if I put it outside the for loop, it's also going to cause problems, because if the insert fails it's going to clear all the data too.

And I just want to clear the text edits, combo boxes, etc. when the data has already been inserted. I know it's a bit confusing, and it's my first time posting a question, but if you have any questions, ask; I'll try to clarify as much as I can. (A sketch of one approach follows below.)
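
For what it's worth, a sketch of the usual way out of this dilemma (sql as defined above; clear_form is a hypothetical helper holding the .clear()/setCurrentIndex calls): run every insert, commit once, and clear the widgets only when nothing raised:

def commit(self):
    self.inserebd()
    tree_data = self.dadostree()
    self.contarserie()

    try:
        for paren_data, children_data in tree_data:
            if not children_data:
                dados = (self.data, paren_data, 'No', self.idC, self.id,
                         self.modelo, self.idf, self.estado, self.projeto,
                         self.valor, self.quantidade)
                cursor.execute(sql, dados)
        con.commit()  # a single commit for the whole batch
    except Exception:
        con.rollback()  # nothing was saved, so keep the form filled in
        raise
    else:
        # reached only if every insert succeeded
        QtWidgets.QMessageBox.information(
            self, "Sucesso", f"{self.total_serie} Produtos Inseridos")
        self.clear_form()  # hypothetical helper with the .clear() calls above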

Thank you for the help.



from PySide6: How to clear the values of all QPlainTextEdit and QComboBox elements after inserting data into a database?

psycopg2 cursor hangs up when query time is too long

I have a problem executing long-running queries using psycopg2 in Python. When a query takes more than 180 seconds, the script execution hangs for a long time.

I use Python 3.4.3 and psycopg2 2.6.1.

Here are samples to reproduce the issue:

import psycopg2

cnn = psycopg2.connect(
    database='**********',
    user='**********',
    password='**********',
    host='**********',
    port=5432,
)
print("Connected")
cursor = cnn.cursor()
seconds = 5
print("Sleep %s seconds"%seconds)
cursor.execute("SELECT pg_sleep(%s);"%seconds)
print("Exit.")

Script works fine when query takes 5 seconds:

$python3 /tmp/test.py 
Connected
Sleep 5 seconds
Exit.

But when the number of seconds is about 180 or greater, the cursor.execute line hangs and the instructions below it are never executed:

import psycopg2

cnn = psycopg2.connect(
    database='**********',
    user='**********',
    password='**********',
    host='**********',
    port=5432,
)
print("Connected")
cursor = cnn.cursor()
seconds = 180
print("Sleep %s seconds"%seconds)
cursor.execute("SELECT pg_sleep(%s);"%seconds)
print("Exit.")

Here is the output:

$python3 /tmp/test.py 
Connected
Sleep 180 seconds
<Never exit>

Does anyone know how to solve this problem? Thank you.
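
A pattern that matches this symptom exactly (an assumption about this particular network): a firewall or NAT between client and server silently drops TCP connections that stay idle for a few minutes, so the server's reply never arrives and the client waits forever. libpq's keepalive options, which psycopg2 passes straight through, are the usual cure:

cnn = psycopg2.connect(
    database='**********',
    user='**********',
    password='**********',
    host='**********',
    port=5432,
    keepalives=1,            # enable client-side TCP keepalives
    keepalives_idle=60,      # start probing after 60 s of silence
    keepalives_interval=10,  # probe every 10 s
    keepalives_count=5,      # declare the link dead after 5 failed probes
)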



from psycopg2 cursor hangs up when query time is too long

PostCSS can't find global CSS bootstrap classes when using @extend from postcss-extend-rule in vite

I'm working on a project (Vite, Vue 3, TS, and Bootstrap) and I can't get "@extend" to work through PostCSS.

This is a sample project I created to show what I want to do: Github repo

If you look at the file src/components/HelloWorld.vue you will see that there is a button with two Bootstrap classes ("btn" and "btn-success").

What I want to do is implement the @extend functionality through PostCSS, to achieve something like this:

<template>
  <div class="text-center">
    <button type="button" class="my-btn">Success</button>
  </div>
</template>

<style scoped>
  .my-btn {
    @extend btn btn-success;
  }

</style>

But I can't get this to work. I'm new to PostCSS, and I don't quite understand what configuration I'm missing to achieve what I want to do.

I have already tried these plugins

https://www.npmjs.com/package/postcss-nesting

https://www.npmjs.com/package/postcss-nested

https://www.npmjs.com/package/postcss-apply

but it seems none of them does the trick.

Any ideas?

EDIT: I used the @extend prop since it's the one I think should be the keyword, but maybe it's something like @apply; not sure really.

EDIT 2: I was able to make it work using postcss-extend-rule, but only if the extended class was in the same Vue file's style scope. I think the problem here is making PostCSS able to find Bootstrap's global classes.

example:

/*this will work*/
.my-class { 
  color: red; 
  background: pink; 
} 

.my-btn { 
  @extend .my-class; 
}


/*this will not work*/
.my-btn { 
  @extend .btn; 
}
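
For reference, a sketch that sidesteps the PostCSS plugin question entirely (assumes the sass package is installed; Vite picks it up automatically for lang="scss"): Bootstrap ships Sass sources, and Sass's @extend can see its classes once they are imported into the component's style block:

<style scoped lang="scss">
@import "bootstrap/scss/bootstrap";

.my-btn {
  @extend .btn;
  @extend .btn-success;
}
</style>

If postcss-extend-rule is a hard requirement, the same principle applies: the Bootstrap classes must be present in the stylesheet being processed (for example inlined with postcss-import) before the plugin can extend them.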


from PostCSS can't find global CSS bootstrap classes when using @extend from postcss-extend-rule in vite

Android Navigation Component load 2 nested Fragments into Parent Fragment

I have a Comparator screen, a Fragment that is split into 2 sub-screens. Before using the Navigation Component I could easily just:

private void initializeFragments() {

    FoodComparatorFragment comparatorFragment1 = new FoodComparatorFragment();
    FoodComparatorFragment comparatorFragment2 = new FoodComparatorFragment();

    FragmentTransaction transaction = getSupportFragmentManager().beginTransaction();
    transaction.add(R.id.comparator_fragment_1, comparatorFragment1);
    transaction.add(R.id.comparator_fragment_2, comparatorFragment2);

    transaction.commit();
}

However, I don't know how I should perform this operation with the Navigation Component WITHOUT explicitly using the FragmentManager.
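
A sketch of the usual Navigation Component answer (the graph name and ids here are assumptions): declare each pane as its own NavHostFragment directly in the parent fragment's layout via FragmentContainerView, so no FragmentManager transaction is needed:

<!-- parent fragment layout: two independent navigation hosts -->
<androidx.fragment.app.FragmentContainerView
    android:id="@+id/comparator_fragment_1"
    android:name="androidx.navigation.fragment.NavHostFragment"
    android:layout_width="match_parent"
    android:layout_height="0dp"
    android:layout_weight="1"
    app:navGraph="@navigation/comparator_graph" />

<androidx.fragment.app.FragmentContainerView
    android:id="@+id/comparator_fragment_2"
    android:name="androidx.navigation.fragment.NavHostFragment"
    android:layout_width="match_parent"
    android:layout_height="0dp"
    android:layout_weight="1"
    app:navGraph="@navigation/comparator_graph" />

Each host then keeps its own back stack for its half of the screen; the graph can simply start at FoodComparatorFragment.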



from Android Navigation Component load 2 nested Fragments into Parent Fragment