Wednesday, 30 November 2022

Improving small images for data extraction

Using OpenCV or the Pillow library, how can we improve the images below for Tesseract?

[three sample images omitted]

I tried the code below with multiple options like thresholding, blurring, and enhancing, but was not able to improve the results.

import cv2
import pytesseract

# Convert to grayscale
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
print(pytesseract.image_to_string(img))

# Mean blur with a 3x3 kernel (cv2.blur is a box filter, not a median filter)
img_mean_blur = cv2.blur(img, (3, 3))
print(pytesseract.image_to_string(img_mean_blur))

# Gaussian blur followed by Otsu thresholding (threshold the blurred image)
blur = cv2.GaussianBlur(img, (5, 5), 0)
ret3, th3 = cv2.threshold(blur, 150, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print(pytesseract.image_to_string(th3))
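
Tesseract tends to struggle with very small text, and a common first step is to upscale before binarising. Below is a minimal sketch of that idea (the file name small.png, the 3x factor and the --psm value are assumptions to tune, not a definitive fix):

import cv2
import pytesseract

img = cv2.imread("small.png")  # placeholder for one of the images above
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Upscale so the glyph height gets closer to the ~30 px Tesseract prefers
scaled = cv2.resize(gray, None, fx=3, fy=3, interpolation=cv2.INTER_CUBIC)

# Otsu threshold on the upscaled image rather than the original
_, binary = cv2.threshold(scaled, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Treat the image as a single text line; tune --psm to the actual layout
print(pytesseract.image_to_string(binary, config="--psm 7"))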


from Improving small images for data extraction

Convert type `(env) => (args) => TaskEither` to ReaderTaskEither

In my SPA, I have a function that needs to:

  1. Create an object (e.g. a "tag" for a user)
  2. Post it to our API

type UserId = string;
type User = {id: UserId};

type TagType = "NEED_HELP" | "NEED_STORAGE"
type Tag = {
  id: string;
  type: TagType;
  userId: UserId;
}
type TagDraft = Omit<Tag, "id">

// ----

const createTagDraft = ({tagType, user} : {tagType: TagType, user: User}): TagDraft => ({
  type: tagType, userId: user.id
})

const postTag = (tagDraft: TagDraft) => pipe(
    TE.tryCatch(
      () => axios.post('https://myTagEndpoint', tagDraft),
      (reason) => new Error(`${reason}`),
    ),
    TE.map((resp) => resp.data),
  )

I can combine the entire task with

const createTagTask = flow(createTagDraft, postTag)

Now I would like to also clear some client cache that I have for Tags. Since the cache object has nothing to do with the arguments needed for the tag, I would like to provide it separately. I do:

function createTagAndCleanTask(queryCache) {
  return flow(
    createTagDraft,
    postTag,
    TE.chainFirstTaskK((flag) =>
      T.of(
        queryCache.clean("tagCache")
      )
    )
  )
}

// which I call like this
createTagAndCleanTask(queryCache)({tagType: "NEED_HELP", user: bob})

This works, but I wonder if this is not exactly what I could use ReaderTaskEither for?

Idea 1: I tried to use RTE.fromTaskEither on createTagTask, but createTagTask is a function that returns a TaskEither, not a TaskEither itself...

Idea 2: I tried to use RTE.fromTaskEither as a third step in the flow after postTag but I don't know how to provide proper typing then and make it aware of a env config object.

My understanding of this article is that I should aim at something like (args) => (env) => body instead of (env) => (args) => body for each function. But I cannot find a way to invert arguments that are provided directly via flow.

Is there a way I can rewrite this code so that I can provide env objects like queryCache in a cleaner way?



from Convert type `(env) => (args) => TaskEither` to ReaderTaskEither

How to set xAxis display certain value independent from entry using mpchart

For example, I have a data set like following:

val dataSetX = listOf(101, 205, 210, 445, 505)
val dataSetY = listOf(50, 100, 150, 200, 250)

The labels on the xAxis of the chart will be 101, 205, 210, 445, 505.

But I want them to be 100,200,300,400,500, without changing the data.

What do I do?



from How to set xAxis display certain value independent from entry using mpchart

Changing HTML attribute with URL

I'm not entirely sure if this is possible and have been unable to find any info on this matter, which isn't giving me much hope, but maybe I can find an answer this way. For some context, my question concerns this page. For reference: I'll be referring to "tabs" in my question, this is about the tabs towards the bottom of that page, not browser tabs.

I'm working on a revamp of the website for the company I work for as a Communications employee. As part of this revamp, we want to place an infographic on the website detailing our work process, and allow users to click it to get information about the step of the infographic they just clicked on. We use Wordpress and a free version of Elementor, which limits my ability to make any changes outside of the front-end that Wordpress/Elementor gives me.

I'm currently using Adobe Illustrator to create an image map of the infographic using Illustrator's Attributes menu, and have been able to use hash signs to make the page jump down to the text about the step by using the div id of the tab in question. However, in order to make this work, I also need to be able to actually change the open tab on the page. I've figured out that this relies on two HTML attributes needing to be changed:

  • The class attribute of both the tab that needs to close and the tab that needs to be opened needs to be adjusted. A closed tab uses elementor-tab-title elementor-tab-desktop-title, an open tab uses elementor-tab-title elementor-tab-desktop-title elementor-active.
  • The aria-expanded attribute of the tab that needs to close needs to be changed to false, while the tab that needs to be opened requires the attribute to be set to true.

Is there any way to pull this off using the URL? If not, what other methods can I use, given the limitations of the system I'm working with?

I've searched across the internet for solutions, taken a look at Elementor-focused tutorials, and searched Stack Exchange. While I have found solutions that involve JS/JQuery scripting, this is unfortunately not possible due to the limitations of the software I'm working with. If there's something that involves a URL, I can use that through image mapping, which should allow me to work around these limitations.



from Changing HTML attribute with URL

UnboundLocalError: local variable 'dist' referenced before assignment

I am trying to train a model for supervised learning of a Hidden Markov Model (HMM) and test it on a set of observations; however, I keep getting this error. The goal is to predict the state based on the observations. How can I fix this, and how can I view the transition matrix?

The pomegranate version is 0.14.4. I am following this source: https://github.com/jmschrei/pomegranate/issues/1005

from pomegranate import *
import numpy as np
# Supervised method that calculates the transition matrix:
d1 = State(UniformDistribution.from_samples([3.243221498397177, 3.210684537495482, 3.227662201472816,
    3.286410817416738, 3.290573650708864, 3.286058136226862, 3.266480693857006]))
d2 = State(UniformDistribution.from_samples([3.449282367485096, 1.97317859465635, 1.897551432353011,
     3.454609351559659, 3.127357456033111, 1.779308337786426, 3.802891929694426, 3.359766157565077, 2.959428499979418]))
d3 = State(UniformDistribution.from_samples([1.892812118441474, 1.589353118681066, 2.09269978285637,
     2.104391496570218, 1.656771181054144]))
model = HiddenMarkovModel()
model.add_states(d1, d2, d3)
# print(model.to_json())
model.bake()
model.fit([3.2, 6.7, 10.55], labels=[1, 2, 3], algorithm='labeled')
all_pred = model.predict([2.33, 1.22, 1.4, 10.6])

Error:

 File "C:\Program Files\JetBrains\PyCharm Community Edition 2021.2\plugins\python-ce\helpers\pydev\_pydev_bundle\pydev_umd.py", line 198, in runfile
    pydev_imports.execfile(filename, global_vars, local_vars)  # execute the script
  File "C:\Program Files\JetBrains\PyCharm Community Edition 2021.2\plugins\python-ce\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "C:/Users/", line 774, in <module>
    model.bake()
  File "pomegranate/hmm.pyx", line 1047, in pomegranate.hmm.HiddenMarkovModel.bake
UnboundLocalError: local variable 'dist' referenced before assignment
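
For comparison, below is a labeled-training sketch in the shape pomegranate's examples use, with named states, explicit transitions before bake(), and the transition matrix printed at the end. The state names, distributions and label format here are assumptions (labeled-training details have varied between pomegranate releases), not a confirmed fix:

from pomegranate import HiddenMarkovModel, NormalDistribution, State

# Named states, so that labels can refer to them by name
s1 = State(NormalDistribution(3.25, 0.05), name="s1")
s2 = State(NormalDistribution(2.00, 0.50), name="s2")

model = HiddenMarkovModel()
model.add_states(s1, s2)
# Give bake() transitions to work with
model.add_transition(model.start, s1, 0.5)
model.add_transition(model.start, s2, 0.5)
model.add_transition(s1, s1, 0.5)
model.add_transition(s1, s2, 0.5)
model.add_transition(s2, s1, 0.5)
model.add_transition(s2, s2, 0.5)
model.bake()

# One label per observation, referring to states by name (an assumption)
model.fit([[3.2, 3.3, 2.1]], labels=[["s1", "s1", "s2"]], algorithm="labeled")

# Viewing the transition matrix after fitting
print(model.dense_transition_matrix())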


from UnboundLocalError: local variable 'dist' referenced before assignment

Link getting malformed while parsing email

I am getting the emails from the Gmail API in full format and then parsing them into string format so that I can extract relevant links. When viewing the email on the official Gmail site, if I inspect the button whose redirect link I am interested in, it looks something like this.

<p style="margin-top:0px;margin-bottom:20px;font-family:Arial"> 


<a href="http://delivery.ncp.flipkart.com/HCIPJTENG?id=88656=ehkDAAoOVwYCTFZTVwMNCQoAXQdTAFYBVAJVDgQFVgBfUAIHBQVXBAUCAANWAlxYUFRFAgAAA1EEUgQPBAcBDgpXUlJXBgQHDFEHUldSXVsBW1oDVAAHBQIJBEsFBFYODFACBAQDWAYBBQFUWkhQTUYTAxoaUABeW0cERU0cDVRJS1VcW0YKUkZEGgYMWRdxcSppf2FxK3UNWAVLQgE=&fl=URFHQAgZTl8aVlgME19ZS0ZNWlpYGxEdUV0IVF8dVW9NFVZzR0cCeVUFBAAMXFlzZRAOTX9XDFxReQZcdkcuQlE1cVwHYih5UGcMPA5iaVdtCFxeDF4xVXpYDnINBxNAUxZsaWR8A0BbX2sRIlYIch8HQGBTQiZwAUQrdgdKLUdQBll8BlRQcQFHGTIBTVpACy9zYFdtF2EGYztbVlNVZE5VRklmaRdCfgJYAQlzS09HVVtMfgQOeWgECQZbZFB1ABxJBH5bBgRsfX49IG5pDx8mUlN9TVUAd0I1D1VfLFkAEF5mfU5TSgd6GSkXWlZOHwVNBWZ5ImB2YxF2UFgGW3IQd3YFeRsDdVZtASd1SXNIF1IYW2VRbWdUEFsBdhlvWFNwd0QPGWxDYWNdPGdZc1gMSltjYhxAR0EgYGZdNXRvIwpdWAYHaVV7TApXTHNRYTJMGGNBEQZFcjYEZ0oiQUgBW3EDRwV+fEhNHDNtdHUKD0kDQAcLV0djL21WcAVmYzp1e0B8IlxHcgIIMVN9f2YpbkNTAjYDBwQqYUcFNgdeNVxcYhsAZH1FDT0QUm4BBVVLdmFbFV5KZzgBeVwAY00gckh+UAJceEAZOgxfekBLB29TBm4uVFV1C1J1CxoaSgQBcUFnKgdHCH83U18BUQsGDVlhTl1EbXIbZ25VEVhXDVBlW1MQVnBnTTE7QV4LXCFYdn91I1ZrBBlEekcODgoGQ0MLGxFgbH5+KjdVXm9TM2hMBUUABR8GBntDdAVoZhZpckdaJn11dkMiIVENfwIVSwdDBVV2fWlZcQRgGlNyB0FdfEQRYAZxegwyf1paZStrB1gGInheUzJ6a10hBgwpdUBBeg0LDA0=&ext=ZT10cnVl" style="background-color:rgb(41,121,251);font-family:Arial;color:#fff;border:0px;font-size:14px;display:inline-block;margin-top:0px;border-radius:2px;text-decoration:none;width:160px;line-height:32px;text-align:center" target="_blank" data-saferedirecturl="https://www.google.com/url?q=http://delivery.ncp.flipkart.com/HCIPJTENG?id%3D88656%3DehkDAAoOVwYCTFZTVwMNCQoAXQdTAFYBVAJVDgQFVgBfUAIHBQVXBAUCAANWAlxYUFRFAgAAA1EEUgQPBAcBDgpXUlJXBgQHDFEHUldSXVsBW1oDVAAHBQIJBEsFBFYODFACBAQDWAYBBQFUWkhQTUYTAxoaUABeW0cERU0cDVRJS1VcW0YKUkZEGgYMWRdxcSppf2FxK3UNWAVLQgE%3D%26fl%3DURFHQAgZTl8aVlgME19ZS0ZNWlpYGxEdUV0IVF8dVW9NFVZzR0cCeVUFBAAMXFlzZRAOTX9XDFxReQZcdkcuQlE1cVwHYih5UGcMPA5iaVdtCFxeDF4xVXpYDnINBxNAUxZsaWR8A0BbX2sRIlYIch8HQGBTQiZwAUQrdgdKLUdQBll8BlRQcQFHGTIBTVpACy9zYFdtF2EGYztbVlNVZE5VRklmaRdCfgJYAQlzS09HVVtMfgQOeWgECQZbZFB1ABxJBH5bBgRsfX49IG5pDx8mUlN9TVUAd0I1D1VfLFkAEF5mfU5TSgd6GSkXWlZOHwVNBWZ5ImB2YxF2UFgGW3IQd3YFeRsDdVZtASd1SXNIF1IYW2VRbWdUEFsBdhlvWFNwd0QPGWxDYWNdPGdZc1gMSltjYhxAR0EgYGZdNXRvIwpdWAYHaVV7TApXTHNRYTJMGGNBEQZFcjYEZ0oiQUgBW3EDRwV%2BfEhNHDNtdHUKD0kDQAcLV0djL21WcAVmYzp1e0B8IlxHcgIIMVN9f2YpbkNTAjYDBwQqYUcFNgdeNVxcYhsAZH1FDT0QUm4BBVVLdmFbFV5KZzgBeVwAY00gckh%2BUAJceEAZOgxfekBLB29TBm4uVFV1C1J1CxoaSgQBcUFnKgdHCH83U18BUQsGDVlhTl1EbXIbZ25VEVhXDVBlW1MQVnBnTTE7QV4LXCFYdn91I1ZrBBlEekcODgoGQ0MLGxFgbH5%2BKjdVXm9TM2hMBUUABR8GBntDdAVoZhZpckdaJn11dkMiIVENfwIVSwdDBVV2fWlZcQRgGlNyB0FdfEQRYAZxegwyf1paZStrB1gGInheUzJ6a10hBgwpdUBBeg0LDA0%3D%26ext%3DZT10cnVl&source=gmail&ust=1669470586360000&usg=AOvVaw2TGtpbQMv9Nx4CsQgYUmpZ">


</p>

But when I parse the email, the link is malformed. The link I am getting when parsing the email is this

http://delivery.ncp.flipkart.com/HCIPJTENG?id=88656=ehkDAAoOVwYCTFdSUQZZXwZVCAJQUlIBBghXDgxQBlUBVwQDVwFXA1EAAl1RBF0OClRFAgAAA1EEUgQPBAcBDgpXUlJXBgQHDFEHUldSXVsBW1oDVAAHBQIJBEsFBFYODFACBAQDWAYBBQFUWkhQTUYTAxoaUABeW0cERU0cDVRJS1VcW0YKUkZEGgYMWRdxcSppf2FxK3UNWAVLQgE=&fl=URFHQAgZTl8aVlgME19ZS0ZNWlpYGxEdUV0IVF8dClhsAUF8fWAUB1ZRVhwlTFBDQisOYl5dOlhQew5UXVZWVHcLW2p+YQoFewJsLTVRe2gALm0AQWVdQ3FnAkMCWC1xaQB0dB9GE1BefFUyOVZBSXQmDldwbTQGamQOAUZoJwFxJFRJdm8EUWxjWzRWe012eyJ6ZHwBBn1lUDhFY2IWXVRXWUpeeSJpcXdcHS8ZfVcHAXVmUVlcSHxzKFpVcQVjDi9HX1oDGwtnXnpRBnFbSUo0SnB7XipXfEMLR1dHVhptBkleC2EFAENGbiATfWF7czN3d3cEXQcLfgpoeWgqVQpcQ0hfXExwUGoMCAFADHdhF3tHR2UwQ2FWKGNfdBVbQTQeYndyKGpQeV0cOnZ0e0BRaHtHZ1BDR2QYVQ1qWnR6FkdWWQMOcQBWTDIWA3VyBjtLY1FwEH94YxZlbmEzD08OfEVRcxR0UgF7KwFXbWZeLAteA202SGRWBl9VfQV+XVVCRnt1JEV6CV0GGmdBU3k2dwNyZRR/Y3IQTVF0EHRUEHFkWxswUUVJYAcSeVlzACZPd3BVUllGAlVNAgAoAU0VWAVibjIEXQh4URlwUGhHBwx7U1wCXVFTFXECHw==&ext=ZT10cnVl

I am using this logic to decode the email when fetching it in full format from the Gmail API:

private fun getTextFromBodyPart(
    bodyPart: MessagePart
): String {

    var result: String = ""
    if (bodyPart.mimeType == "text/plain") {
        val r = bodyPart.body.decodeData()
        result = r.toString(Charsets.UTF_8)
    } else if (bodyPart.mimeType == "text/html") {
        val html = bodyPart.body.decodeData()
        result = html.toString(Charsets.UTF_8)
    }
    return result
}

and this

val emailSize = email.payload.parts.size
var parsedEmail = ""

for (k in 0 until emailSize) {
    parsedEmail += getTextFromBodyPart(email.payload.parts[k])
}

And I am then using a regex to extract the link from the email. The regex I am using is this: ^http:\/\/delivery\..+?\.flipkart\.com\/([A-Za-z0-9\?\=&\/\\\+]+)$

Also, when I analyse the decoded email, everything seems fine except the link. The link is malformed even when I click on "view original" of this email in the official Gmail site.

I cannot really understand why the link is getting malformed.

I also tried to get the emails in raw format using the Gmail API, but the link is again malformed after decoding it.



from Link getting malformed while parsing email

Tuesday, 29 November 2022

Make API call for every item in a collection and return final results in list

I am trying to make an API call (getAccounts) that returns a list, then build a second list by making a call (getCreditAccountDetails) for every object in that list (instead of just the first, as in the block below). How do I do that, and then notify the observer when that series of calls is complete?

viewModelScope.launch {
    gateway.getAccounts().fold(
        onSuccess = { v ->
            // fetch account details for first credit account in list
            gateway.getCreditAccountDetails(v.first().accountId, v.first().accountIdType).fold(
                onSuccess = { v -> addAccountToList(v) }
            )
        }
    )
}

Coroutines functions:

suspend fun getAccounts(): Result<List<LOCAccountInfo>> {
    return withContext(Dispatchers.IO) {
        AccountServicing.getCreditAccounts(creditAuthClient)
    }
}

suspend fun getCreditAccountDetails(accountId: AccountId, accountIdType: AccountIdType): Result<LOCAccount> {
    return getCreditAccountDetails(accountId, accountIdType, creditAuthClient)
}


from Make API call for every item in a collection and return final results in list

jquery fullcalendar drop non event on event

I have trouble dragging and dropping an item (it's not an event, it's an employee name) onto an event.

I want to drag an employee (see picture) onto an event, but the 'employee' is not an event. I am using jQuery UI draggable/droppable.

<script>
var calendar;
$(document).ready(function() {
    calendar = $('#calendar').fullCalendar({
        droppable: true,
        dropAccept: '.employeeItem',
        // fires when an external draggable is dropped on a calendar day
        drop: function(date, allDay, jsEvent, ui) {

        }
    });
});

$(function() {
    $(".employeeItem").draggable();
    $(".fc-event").droppable({
        // fires when an employee is dropped on a rendered event element
        drop: function(event, ui) {

        }
    });
});
</script>

<?php for($i=1;$i<=30;$i++){?>
   <div>Simpson, Homer<br>
      10 / 14 h
   </div>
<?php }?>

<div id='calendar'></div>

After dropping an employee on an event, I need the corresponding event data (event id etc.) to save the process to the db. But FullCalendar's drop callback only provides the date that the item was dropped on.



from jquery fullcalendar drop non event on event

How to generate arbitrary high dimensional connectivity structures for scipy.ndimage.label

I have some high dimensional boolean data, in this example an array with 4 dimensions, but this is arbitrary:

X.shape
 (3, 2, 66, 241)

I want to group the dataset into connected regions of True values, which can be done with scipy.ndimage.label, with the aid of a connectivity structure which says which points in the array should be considered to touch. The default 2-D structure is a cross:

[[0,1,0],
 [1,1,1],
 [0,1,0]]

This can easily be extended to higher dimensions if all those dimensions are connected. However, I want to programmatically generate such a structure from a list of which dims are connected to which:

#We want to find connections across dims 2 and 3 across each slice of dims 0 and 1:
dim_connections=[[0],[1],[2,3]]

#Now we want two separate connected subspaces in our data:
dim_connections=[[0,1],[2,3]]

For individual cases I can work out, with hard thinking, how to generate the correct structure tensor, but I am struggling to work out the general rule! For clarity, I want something like:

my_structure = construct_arbitrary_structure(ndim, dim_connections)
the_correct_result = scipy.ndimage.label(X, structure=my_structure)
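
For what it's worth, here is a sketch of construct_arbitrary_structure under one reading of the grouping: a multi-dim group means "connect +-1 steps along each of these axes", and a singleton group means "no connectivity along this axis". That matches the first example; whether the second example needs richer (e.g. diagonal) connectivity within a group is left open:

import numpy as np
import scipy.ndimage

def construct_arbitrary_structure(ndim, dim_connections):
    structure = np.zeros((3,) * ndim, dtype=bool)
    center = (1,) * ndim
    structure[center] = True
    for group in dim_connections:
        if len(group) < 2:
            continue  # assumption: a lone axis stays disconnected
        for dim in group:
            for neighbour in (0, 2):  # the two +-1 offsets along this axis
                idx = list(center)
                idx[dim] = neighbour
                structure[tuple(idx)] = True
    return structure

X = np.random.rand(3, 2, 66, 241) > 0.5  # stand-in for the boolean data above
my_structure = construct_arbitrary_structure(X.ndim, [[0], [1], [2, 3]])
labels, num_features = scipy.ndimage.label(X, structure=my_structure)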


from How to generate arbitrary high dimensional connectivity structures for scipy.ndimage.label

AWS Greengrass doesn't send data to AWS Kinesis

The main purpose of my program is to connect to an incoming MQTT channel, and send the data received to my AWS Kinesis Stream called "MyKinesisStream".

Here is my code:

import argparse
import logging
import random

from paho.mqtt import client as mqtt_client
from stream_manager import (
    ExportDefinition,
    KinesisConfig,
    MessageStreamDefinition,
    ResourceNotFoundException,
    StrategyOnFull,
    StreamManagerClient, ReadMessagesOptions,
)

broker = 'localhost'
port = 1883
topic = "clients/test/hello/world"
client_id = f'python-mqtt-{random.randint(0, 100)}'
username = '...'
password = '...'

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger()

args = ""

def connect_mqtt() -> mqtt_client:
    def on_connect(client, userdata, flags, rc):
        if rc == 0:
            print("Connected to MQTT Broker!")
        else:
            print("Failed to connect, return code %d\n", rc)

    client = mqtt_client.Client(client_id)
    client.username_pw_set(username, password)
    client.on_connect = on_connect
    client.connect(broker, port)
    return client


def sendDataToKinesis(
        stream_name: str,
        kinesis_stream_name: str,
        payload,
        batch_size: int = None,
):
    try:
        print("Debug: sendDataToKinesis with params:", stream_name + " | ", kinesis_stream_name, " | ", batch_size)
        print("payload:", payload)
        print("type payload:", type(payload))
    except Exception as e:
        print("Error while printing out the parameters", str(e))
        logger.exception(e)
    try:
        # Create a client for the StreamManager
        kinesis_client = StreamManagerClient()

        # Try deleting the stream (if it exists) so that we have a fresh start
        try:
            kinesis_client.delete_message_stream(stream_name=stream_name)
        except ResourceNotFoundException:
            pass

        exports = ExportDefinition(
            kinesis=[KinesisConfig(
                identifier="KinesisExport" + stream_name,
                kinesis_stream_name=kinesis_stream_name,
                batch_size=batch_size,
            )]
        )
        kinesis_client.create_message_stream(
            MessageStreamDefinition(
                name=stream_name,
                strategy_on_full=StrategyOnFull.OverwriteOldestData,
                export_definition=exports
            )
        )

        sequence_no = kinesis_client.append_message(stream_name=stream_name, data=payload)
        print(
            "Successfully appended message to stream with sequence number ", sequence_no
        )

        readValue = kinesis_client.read_messages(stream_name, ReadMessagesOptions(min_message_count=1, read_timeout_millis=1000))
        print("DEBUG read test: ", readValue)

    except Exception as e:
        print("Exception while running: " + str(e))
        logger.exception(e)
    finally:
        # Always close the client to avoid resource leaks
        print("closing connection")
        if kinesis_client:
            kinesis_client.close()


def subscribe(client: mqtt_client, args):
    def on_message(client, userdata, msg):
        print(f"Received `{msg.payload.decode()}` from `{msg.topic}` topic")
        sendDataToKinesis(args.greengrass_stream, args.kinesis_stream, msg.payload, args.batch_size)

    client.subscribe(topic)
    client.on_message = on_message


def run(args):
    mqtt_client_instance = connect_mqtt()
    subscribe(mqtt_client_instance, args)
    mqtt_client_instance.loop_forever()


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser()
    parser.add_argument('--greengrass-stream', required=False, default='...')
    parser.add_argument('--kinesis-stream', required=False, default='MyKinesisStream')
    parser.add_argument('--batch-size', required=False, type=int, default=500)
    return parser.parse_args()

if __name__ == '__main__':
    args = parse_args()
    run(args)

(The dotted parts ... are redacted as they are sensitive information, but the real values are correct.)

The problem is that it just won't send any data to our Kinesis stream. I get the following STDOUT from the run:

2022-11-25T12:13:47.640Z [INFO] (Copier) jp.co.xyz.StreamManagerKinesis: stdout. Connected to MQTT Broker!. {scriptName=services.jp.co.xyz.StreamManagerKinesis.lifecycle.Run, serviceName=jp.co.xyz.StreamManagerKinesis, currentState=RUNNING}
2022-11-25T12:13:47.641Z [INFO] (Copier) jp.co.xyz.StreamManagerKinesis: stdout. Received `{"machineId":2, .... "timestamp":"2022-10-24T12:21:34.8777249Z","value":true}` from `clients/test/hello/world` topic. {scriptName=services.jp.co.xyz.StreamManagerKinesis.lifecycle.Run, serviceName=jp.co.xyz.StreamManagerKinesis, currentState=RUNNING}
2022-11-25T12:13:47.641Z [INFO] (Copier) jp.co.xyz.StreamManagerKinesis: stdout. Debug: sendDataToKinesis with params: test |  MyKinesisStream  |  100. {scriptName=services.jp.co.xyz.StreamManagerKinesis.lifecycle.Run, serviceName=jp.co.xyz.StreamManagerKinesis, currentState=RUNNING}
2022-11-25T12:13:47.641Z [INFO] (Copier) jp.co.xyz.StreamManagerKinesis: stdout. payload: b'{"machineId":2,... ,"timestamp":"2022-10-24T12:21:34.8777249Z","value":true}'. {scriptName=services.jp.co.xyz.StreamManagerKinesis.lifecycle.Run, serviceName=jp.co.xyz.StreamManagerKinesis, currentState=RUNNING}
2022-11-25T12:13:47.641Z [INFO] (Copier) jp.co.xyz.StreamManagerKinesis: stdout. type payload: <class 'bytes'>. {scriptName=services.jp.co.xyz.StreamManagerKinesis.lifecycle.Run, serviceName=jp.co.xyz.StreamManagerKinesis, currentState=RUNNING}
2022-11-25T12:13:47.641Z [INFO] (Copier) jp.co.xyz.StreamManagerKinesis: stdout. Successfully appended message to stream with sequence number  0. {scriptName=services.jp.co.xyz.StreamManagerKinesis.lifecycle.Run, serviceName=jp.co.xyz.StreamManagerKinesis, currentState=RUNNING}
2022-11-25T12:13:47.641Z [INFO] (Copier) jp.co.xyz.StreamManagerKinesis: stdout. DEBUG read test:  [<Class Message. stream_name: 'test', sequence_number: 0, ingest_time: 1669376980985, payload: b'{"machineId":2,"mach'>]. {scriptName=services.jp.co.xyz.StreamManagerKinesis.lifecycle.Run, serviceName=jp.co.xyz.StreamManagerKinesis, currentState=RUNNING}
2022-11-25T12:13:47.641Z [INFO] (Copier) jp.co.xyz.StreamManagerKinesis: stdout. closing connection. {scriptName=services.jp.co.xyz.StreamManagerKinesis.lifecycle.Run, serviceName=jp.co.xyz.StreamManagerKinesis, currentState=RUNNING}

So we can see that the data arrives from MQTT, the python code executes the append message, and it seems that my kinesis streams have the information as it can read it in the next step... then closes the connection without any error.

But the problem is that, from the AWS side, we cannot see the data arriving on the stream (screenshot of the AWS console omitted).

What can be the problem here? Our Greengrass core is configured properly, can be accessed from AWS, and the component is also running and healthy (screenshots of the IoT Core status and the StreamManager component state omitted).



from AWS Greengrass doesn't send data to AWS Kinesis

Thousands of OkHttp related issues reported daily (connection, unknown host, dns, etc)

Setup

I have an app in production used by thousands of users daily.

I'm using retrofit2 version 2.9.0 (latest)

My build.gradle below.

def retrofitVersion = '2.9.0'
api "com.squareup.retrofit2:converter-gson:${retrofitVersion}"
api "com.squareup.retrofit2:converter-scalars:${retrofitVersion}"
api "com.squareup.retrofit2:adapter-rxjava2:${retrofitVersion}"
api "com.squareup.retrofit2:retrofit:${retrofitVersion}"

I integrated Firebase Crashlytics and made it so that the app reports any API-related exceptions caught in try-catch blocks.

e.g.

viewModelScope.launch {
    try {
        val response = myRepository.getProfile()
        if (response.isSuccessful) {
            // continue with some business logic
        } else {
            Log.e(tag, "error", RuntimeException("some error"))
        }
    } catch (throwable: Throwable) {
        Log.e(tag, "error thrown", throwable)
        crashlytics.recordException(throwable)
    }
}

Knowns

Now in Crashlytics, I get THOUSANDS of reports daily saying there were some errors. Before I get to those errors, I want to assure you that users ARE connected to the internet with proper network permissions. I see logs showing that users are opening other content at the time, so these errors seem to be really random.

Errors

  1. UnknownHostException
Non-fatal Exception: java.net.UnknownHostException: Unable to resolve host "my-host-address.com": No address associated with hostname
       at java.net.Inet6AddressImpl.lookupHostByName(Inet6AddressImpl.java:156)
       at java.net.Inet6AddressImpl.lookupAllHostAddr(Inet6AddressImpl.java:103)
       at java.net.InetAddress.getAllByName(InetAddress.java:1152)
       at okhttp3.Dns$Companion$DnsSystem.lookup(Dns.java:5)
...
Caused by android.system.GaiException: android_getaddrinfo failed: EAI_NODATA (No address associated with hostname)
       at libcore.io.Linux.android_getaddrinfo(Linux.java)
       at libcore.io.ForwardingOs.android_getaddrinfo(ForwardingOs.java:74)
       at libcore.io.BlockGuardOs.android_getaddrinfo(BlockGuardOs.java:200)
       at libcore.io.ForwardingOs.android_getaddrinfo(ForwardingOs.java:74)
       at java.net.Inet6AddressImpl.lookupHostByName(Inet6AddressImpl.java:135)
       at java.net.Inet6AddressImpl.lookupAllHostAddr(Inet6AddressImpl.java:103)
...
  2. ConnectException
Non-fatal Exception: java.net.ConnectException: Failed to connect to my-host-address.com/123.123.123.123:443
       at okhttp3.internal.connection.RealConnection.connectSocket(RealConnection.java:146)
       at okhttp3.internal.connection.RealConnection.connect(RealConnection.java:191)
       at okhttp3.internal.connection.ExchangeFinder.findConnection(ExchangeFinder.java:257)
       at okhttp3.internal.connection.ExchangeFinder.findHealthyConnection(ExchangeFinder.java)
       at okhttp3.internal.connection.ExchangeFinder.find(ExchangeFinder.java:47)
...
Caused by java.net.ConnectException: failed to connect to my-host-address.com/123.123.123.123 (port 443) from /:: (port 0) after 10000ms: connect failed: ENETUNREACH (Network is unreachable)
       at libcore.io.IoBridge.connect(IoBridge.java:142)
       at java.net.PlainSocketImpl.socketConnect(PlainSocketImpl.java:142)
       at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:390)
       at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:230)
       at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:212)
       at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:436)
       at java.net.Socket.connect(Socket.java:621)
...
Caused by android.system.ErrnoException: connect failed: ENETUNREACH (Network is unreachable)
       at libcore.io.Linux.connect(Linux.java)
       at libcore.io.ForwardingOs.connect(ForwardingOs.java:94)
       at libcore.io.BlockGuardOs.connect(BlockGuardOs.java:138)
       at libcore.io.ForwardingOs.connect(ForwardingOs.java:94)
       at libcore.io.IoBridge.connectErrno(IoBridge.java:173)
       at libcore.io.IoBridge.connect(IoBridge.java:134)
...
  3. SocketTimeoutException
Non-fatal Exception: java.net.SocketTimeoutException: timeout
       at okhttp3.internal.http2.Http2Stream$StreamTimeout.newTimeoutException(Http2Stream.java:4)
       at okhttp3.internal.http2.Http2Stream$StreamTimeout.exitAndThrowIfTimedOut(Http2Stream.java:8)
       at okhttp3.internal.http2.Http2Stream.takeHeaders(Http2Stream.java:24)
       at okhttp3.internal.http2.Http2ExchangeCodec.readResponseHeaders(Http2ExchangeCodec.java:5)
       at okhttp3.internal.connection.Exchange.readResponseHeaders(Exchange.java:2)
       at okhttp3.internal.http.CallServerInterceptor.intercept(CallServerInterceptor.java:145)
...
  4. Another SocketTimeoutException
Non-fatal Exception: java.net.SocketTimeoutException: SSL handshake timed out
       at com.android.org.conscrypt.NativeCrypto.SSL_do_handshake(NativeCrypto.java)
       at com.android.org.conscrypt.NativeSsl.doHandshake(NativeSsl.java:387)
       at com.android.org.conscrypt.ConscryptFileDescriptorSocket.startHandshake(ConscryptFileDescriptorSocket.java:234)
       at okhttp3.internal.connection.RealConnection.connectTls(RealConnection.java:72)
       at okhttp3.internal.connection.RealConnection.establishProtocol(RealConnection.java:52)
       at okhttp3.internal.connection.RealConnection.connect(RealConnection.java:196)
       at okhttp3.internal.connection.ExchangeFinder.findConnection(ExchangeFinder.java:257)
       at okhttp3.internal.connection.ExchangeFinder.findHealthyConnection(ExchangeFinder.java)
       at okhttp3.internal.connection.ExchangeFinder.find(ExchangeFinder.java:47)

And lastly, what makes me think it's not a server issue on my side is that I get this kind of error when I request banner ads from Google's server as well. I get thousands of reports of the following

{   "Message": "Error while connecting to ad server: Failed to connect to pubads.g.doubleclick.net/216.58.195.130:443",   "Cause": "null",   "Response Info": {     "Adapter Responses": [],     "Response ID": "null",     "Response Extras": {},     "Mediation Adapter Class Name": ""   },   "Domain": "com.google.android.gms.ads",   "Code": 0 }

from Google ads SDK's onAdFailedToLoad listener.

Attempt

I tried to find some solutions in the Retrofit2/OkHttp3 GitHub issues and the SO community, and everyone says there may be some network permission issue or a network connection problem itself. But I know users are connected to the internet and are not using some sort of proxy. I worked with the customer service team as they walked through the issue with users, and they did not find any network problems.

Any insight would be helpful. Thank you in advance!



from Thousands of OkHttp related issues reported daily (connection, unknown host, dns, etc)

Monday, 28 November 2022

Process a large file using Apache Airflow Task Groups

I need to process a zip file (that contains a text file) using task groups in Airflow. The number of lines can vary from 1 to 50 million. I want to read the text file in the zip file, process each line, write the processed lines to another text file, zip it, update Postgres tables, and call another DAG to transmit this new zip file to an SFTP server.

Since a single task can take a long time to process a file with millions of lines, I would like to process the file using a task group. That is, a single task in the task group can process a certain number of lines and transform them. For example, if we receive a file with 15 million lines, 6 task groups can be called to process 2.5 million lines each.

But I am confused about how to make the task group dynamic and pass the offset to each task. Below is a sample that I tried with fixed offsets in islice():

import io
import zipfile
from itertools import islice

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.utils.task_group import TaskGroup


def start_task(**context):
    print("starting the Main task...")


def apply_transformation(line):
    return f"{line}_NEW"


def task1(**context):
    data = context['dag_run'].conf
    file_name = data.get("file_name")
    with zipfile.ZipFile(file_name) as zf:
        for name in zf.namelist():
            with io.TextIOWrapper(zf.open(name), encoding="UTF-8") as fp:
                for record in islice(fp, 1, 2000000):
                    apply_transformation(record)


def task2(**context):
    data = context['dag_run'].conf
    file_name = data.get("file_name")
    with zipfile.ZipFile(file_name) as zf:
        for name in zf.namelist():
            with io.TextIOWrapper(zf.open(name), encoding="UTF-8") as fp:
                for record in islice(fp, 2000001, 4000000):
                    apply_transformation(record)


def task3(**context):
    data = context['dag_run'].conf
    file_name = data.get("file_name")
    with zipfile.ZipFile(file_name) as zf:
        for name in zf.namelist():
            with io.TextIOWrapper(zf.open(name), encoding="UTF-8") as fp:
                for record in islice(fp, 4000001, 6000000):
                    apply_transformation(record)


def task4(**context):
    data = context['dag_run'].conf
    file_name = data.get("file_name")
    with zipfile.ZipFile(file_name) as zf:
        for name in zf.namelist():
            with io.TextIOWrapper(zf.open(name), encoding="UTF-8") as fp:
                for record in islice(fp, 6000001, 8000000):
                    apply_transformation(record)


def task5(**context):
    data = context['dag_run'].conf
    file_name = data.get("file_name")
    with zipfile.ZipFile(file_name) as zf:
        for name in zf.namelist():
            with io.TextIOWrapper(zf.open(name), encoding="UTF-8") as fp:
                for record in islice(fp, 8000001, 10000000):
                    apply_transformation(record)


def final_task(**context):
    print("This is the final task to update postgres tables and call SFTP DAG...")


with DAG("main",
         schedule_interval=None,
         default_args=default_args, catchup=False) as dag:

    st = PythonOperator(
        task_id='start_task',
        dag=dag,
        python_callable=start_task
    )

    with TaskGroup(group_id='task_group_1') as tg1:
        t1 = PythonOperator(
            task_id='task1',
            python_callable=task1,
            dag=dag,
        )

        t2 = PythonOperator(
            task_id='task2',
            python_callable=task2,
            dag=dag,
        )

        t3 = PythonOperator(
            task_id='task3',
            python_callable=task3,
            dag=dag,
        )

        t4 = PythonOperator(
            task_id='task4',
            python_callable=task4,
            dag=dag,
        )

        t5 = PythonOperator(
            task_id='task5',
            python_callable=task5,
            dag=dag,
        )

    ft = PythonOperator(
        task_id='final_task',
        dag=dag,
        python_callable=final_task
    )

    st >> tg1 >> ft

After applying transformation to each line, I want to get these transformed lines from different tasks and merge them into a new file and do rest of the operations in the final_task.

Or are there any other methods to process large files with millions of lines in parallel?
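
One alternative worth sketching is Airflow's dynamic task mapping (2.3+): compute the line-count windows in one task and .expand() a mapped transform over them, so the number of chunks follows the file size. Everything below (CHUNK_SIZE, the task names, writing one part file per chunk) is an assumption-laden sketch, not a drop-in solution:

import io
import zipfile
from datetime import datetime
from itertools import islice

from airflow.decorators import dag, task
from airflow.operators.python import get_current_context

CHUNK_SIZE = 2_500_000  # lines per mapped task; tune to the expected file sizes


@dag(schedule_interval=None, start_date=datetime(2022, 1, 1), catchup=False)
def process_large_file():

    @task
    def compute_offsets():
        # One pass to count lines, then one (start, stop) window per chunk
        file_name = get_current_context()["dag_run"].conf.get("file_name")
        with zipfile.ZipFile(file_name) as zf:
            name = zf.namelist()[0]
            with io.TextIOWrapper(zf.open(name), encoding="UTF-8") as fp:
                total = sum(1 for _ in fp)
        return [
            {"file_name": file_name, "start": i, "stop": min(i + CHUNK_SIZE, total)}
            for i in range(0, total, CHUNK_SIZE)
        ]

    @task
    def transform_chunk(window):
        # Each mapped task instance reads only its own window of lines
        part_path = f"{window['file_name']}.part_{window['start']}"
        with zipfile.ZipFile(window["file_name"]) as zf:
            name = zf.namelist()[0]
            with io.TextIOWrapper(zf.open(name), encoding="UTF-8") as fp:
                with open(part_path, "w", encoding="UTF-8") as out:
                    for line in islice(fp, window["start"], window["stop"]):
                        out.write(f"{line.rstrip()}_NEW\n")
        return part_path

    @task
    def merge(part_paths):
        # Concatenate the part files, zip, update Postgres, trigger the SFTP DAG
        print(f"merging {len(part_paths)} parts")

    merge(transform_chunk.expand(window=compute_offsets()))


process_large_file()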



from Process a large file using Apache Airflow Task Groups

Sunday, 27 November 2022

Selenium driver hanging on OS alert

I'm using Selenium in Python (3.11) with a Firefox (107) driver.

With the driver I navigate to a page which, after several actions, triggers an OS alert (prompting me to launch a program). When this alert pops up, the driver hangs, and only once it is closed manually does my script continue to run.

I have tried driver.quit(), as well as using

os.system("taskkill /F /pid " + str(process.ProcessId))

with the driver's PID, with no luck.

I have managed to prevent the pop-up from popping up with

options.set_preference("security.external_protocol_requires_permission", False)

but the code still hangs the same way at the point where the popup would have popped up.

I don't care whether the program launches or not, I just need my code to not require human intervention at this key point.

Here is a minimal example of what I currently have:

from selenium.webdriver import ActionChains, Keys
from selenium.webdriver.firefox.options import Options
from seleniumwire import webdriver

options = Options()
options.binary_location = r'C:\Program Files\Mozilla Firefox\firefox.exe'
options.set_preference("security.external_protocol_requires_permission", False)
driver = webdriver.Firefox(options=options)

# Go to the page
driver.get(url)

user_field = driver.find_element("id", "UserName")
user_field.send_keys(username)
pass_field = driver.find_element("id", "Password")
pass_field.send_keys(password)
pass_field.send_keys(Keys.ENTER)

#this is the point where the pop up appears

reqs = driver.requests

print("Success!")
driver.quit()
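
Since the goal is only to suppress the OS dialog, one more preference family may be worth a try before navigating. This is a sketch: "myscheme" is a placeholder for whatever protocol the page actually launches, and the behaviour of these Firefox preferences can vary by version:

from selenium.webdriver.firefox.options import Options

options = Options()
options.binary_location = r'C:\Program Files\Mozilla Firefox\firefox.exe'
options.set_preference("security.external_protocol_requires_permission", False)
# Refuse external protocol handlers outright instead of asking
options.set_preference("network.protocol-handler.external-default", False)
# "myscheme" is a placeholder for the scheme the page triggers
options.set_preference("network.protocol-handler.external.myscheme", False)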


from Selenium driver hanging on OS alert

Issues viewing or sharing items upload to Sharepoint with msgraph in Python

I've been trying to upload files to a Sharepoint site using a python app.

I've successfully authenticated with my Azure app, using msgraph.

I've successfully (I think) uploaded files -

/drives/{drive_id}/root:/path/to/file:/content

returns

@microsoft.graph.downloadUrl: https://xxx.sharepoint.com/_layouts/15/download.aspx?UniqueId=xxx&Translate=false&tempauth=exxx&ApiVersion=2.0

createdDateTime: 2022-11-23T19:41:22Z
eTag: "{xxx},52"
id: xxx
lastModifiedDateTime: 2022-11-25T08:37:46Z
name: 2022-09.pdf
webUrl: https://xxx.sharepoint.com/NPSScheduling/All%20NPS%20Folders/Salesforce/2022-09.pdf
cTag: "c:{xxx},52"
size: 33097
createdBy: {'application': {'id': 'f333ebf7-899a-44aa-8697-6d5b71e8f722', 'displayName': 'Salesforce Chrono UPloads'}, 'user': {'displayName': 'SharePoint App'}}
lastModifiedBy: {'application': {'id': 'f333ebf7-899a-44aa-8697-6d5b71e8f722', 'displayName': 'Salesforce Chrono UPloads'}, 'user': {'displayName': 'SharePoint App'}}
parentReference: {'driveType': 'documentLibrary', 'driveId': 'b!xxx', 'id': 'xxx', 'path': '/drives/b!xxx/root:/All NPS Folders/Salesforce'}
file: {'mimeType': 'application/pdf', 'hashes': {'quickXorHash': 'nEGcsGbiYw5Q1OZfcBOg+2pbGts='}}
fileSystemInfo: {'createdDateTime': '2022-11-23T19:41:22Z', 'lastModifiedDateTime': '2022-11-25T08:37:46Z'}
shared: {'scope': 'users'}

However, when I try to view the files in my SharePoint folder, I don't see them. I am able to navigate to the webUrl, but get 'Could not load pdf'.

I tried creating a share link, but keep getting

item not found

no matter which format I try for the endpoint:

/sites/<site_id>/drive/items/<item_id>/createLink
/sites/<site_id>/drives/<drive_id>/items/<item_id>/createLink
/sites/<site_id>/path/to/file/createLink
/drives/<drive_id>/root:/path/to/file/createLink

and other variants on the same theme.
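
For reference, this is the shape of the createLink request as I understand the Graph docs, using the driveId and id returned by the upload (a requests-based sketch; token acquisition is omitted and the ids are placeholders):

import requests

GRAPH = "https://graph.microsoft.com/v1.0"
token = "..."       # access token from the msgraph auth flow (redacted)
drive_id = "b!xxx"  # parentReference.driveId from the upload response
item_id = "xxx"     # id from the upload response

resp = requests.post(
    f"{GRAPH}/drives/{drive_id}/items/{item_id}/createLink",
    headers={"Authorization": f"Bearer {token}"},
    json={"type": "view", "scope": "organization"},
)
print(resp.status_code, resp.json())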

Any ideas?



from Issues viewing or sharing items upload to Sharepoint with msgraph in Python

Expo eas: Android build fails if I run a prebuild before

I use Expo 46.

I would like to change some config in my AndroidManifest, so I run npx expo prebuild, which generates an android folder without error.

But then my eas build is not working anymore (it works if I don't run prebuild).
I get this error:

Failed to find 'build.gradle' file for project: /home/expo/workingdir/build/android/app.

Am I missing something?



from Expo eas: Android build fails if I run a prebuild before

How to extract all timestamps of badminton shot sound in an audio clip using Neural Networks?

I am trying to find the instances in a source audio file, taken from a badminton match, where a shot was hit by either of the players. For this purpose, I have marked the timestamps with positive (hit sound) and negative (no hit sound: commentary/crowd noise etc.) labels like so:

shot_timestamps = [0,6.5,8, 11, 18.5, 23, 27, 29, 32, 37, 43.5, 47.5, 52, 55.5, 63, 66, 68, 72, 75, 79, 94.5, 96, 99, 105, 122, 115, 118.5, 122, 126, 130.5, 134, 140, 144, 147, 154, 158, 164, 174.5, 183, 186, 190, 199, 238, 250, 253, 261, 267, 269, 270, 274] 
shot_labels = ['no', 'yes', 'yes', 'yes', 'yes', 'yes', 'no', 'no', 'no', 'no', 'yes', 'yes', 'yes', 'yes', 'yes', 'no', 'no','no','no', 'no', 'yes', 'yes', 'no', 'no', 'yes', 'yes', 'yes', 'yes', 'yes', 'yes', 'yes', 'no', 'no', 'no', 'no', 'yes', 'no', 'yes', 'no', 'no', 'no', 'yes', 'no', 'yes', 'yes', 'no', 'no', 'yes', 'yes', 'no'] 

I have been taking 1-second windows around these timestamps like so:

import math
from scipy.io import wavfile

rate, source = wavfile.read(source_file)  # source_file: path to the match audio

def get_audio_snippets(shot_timestamps):

    shot_snippets = []  # collection of all audio snippets at the timestamps above

    for timestamp in shot_timestamps:
        start = math.ceil(timestamp * rate)
        end = math.ceil((timestamp + 1) * rate)
        if start >= source.shape[0]:
            start = source.shape[0] - 1

        if end >= source.shape[0]:
            end = source.shape[0] - 1

        shot_snippets.append(source[start:end])

    return shot_snippets

and converting the snippets to spectrogram images for the model. The model doesn't seem to be learning anything, with an accuracy of around 50%. What can I do to improve the model?
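
For context, the snippet-to-spectrogram step looks roughly like this if librosa is used (a sketch assuming mono int16 audio; n_mels is a free parameter):

import librosa
import numpy as np

def snippet_to_logmel(snippet, rate, n_mels=64):
    # wavfile.read often returns int16 samples; librosa expects floats
    y = snippet.astype(np.float32)
    if snippet.dtype == np.int16:
        y /= 32768.0
    mel = librosa.feature.melspectrogram(y=y, sr=rate, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)  # log scale for the model input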



from How to extract all timestamps of badminton shot sound in an audio clip using Neural Networks?

How to access secrets in Javascript GitHub actions?

I am developing a reusable workflow using JavaScript actions by following this tutorial. My action.yml looks like this.

name: "Test"
description: "Reusable workflow"
inputs:
  input-one:
    required: false
    type: string

runs:
  using: 'node16'
  main: 'dist/index.js'

But my question is: how do I access the secrets in dist/index.js? Please note that I don't want the user to supply the secret as input; I would like to store the secret in my reusable workflow repository and use it whenever it's needed.

I tried to change the action.yml with env (so that I can use Node's process.env API to get the secret), but it fails with an error saying Unexpected value 'env'.

name: "Test"
description: "Reusable workflow"
inputs:
  input-one:
    required: false
    type: string

runs:
  using: 'node16'
  main: 'dist/index.js'
  env: 
    DUMMY_VAL: $


from How to access secrets in Javascript GitHub actions?

Why doesn't mean square error work in case of angular data?

Suppose the following is a dataset for solving a regression problem:

H   -9.118   5.488   5.166   4.852   5.164   4.943   8.103  -9.152  7.470  6.452  6.069  6.197  6.434  8.264  9.047         2.222
H    5.488   5.166   4.852   5.164   4.943   8.103  -9.152  -8.536  6.452  6.069  6.197  6.434  8.264  9.047 11.954         2.416 
C    5.166   4.852   5.164   4.943   8.103  -9.152  -8.536   5.433  6.069  6.197  6.434  8.264  9.047 11.954  6.703         3.028
C    4.852   5.164   4.943   8.103  -9.152  -8.536   5.433   4.924  6.197  6.434  8.264  9.047 11.954  6.703  6.407        -1.235
C    5.164   4.943   8.103  -9.152  -8.536   5.433   4.924   5.007  6.434  8.264  9.047 11.954  6.703  6.407  6.088        -0.953 
H    4.943   8.103  -9.152  -8.536   5.433   4.924   5.007   5.057  8.264  9.047 11.954  6.703  6.407  6.088  6.410         2.233
H    8.103  -9.152  -8.536   5.433   4.924   5.007   5.057   5.026  9.047 11.954  6.703  6.407  6.088  6.410  6.206         2.313
H   -9.152  -8.536   5.433   4.924   5.007   5.057   5.026   5.154 11.954  6.703  6.407  6.088  6.410  6.206  6.000         2.314
H   -8.536   5.433   4.924   5.007   5.057   5.026   5.154   5.173  6.703  6.407  6.088  6.410  6.206  6.000  6.102         2.244 
H    5.433   4.924   5.007   5.057   5.026   5.154   5.173   5.279  6.407  6.088  6.410  6.206  6.000  6.102  6.195         2.109 

The left-most column is the class data. The rest of the features are all angular data.

My initial setup for the model was as follows:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential

def create_model(n_hidden_1, n_hidden_2, num_features):
    # create the model
    model = Sequential()
    model.add(tf.keras.layers.InputLayer(input_shape=(num_features,)))
    model.add(tf.keras.layers.Dense(n_hidden_1, activation='relu'))
    model.add(tf.keras.layers.Dense(n_hidden_2, activation='relu'))
    model.add(tf.keras.layers.Dense(1))

    # instantiate the optimizer
    opt = keras.optimizers.Adam(learning_rate=LEARNING_RATE)

    # compile the model
    model.compile(
         loss="mean_squared_error",
         optimizer=opt,
         metrics=["mean_squared_error"]
    )

    # return model
    return model

This model didn't produce the correct outcome.

Someone told me that MSE doesn't work in the case of angular data. So, I need to use a custom output layer and a custom error function.

Why doesn't mean square error work in the case of angular data?

How can I solve this issue?
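
For context, the usual workaround is to make the representation periodic instead of feeding raw angles: encode each angle as its sine and cosine, and/or score errors on the wrapped difference. A sketch (assuming the angles are in radians; convert with np.deg2rad first if they are in degrees):

import numpy as np
import tensorflow as tf

def encode_angles(X_rad):
    # (sin, cos) pairs make 0 and 2*pi neighbours in feature space
    return np.concatenate([np.sin(X_rad), np.cos(X_rad)], axis=1)

def angular_mse(y_true, y_pred):
    # Penalise the wrapped difference, so an error of 2*pi counts as zero
    diff = tf.atan2(tf.sin(y_true - y_pred), tf.cos(y_true - y_pred))
    return tf.reduce_mean(tf.square(diff))

If the target itself is angular, compiling with loss=angular_mse instead of "mean_squared_error" would be the corresponding change in create_model above.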



from Why doesn't mean square error work in case of angular data?

Saturday, 26 November 2022

Is there a way to add custom data into ListAPIView in django rest framework

So I've built an API for a movies dataset with the following structure:

Models.py

class Directors(models.Model):
    id = models.IntegerField(primary_key=True)
    first_name = models.CharField(max_length=100, blank=True, null=True)
    last_name = models.CharField(max_length=100, blank=True, null=True)

    class Meta:
        db_table = 'directors'
        ordering = ['-id']

class Movies(models.Model):
    id = models.IntegerField(primary_key=True)
    name = models.CharField(max_length=100, blank=True, null=True)
    year = models.IntegerField(blank=True, null=True)
    rank = models.FloatField(blank=True, null=True)

    class Meta:
        db_table = 'movies'       
        ordering = ['-id']

class Actors(models.Model):
    id = models.IntegerField(primary_key=True)
    first_name = models.CharField(max_length=100, blank=True, null=True)
    last_name = models.CharField(max_length=100, blank=True, null=True)
    gender = models.CharField(max_length=20, blank=True, null=True)

    class Meta:
        db_table = 'actors'       
        ordering = ['-id']

class DirectorsGenres(models.Model):
    director = models.ForeignKey(Directors,on_delete=models.CASCADE,related_name='directors_genres')
    genre = models.CharField(max_length=100, blank=True, null=True)
    prob = models.FloatField(blank=True, null=True)    

    class Meta:
        db_table = 'directors_genres'       
        ordering = ['-director']

class MoviesDirectors(models.Model):
    director = models.ForeignKey(Directors,on_delete=models.CASCADE,related_name='movies_directors')
    movie = models.ForeignKey(Movies,on_delete=models.CASCADE,related_name='movies_directors')
    class Meta:
        db_table = 'movies_directors'       
        ordering = ['-director']
        


class MoviesGenres(models.Model):
    movie = models.ForeignKey(Movies,on_delete=models.CASCADE,related_name='movies_genres')
    genre = models.CharField(max_length=100, blank=True, null=True)    

    class Meta:
        db_table = 'movies_genres'        
        ordering = ['-movie']


class Roles(models.Model):
    actor = models.ForeignKey(Actors,on_delete=models.CASCADE,related_name='roles')
    movie = models.ForeignKey(Movies,on_delete=models.CASCADE,related_name='roles')    
    role = models.CharField(max_length=100, blank=True, null=True)    

    class Meta:
        db_table = 'roles'        
        ordering = ['-actor']

urls.py

from django.urls import path, include
from . import views
from api.views import getMovies, getGenres, getActors

urlpatterns = [ 
    path('', views.getRoutes),    
    path('movies/', getMovies.as_view(), name='movies'),    
    path('movies/genres/', getGenres.as_view(), name='genres'),
    path('actor_stats/<pk>', getActors.as_view(), name='actor_stats'),    
]

serializer.py

from rest_framework import serializers
from movies.models import *

class MoviesSerializer(serializers.ModelSerializer):
    class Meta:
        model = Movies
        fields = '__all__'

class DirectorsSerializer(serializers.ModelSerializer):
    class Meta:
        model = Directors
        fields = '__all__'

class ActorsSerializer(serializers.ModelSerializer):
    class Meta:
        model = Actors
        fields = '__all__'

class DirectorsGenresSerializer(serializers.ModelSerializer):
    class Meta:
        model = DirectorsGenres
        fields = '__all__'

class MoviesDirectorsSerializer(serializers.ModelSerializer):
    movie = MoviesSerializer(many = False)
    director = DirectorsSerializer(many = False)
    class Meta:
        model = MoviesDirectors
        fields = '__all__'

class MoviesGenresSerializer(serializers.ModelSerializer):
    movie = MoviesSerializer(many = False)
    class Meta:
        model = MoviesGenres
        fields = '__all__'

class RolesSerializer(serializers.ModelSerializer):
    movie = MoviesSerializer(many = False)
    actor = ActorsSerializer(many = False)
    class Meta:
        model = Roles
        fields = '__all__'

views.py

class getMovies(ListAPIView):
    directors = Directors.objects.all()  
    queryset = MoviesDirectors.objects.filter(director__in=directors)
    serializer_class = MoviesDirectorsSerializer
    pagination_class = CustomPagination
    filter_backends = [DjangoFilterBackend]
    filterset_fields = ['director__first_name', 'director__last_name']

class getGenres(ListAPIView):
    movies = Movies.objects.all()  
    queryset = MoviesGenres.objects.filter(movie__in=movies).order_by('-genre')
    serializer_class = MoviesGenresSerializer
    pagination_class = CustomPagination
    filter_backends = [DjangoFilterBackend]
    filterset_fields = ['genre']

class getActors(ListAPIView):
    queryset = Roles.objects.all()
    serializer_class = RolesSerializer
    pagination_class = CustomPagination

    def get_queryset(self):
        return super().get_queryset().filter(
            actor_id=self.kwargs['pk']
        )

Now I want to count, in the getActors class, the number of movies by genre that the actor with a specific pk played in, e.g. Drama: 2, Horror: 3. Right now I am only getting the overall movie count (count: 2):

GET /api/actor_stats/17


HTTP 200 OK
Allow: GET, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "count": 2,
    "next": null,
    "previous": null,
    "results": [
        {
            "id": 800480,
            "movie": {
                "id": 105231,
                "name": "Everybody's Business",
                "year": 1993,
                "rank": null
            },
            "actor": {
                "id": 17,
                "first_name": "Luis Roberto",
                "last_name": "Formiga",
                "gender": "M"
            },
            "role": "Grandfather"
        },
        {
            "id": 800481,
            "movie": {
                "id": 242453,
                "name": "OP Pro 88 - Barra Rio",
                "year": 1988,
                "rank": null
            },
            "actor": {
                "id": 17,
                "first_name": "Luis Roberto",
                "last_name": "Formiga",
                "gender": "M"
            },
            "role": "Himself"
        }
    ]
}

What is the optimized way of achieving the following:

  • number_of_movies_by_genre
  • Drama: 2
  • Horror: 3

UPDATE

I was able to add the actor id and movie id in the serializer:

class RolesSerializer(serializers.ModelSerializer):
    movie = MoviesSerializer(many = False)
    actor = ActorsSerializer(many = False)
    movie_id = serializers.SerializerMethodField()
    actor_id = serializers.SerializerMethodField()
    class Meta:
        model = Roles
        fields = '__all__'

    def get_movie_id(self, obj):        
        return obj.movie_id
        
    def get_actor_id(self, obj):        
        return obj.actor.id

result:

{
    "next": null,
    "previous": null,
    "count": 2,
    "pagenum": null,
    "results": [
        {
            "id": 800480,
            "movie": {
                "id": 105231,
                "name": "Everybody's Business",
                "year": 1993,
                "rank": null
            },
            "actor": {
                "id": 17,
                "first_name": "Luis Roberto",
                "last_name": "Formiga",
                "gender": "M"
            },
            "movie_id": 105231,
            "actor_id": 17,
            "role": "Grandfather"
        },
        {
            "id": 800481,
            "movie": {
                "id": 242453,
                "name": "OP Pro 88 - Barra Rio",
                "year": 1988,
                "rank": null
            },
            "actor": {
                "id": 17,
                "first_name": "Luis Roberto",
                "last_name": "Formiga",
                "gender": "M"
            },
            "movie_id": 242453,
            "actor_id": 17,
            "role": "Himself"
        }
    ]
}

Now I don't even know how to count how many movies an actor with a specific id played in, so that I can go further and get the number of movies by genre.
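
For what it's worth, here is a sketch of the aggregation I would try for the per-genre counts, following the relations defined in Models.py (the helper name get_genre_counts and the wiring are mine, not confirmed code):

from django.db.models import Count

def get_genre_counts(actor_pk):
    # genre -> number of distinct movies this actor appears in, walked through
    # MoviesGenres.movie -> Movies <- Roles (related_name='roles')
    rows = (
        MoviesGenres.objects
        .filter(movie__roles__actor_id=actor_pk)
        .values("genre")
        .annotate(num_movies=Count("movie", distinct=True))
        .order_by("-num_movies")
    )
    return {row["genre"]: row["num_movies"] for row in rows}

The result could then be attached to the response by overriding list() in getActors and merging {"number_of_movies_by_genre": get_genre_counts(self.kwargs["pk"])} into the paginated payload.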



from Is there a way to add custom data into ListAPIView in django rest framework

Android Leanback: How to update nested rows item in RowsSupportFragment

Hey Guys

I'm working on an Android TV application using the Leanback library.

I need to show a list of categories where each category has its own list of contents. For this, Leanback offers RowsSupportFragment, inside which you can show this type of UI.

Here I am using Room + LiveData + Retrofit + Glide to implement the screen, but there is an issue: the API does not return content cover images directly, so the developer has to download each content cover image, store it, and then show the cover over the content.

Everything is working, but the first time, if there is no cover image for a content item, I download the cover and store it, yet the content is not triggered to fetch and show the image. Using notifyItemRangeChanged and similar methods blinks and resets the list row, so that is not a good solution.

These are the DiffCallbacks I'm using, one for the category list and one for each content list.

private val diffCallback = object : DiffCallback<CardListRow>() {
    override fun areItemsTheSame(oldItem: CardListRow, newItem: CardListRow): Boolean {
        return oldItem.id == newItem.id
    }

    override fun areContentsTheSame(oldItem: CardListRow, newItem: CardListRow): Boolean {
        return oldItem.cardRow.contents?.size == newItem.cardRow.contents?.size
    }
}

private val contentDiffCallback = object : DiffCallback<ContentModel>() {
    override fun areItemsTheSame(oldItem: ContentModel, newItem: ContentModel): Boolean {
        return oldItem.id == newItem.id
    }

    override fun areContentsTheSame(oldItem: ContentModel, newItem: ContentModel): Boolean {
        return oldItem.hashCode() == newItem.hashCode()
    }

}

As I said, for storage I'm using Room, retrieving data as LiveData and observing it in my fragment, and so on. I have not posted all the code, for brevity.

If you have any idea or similar source code, I would appreciate it. Thanks



from Android Leanback: How to update nested rows item in RowsSupportFragment

Spherical Graph Layout in Python

Objective

Target:

[screenshot: a force-directed graph laid out on a sphere surface]

State of work

  • Input data as given factor
  • NetworkX for position calculation
  • Handover to VTK methods for 3D visualisation

Problem

Three years ago, I had achieved the visualisation shown above. Unfortunately, I did a bit too much cleaning, and I just realized that I don't have those methods anymore. It is essentially a force-directed graph on a sphere surface, maybe similar to the "strong gravity" parameter in the 2D ForceAtlas. I have not found any 3D implementation of this yet.

I tried again with the following algorithms, but none of them produced this layout, and neither did parameter tuning of these algorithms (or did I miss an important one?):

  • NetworkX: spherical, spring, shell, Kamada-Kawai, Fruchterman-Reingold (the 2D Fruchterman-Reingold in Gephi looks like it could come close to the target in a 3D version, yet Gephi does not support 3D, or did I overlook something?)
  • ForceAtlas2
  • Gephi (the 2D Fruchterman-Reingold looks like a circle, but this is not available in 3D, nor does the 3D ForceAtlas produce valid Z coordinates: they are within a range of +1e-4 and -1e-4)
  • This online one (https://observablehq.com/@fil/3d-graph-on-sphere), which seems close but does not completely achieve the target, nor do I know how to transfer it to Python/coordinates

Searching for "spherical graph layout" has not brought me any progress (only to this view, which seems very similar: https://observablehq.com/@fil/3d-graph-on-sphere).

How can I achieve this spherical layout using Python (or a third-party tool which provides positioning information)?

Update: I made some progress and found the keywords non-Euclidean, hyperbolic, and spherical force-directed algorithms, as well as non-Euclidean Riemann embeddings (https://www2.cs.arizona.edu/~kobourov/riemann_embedders.pdf), but I still have not achieved anything yet.
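
As a crude stand-in while searching for the original method, the following sketch computes a 3D spring layout with NetworkX and radially projects every node onto the unit sphere. It is not a true spherical force-directed algorithm (the forces are still computed in Euclidean space), but it hands VTK plausible sphere-surface coordinates:

import networkx as nx
import numpy as np

def spherical_layout(G, seed=42):
    # 3D force-directed layout, then push each node out to radius 1
    pos = nx.spring_layout(G, dim=3, seed=seed)
    return {n: p / np.linalg.norm(p) for n, p in pos.items()}

G = nx.random_geometric_graph(100, 0.2)  # stand-in for the input data
positions = spherical_layout(G)          # node -> unit-sphere coordinates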



from Spherical Graph Layout in Python

Can I setup a simple job queue with celery on a plotly dashboard?

I have a dashboard very similar to this one:

import datetime
import dash
from dash import dcc, html, ctx  # ctx is used in the callback below
import plotly
from dash.dependencies import Input, Output

# pip install pyorbital
from pyorbital.orbital import Orbital
satellite = Orbital('TERRA')

external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css']

app = dash.Dash(__name__, external_stylesheets=external_stylesheets)
app.layout = html.Div(
    html.Div([
        html.H4('TERRA Satellite Live Feed'),
        html.Div(id='live-update-text'),
        dcc.Graph(id='live-update-graph'),
        dcc.Interval(
            id='interval-component',
            interval=1*1000, # in milliseconds
            n_intervals=0
        )
    ])
)


# Multiple components can update every time the interval gets fired.
@app.callback(Output('live-update-graph', 'figure'),
              Input('live-update-graph', 'relayoutData'),
              Input('interval-component', 'n_intervals'))
def update_graph_live(relayout, n):
    if ctx.triggered_id == 'live-update-graph':  # fired by a relayout event
        * code that affects the y axis * 
        return fig 
    else:
        satellite = Orbital('TERRA')
        data = {
            'time': [],
            'Latitude': [],
            'Longitude': [],
            'Altitude': []
        }

        # Collect some data
        for i in range(180):
            time = datetime.datetime.now() - datetime.timedelta(seconds=i*20)
            lon, lat, alt = satellite.get_lonlatalt(
                time
            )
            data['Longitude'].append(lon)
            data['Latitude'].append(lat)
            data['Altitude'].append(alt)
            data['time'].append(time)

        # Create the graph with subplots
        fig = plotly.tools.make_subplots(rows=2, cols=1, vertical_spacing=0.2)
        fig['layout']['margin'] = {
            'l': 30, 'r': 10, 'b': 30, 't': 10
        }
        fig['layout']['legend'] = {'x': 0, 'y': 1, 'xanchor': 'left'}

        fig.append_trace({
            'x': data['time'],
            'y': data['Altitude'],
            'name': 'Altitude',
            'mode': 'lines+markers',
            'type': 'scatter'
        }, 1, 1)
        fig.append_trace({
            'x': data['Longitude'],
            'y': data['Latitude'],
            'text': data['time'],
            'name': 'Longitude vs Latitude',
            'mode': 'lines+markers',
            'type': 'scatter'
        }, 2, 1)

        return fig


if __name__ == '__main__':
    app.run_server(debug=True)

I want to set up a job queue. Right now, the "code that affects the y axis" part never runs because the interval component fires before it finishes processing. I want to set up logic that says "add every callback to a queue and then fire them one at a time in the order they were called".

Two questions:

1. Can I achieve this with Celery?

2. If so, what does a small working example look like?
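For orientation, a minimal Celery sketch of the moving parts (celery_app, rescale_y_axis, and the Redis URLs are placeholders; this illustrates the pattern, not a drop-in fix for the dashboard):

from celery import Celery

celery_app = Celery(
    'dashboard_tasks',
    broker='redis://localhost:6379/0',
    backend='redis://localhost:6379/0',
)

@celery_app.task
def rescale_y_axis(relayout_data):
    # placeholder for the "code that affects the y axis" work
    return relayout_data

# Inside the Dash callback you would enqueue instead of computing inline:
#     job = rescale_y_axis.delay(relayout)
# and then poll job.ready() / fetch job.get() from an interval-driven callback.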



from Can I setup a simple job queue with celery on a plotly dashboard?

Android Jetpack Compose width / height / size modifier vs requiredWidth / requiredHeight / requiredSize

Android Jetpack Compose contains width(), height() and size() layout modifiers as well as requiredWidth(), requiredHeight() and requiredSize(). What is the difference between these two sets of modifiers? Should I use plain modifiers or required ones?
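For illustration, a small hypothetical layout showing the difference: the parent Box imposes a 50.dp constraint on its children; width() is coerced by that constraint, while requiredWidth() ignores it.

import androidx.compose.foundation.layout.Box
import androidx.compose.foundation.layout.requiredWidth
import androidx.compose.foundation.layout.size
import androidx.compose.foundation.layout.width
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.unit.dp

@Composable
fun WidthVsRequiredWidth() {
    Box(Modifier.size(50.dp)) {
        Box(Modifier.width(100.dp))         // coerced to 50.dp by the parent constraint
        Box(Modifier.requiredWidth(100.dp)) // stays 100.dp, overflowing the parent
    }
}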



from Android Jetpack Compose width / height / size modifier vs requiredWidth / requiredHeight / requiredSize

How to run Django channels with StreamingHttpResponse in ASGI

I have a simple app that streams images using OpenCV, with the server set up with WSGI. But whenever I introduce Django Channels into the picture and change from WSGI to ASGI, the streaming stops. How can I stream images from cv2 and at the same time use Django Channels? Thank you in advance.

My code for streaming:

def camera_feed(request):
    stream = CameraStream()
    frames = stream.get_frames()
    return StreamingHttpResponse(frames, content_type='multipart/x-mixed-replace; boundary=frame')

settings.py:

ASGI_APPLICATION = 'photon.asgi.application'

asgi.py

application = ProtocolTypeRouter({
    'http': get_asgi_application(),
    'websocket': AuthMiddlewareStack(URLRouter(ws_urlpatterns)),
})
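
For reference, a hedged sketch of what a frame generator like CameraStream.get_frames() typically looks like with OpenCV (the names are assumptions based on the question, not the actual code):

import cv2

def gen_frames(capture):
    # yield multipart JPEG frames suitable for StreamingHttpResponse
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        ok, jpeg = cv2.imencode('.jpg', frame)
        if not ok:
            continue
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + jpeg.tobytes() + b'\r\n')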


from How to run Django channels with StreamingHttpResponse in ASGI

Friday, 25 November 2022

Changing notes position based on new bpm

I have a .mboy JSON file that was built with BPM 128, which is the audio's original BPM.

Have a look at the file below.

 export default {
"editor": "mboy-editor-2.1.1",
"format_version": "2.0",
"audio": {
    "artist": "test",
    "title": "NO COPYRIGHT SHORT MUSIC (SOLO RECORD)",
    "album": "",
    "subgenre": "",
    "date": "",
    "download_link": "https://www.youtube.com/watch?v=Atv-zwhSyFE",
    "comments": "",
    "genre": "Other"
},
"author": "<zx<zx",
"date": "2022-11-17",
"tempo": 128,
"start_pos": 0,
"tracks": [{
    "instrument": "bass",
    "name": "",
    "color": "ff009fff",
    "bars": [{
        "index": 0,
        "quarters_count": 4,
        "notes": [{
            "pos": 400,
            "len": 100,
            "markers": []
        }, {
            "pos": 800,
            "len": 100,
            "markers": []
        }, {
            "pos": 1100,
            "len": 100,
            "markers": []
        }, {
            "pos": 1200,
            "len": 100,
            "markers": []
        }
        ]
    }, {
        "index": 1,
        "quarters_count": 4,
        "notes": [{
            "pos": 100,
            "len": 100,
            "markers": []
        }, {
            "pos": 1000,
            "len": 100,
            "markers": []
        }
        ]
    }, {
        "index": 2,
        "quarters_count": 4,
        "notes": [{
            "pos": 100,
            "len": 100,
            "markers": []
        }, {
            "pos": 900,
            "len": 100,
            "markers": []
        }, {
            "pos": 1400,
            "len": 100,
            "markers": []
        }
        ]
    }, {
        "index": 3,
        "quarters_count": 4,
        "notes": [{
            "pos": 1000,
            "len": 100,
            "markers": []
        }
        ]
    }, {
        "index": 4,
        "quarters_count": 4,
        "notes": [{
            "pos": 200,
            "len": 100,
            "markers": []
        }, {
            "pos": 1200,
            "len": 100,
            "markers": []
        }
        ]
    }, {
        "index": 5,
        "quarters_count": 4,
        "notes": [{
            "pos": 100,
            "len": 100,
            "markers": []
        }, {
            "pos": 900,
            "len": 100,
            "markers": []
        }
        ]
    }, {
        "index": 6,
        "quarters_count": 4,
        "notes": [{
            "pos": 300,
            "len": 100,
            "markers": []
        }, {
            "pos": 500,
            "len": 100,
            "markers": []
        }, {
            "pos": 1200,
            "len": 100,
            "markers": []
        }
        ]
    }, {
        "index": 7,
        "quarters_count": 4,
        "notes": [{
            "pos": 400,
            "len": 100,
            "markers": []
        }, {
            "pos": 500,
            "len": 100,
            "markers": []
        }, {
            "pos": 900,
            "len": 100,
            "markers": []
        }, {
            "pos": 1400,
            "len": 100,
            "markers": []
        }
        ]
    }, {
        "index": 8,
        "quarters_count": 4,
        "notes": [{
            "pos": 400,
            "len": 100,
            "markers": []
        }, {
            "pos": 500,
            "len": 100,
            "markers": []
        }, {
            "pos": 700,
            "len": 100,
            "markers": []
        }, {
            "pos": 1300,
            "len": 100,
            "markers": []
        }, {
            "pos": 1400,
            "len": 100,
            "markers": []
        }
        ]
    }, {
        "index": 9,
        "quarters_count": 4,
        "notes": [{
            "pos": 100,
            "len": 100,
            "markers": []
        }, {
            "pos": 700,
            "len": 100,
            "markers": []
        }, {
            "pos": 1300,
            "len": 100,
            "markers": []
        }, {
            "pos": 1500,
            "len": 100,
            "markers": []
        }
        ]
    }, {
        "index": 10,
        "quarters_count": 4,
        "notes": [{
            "pos": 500,
            "len": 100,
            "markers": []
        }, {
            "pos": 700,
            "len": 100,
            "markers": []
        }
        ]
    }, {
        "index": 11,
        "quarters_count": 4,
        "notes": [{
            "pos": 900,
            "len": 100,
            "markers": []
        }
        ]
    }, {
        "index": 12,
        "quarters_count": 4,
        "notes": [{
            "pos": 0,
            "len": 100,
            "markers": []
        }, {
            "pos": 1400,
            "len": 100,
            "markers": []
        }
        ]
    }, {
        "index": 13,
        "quarters_count": 4,
        "notes": [{
            "pos": 0,
            "len": 100,
            "markers": []
        }, {
            "pos": 1200,
            "len": 100,
            "markers": []
        }
        ]
    }, {
        "index": 14,
        "quarters_count": 4,
        "notes": [{
            "pos": 800,
            "len": 100,
            "markers": []
        }
        ]
    }, {
        "index": 15,
        "quarters_count": 4,
        "notes": [{
            "pos": 200,
            "len": 100,
            "markers": []
        }, {
            "pos": 1000,
            "len": 100,
            "markers": []
        }
        ]
    }, {
        "index": 16,
        "quarters_count": 4,
        "notes": []
    }, {
        "index": 17,
        "quarters_count": 4,
        "notes": [{
            "pos": 500,
            "len": 100,
            "markers": ["ToLeft"]
        }
        ]
    }, {
        "index": 18,
        "quarters_count": 4,
        "notes": []
    }, {
        "index": 19,
        "quarters_count": 4,
        "notes": []
    }, {
        "index": 20,
        "quarters_count": 4,
        "notes": []
    }
    ]
}, {
    "instrument": "drums",
    "name": "",
    "color": "ff009fff",
    "bars": [{
        "index": 0,
        "quarters_count": 4,
        "notes": [{
            "pos": 400,
            "len": 100,
            "markers": []
        }, {
            "pos": 800,
            "len": 100,
            "markers": []
        }, {
            "pos": 1400,
            "len": 100,
            "markers": []
        }
        ]
    }, {
        "index": 1,
        "quarters_count": 4,
        "notes": [{
            "pos": 800,
            "len": 100,
            "markers": []
        }, {
            "pos": 1300,
            "len": 100,
            "markers": []
        }
        ]
    }, {
        "index": 2,
        "quarters_count": 4,
        "notes": [{
            "pos": 100,
            "len": 100,
            "markers": []
        }, {
            "pos": 500,
            "len": 100,
            "markers": []
        }, {
            "pos": 900,
            "len": 100,
            "markers": []
        }, {
            "pos": 1400,
            "len": 100,
            "markers": []
        }
        ]
    }, {
        "index": 3,
        "quarters_count": 4,
        "notes": [{
            "pos": 100,
            "len": 100,
            "markers": []
        }, {
            "pos": 500,
            "len": 100,
            "markers": []
        }, {
            "pos": 1000,
            "len": 100,
            "markers": []
        }, {
            "pos": 1300,
            "len": 100,
            "markers": []
        }
        ]
    }, {
        "index": 4,
        "quarters_count": 4,
        "notes": [{
            "pos": 200,
            "len": 100,
            "markers": []
        }, {
            "pos": 600,
            "len": 100,
            "markers": []
        }, {
            "pos": 1000,
            "len": 100,
            "markers": []
        }, {
            "pos": 1500,
            "len": 100,
            "markers": []
        }
        ]
    }, {
        "index": 5,
        "quarters_count": 4,
        "notes": [{
            "pos": 200,
            "len": 100,
            "markers": []
        }, {
            "pos": 400,
            "len": 100,
            "markers": []
        }, {
            "pos": 1100,
            "len": 100,
            "markers": []
        }, {
            "pos": 1200,
            "len": 100,
            "markers": []
        }, {
            "pos": 1400,
            "len": 100,
            "markers": []
        }
        ]
    }, {
        "index": 6,
        "quarters_count": 4,
        "notes": [{
            "pos": 200,
            "len": 100,
            "markers": []
        }, {
            "pos": 400,
            "len": 100,
            "markers": []
        }, {
            "pos": 700,
            "len": 100,
            "markers": []
        }, {
            "pos": 900,
            "len": 100,
            "markers": []
        }, {
            "pos": 1400,
            "len": 100,
            "markers": []
        }
        ]
    }, {
        "index": 7,
        "quarters_count": 4,
        "notes": [{
            "pos": 200,
            "len": 100,
            "markers": []
        }, {
            "pos": 600,
            "len": 100,
            "markers": []
        }, {
            "pos": 800,
            "len": 100,
            "markers": []
        }, {
            "pos": 1000,
            "len": 100,
            "markers": []
        }, {
            "pos": 1300,
            "len": 100,
            "markers": []
        }
        ]
    }, {
        "index": 8,
        "quarters_count": 4,
        "notes": [{
            "pos": 200,
            "len": 100,
            "markers": []
        }, {
            "pos": 600,
            "len": 100,
            "markers": []
        }, {
            "pos": 900,
            "len": 100,
            "markers": []
        }, {
            "pos": 1100,
            "len": 100,
            "markers": []
        }, {
            "pos": 1300,
            "len": 100,
            "markers": []
        }
        ]
    }, {
        "index": 9,
        "quarters_count": 4,
        "notes": [{
            "pos": 200,
            "len": 100,
            "markers": []
        }, {
            "pos": 500,
            "len": 100,
            "markers": []
        }, {
            "pos": 1100,
            "len": 100,
            "markers": []
        }
        ]
    }, {
        "index": 10,
        "quarters_count": 4,
        "notes": [{
            "pos": 0,
            "len": 100,
            "markers": []
        }, {
            "pos": 100,
            "len": 100,
            "markers": []
        }, {
            "pos": 500,
            "len": 100,
            "markers": []
        }, {
            "pos": 900,
            "len": 100,
            "markers": []
        }, {
            "pos": 1000,
            "len": 100,
            "markers": []
        }, {
            "pos": 1300,
            "len": 100,
            "markers": []
        }
        ]
    }, {
        "index": 11,
        "quarters_count": 4,
        "notes": [{
            "pos": 200,
            "len": 100,
            "markers": []
        }, {
            "pos": 500,
            "len": 100,
            "markers": []
        }, {
            "pos": 800,
            "len": 100,
            "markers": []
        }, {
            "pos": 1200,
            "len": 100,
            "markers": []
        }
        ]
    }, {
        "index": 12,
        "quarters_count": 4,
        "notes": [{
            "pos": 200,
            "len": 100,
            "markers": []
        }, {
            "pos": 400,
            "len": 100,
            "markers": []
        }, {
            "pos": 1100,
            "len": 100,
            "markers": []
        }
        ]
    }
    ]
}
],
"markers": [{
    "id": "extended",
    "code": "*",
    "color": "ffffff00"
}, {
    "id": "ToLeft",
    "code": "<",
    "color": "ffffff00"
}, {
    "id": "ToRight",
    "code": ">",
    "color": "ffffff00"
}
]
}

My game plays just fine with the current file, but I want to make it slower, and that means changing the BPM from 128 to 240.

I have the current pos and len values for BPM 128.

I want to know if I can change the position of the notes based on the new BPM.

I do not want to change the audio file, only the notes above, so that they follow the new speed while still using the same audio file.
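For what it's worth, a hedged sketch of the arithmetic (assumptions: pos/len are ticks at 400 per quarter note, so a 4-quarter bar spans 1600 ticks, and the audio is unchanged, so each note must keep its absolute time): at the new tempo the same moment maps to proportionally more ticks, so each note's absolute tick offset scales by new_bpm / old_bpm and is then split back into a bar index and an in-bar position. TICKS_PER_BAR and rescale_track are assumed names.

TICKS_PER_BAR = 1600  # assumption: 4 quarters * 400 ticks each

def rescale_track(track, old_bpm, new_bpm):
    ratio = new_bpm / old_bpm  # e.g. 240 / 128 = 1.875
    moved = []
    for bar in track["bars"]:
        for note in bar["notes"]:
            abs_pos = bar["index"] * TICKS_PER_BAR + note["pos"]
            new_abs = round(abs_pos * ratio)
            moved.append({**note,
                          "bar": new_abs // TICKS_PER_BAR,
                          "pos": new_abs % TICKS_PER_BAR,
                          "len": round(note["len"] * ratio)})
    return moved  # regroup into bars by the "bar" key afterwards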



from Changing notes position based on new bpm

Android Device that supports concurrent capture from two rear facing cameras

I wanted to know if anyone has had success concurrently capturing images or videos on an Android device from two rear cameras, using this API: https://source.android.com/docs/core/camera/concurrent-streaming . The phone I had available to test (an S21) supported the API, but the only pairs of cameras supported were front+rear combos. In Kotlin, the code to get the supported pairs is:

val cameraManager = applicationContext.getSystemService(Context.CAMERA_SERVICE) as CameraManager
val concurrentCameras = cameraManager.concurrentCameraIds
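
As a hedged extension of that snippet, the concurrent sets can be filtered for combinations where every camera is rear-facing (a sketch):

import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraManager

fun rearOnlyConcurrentSets(cameraManager: CameraManager): List<Set<String>> =
    cameraManager.concurrentCameraIds.filter { ids ->
        ids.all { id ->
            cameraManager.getCameraCharacteristics(id)
                .get(CameraCharacteristics.LENS_FACING) ==
                CameraCharacteristics.LENS_FACING_BACK
        }
    }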

I'm hoping someone has a newer Samsung, Xiaomi, or Huawei phone to test whether the API is supported. Thanks a lot!



from Android Device that supports concurrent capture from two rear facing cameras

Can setup.py / pip require a certain version of another package IF that package is already installed?

I have two Python packages (locust-swarm and locust-plugins). Neither has a strict requirement on the other, but they can work together, and my users install them separately.

Sometimes there is a breaking change in one or the other, and I want to make sure nobody installs incompatible versions (by updating package A but not package B, for example). Is there a way to specify a minimum version of this "pseudo-dependency" and fail the install if it is not satisfied, with the check only done if the other package is already installed?

I do not want to add one package as a dependency of the other and force users of package A to install package B, just to be able to handle this case.

This question has probably been asked before, but I couldn't find an answer.
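
One hedged fallback, since pip itself has no conditional-constraint mechanism short of a full dependency declaration: a runtime check at import time. The minimum version below is a placeholder, and the version parse is deliberately naive (it would choke on pre-release strings).

from importlib.metadata import PackageNotFoundError, version

try:
    installed = version("locust-plugins")
except PackageNotFoundError:
    installed = None  # not installed: nothing to enforce

if installed is not None and tuple(int(p) for p in installed.split(".")[:2]) < (2, 0):
    raise ImportError(
        "locust-swarm requires locust-plugins >= 2.0 when both are installed; "
        f"found {installed}. Please upgrade locust-plugins."
    )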



from Can setup.py / pip require a certain version of another package IF that package is already installed?

How to Update User With JWT Token When he Tries To Login in Nodejs and Reactjs with MongoDB

I am trying to create login functionality for my React website using a Node.js Express backend.

I want to set a JWT token when the user tries to log in, update that token in my MongoDB database, then verify the token on the frontend and save it to localStorage.

However, when the user tries to log in after registration, the result comes back without the token, which prevents the user from logging in unless he clicks the login button again; only then does my code generate the JWT token and update the user with it.

Why is this behavior happening? Why does the first response only return the found user from the findOne() operation when I am resolving the result from the findOneAndUpdate() operation?

Here is my code:

User Model:

login(params) {
  

    params.email = params.email.toLowerCase();

    return new Promise((resolve, reject) => {
      db.collection("Users").findOne({ email: params.email }).then((response) => {

          console.log(response)
          if(response) {
            bcrypt.compare(params.password, response.password, (err, success) => {
              if(success) {
                let token = jwt.sign({
                  name: response.name,
                  id: response._id
                }, process.env.JWT_SECRET);

                db.collection("Users").findOneAndUpdate({
                  email: params.email
                }, {
                  $set: { token: token, lastLogin: new Date() },
                }, function (e, s) {
                  if(e) {
                    console.log(e)
                    reject(e)
                  } else {
                    console.log("updated")
                    resolve(s)
                  }
                })
              } else {
                reject({msg: 'Incorrect email or password.'})
              }
            })
          } else {
  
            reject({msg: 'cannot log in user'});
          }

      })
    })
  }

UserController:

router.post('/login', (req, res) => {

    let User = new models.User()
    let processes = [];
    processes.push(function (callback) {
        User.login(req.body).then(function (response) {
           
                callback(null, response);
        }, function (error) {
            console.log(error)
            callback(error);
        });
    });

    async.waterfall(processes, function (error, data) {
        if (!error) {
            return res.json({
                statusCode: 200,
                msg: 'User logged in successfully.',
                result: data
            });
        } else {
            return res.json({
                statusCode: 401,
                msg: 'Cannot login user.',
                error: error
            });
        }
    });

})

React Login.js:

const login = () => {
    axios.post('/login', data).then(async (response) => {
      console.log(response)
      if(response && response.data.result.value.token ) {
        localStorage.setItem("authUser", JSON.stringify(response.data.result.value.token))
        history.push("/")
        console.log(response.data.result)
      } else {
        console.log("ERROR")
      }
    })
  }
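
For readers puzzling over the symptom: by default, the MongoDB Node driver's findOneAndUpdate resolves with the document as it was before the update, so the token written by $set is not in the returned value. A hedged sketch of requesting the post-update document instead (the option name depends on the driver version):

// Sketch: ask findOneAndUpdate for the post-update document.
// Driver v4+ uses { returnDocument: "after" }; older drivers use
// { returnOriginal: false } instead.
db.collection("Users").findOneAndUpdate(
  { email: params.email },
  { $set: { token: token, lastLogin: new Date() } },
  { returnDocument: "after" },
  function (e, s) {
    if (e) { reject(e); } else { resolve(s); }
  }
);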


from How to Update User With JWT Token When he Tries To Login in Nodejs and Reactjs with MongoDB

Query influxdb with special character

I am trying to perform a simple GET on InfluxDB using Python. The connection works great and I am able to query several values. However, one of them is reported as homeassistant.autogen.°C. When I try to query it, I always get:

influxdb.exceptions.InfluxDBClientError: 400: {"error":"error parsing query: found \u00b0, expected identifier at line 1, char 43"}

The code that I use is:

client = InfluxDBClient(host='192.168.1.x', port=8086, username='user', password='password')
results = client.query(r'SELECT "value" FROM homeassistant.autogen."°C" WHERE entity_id = sensor.x_temperature')

I already tried to escape it and pass it through quotes, but nothing seems to work.

I cannot change how the value is inserted in influxdb.
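
One hedged experiment (not verified against this setup): quote the tag value as an InfluxQL string literal with single quotes, keeping the measurement in double quotes. Whether this also clears the \u00b0 parse error may depend on client and server encoding.

query = (
    'SELECT "value" FROM "homeassistant"."autogen"."°C" '
    "WHERE \"entity_id\" = 'sensor.x_temperature'"
)
results = client.query(query)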



from Query influxdb with special character

Thursday, 24 November 2022

How can I handle the code purchase.sku when I upgrade Google Play Billing Library 3 to 4?

I use Google Play in-app purchases in my app, based on the official sample project.

Code A handles non-consumable products; it works well when I launch it using com.android.billingclient:billing-ktx:3.0.3.

After I upgraded the project from Google Play Billing Library 3 to 4, I found that purchase.sku no longer works, so I have to replace it with purchase.skus.

The purchase.skus code compiles in com.android.billingclient:billing-ktx:4.0.0, but I can't get a correct order: the test purchase is refunded after 3 minutes, so it seems that Google Play doesn't acknowledge the purchase.

How can I fix Code A when upgrading Google Play Billing Library 3 to 4?

Code A

private fun processPurchases(purchasesResult: Set<Purchase>) {
        val validPurchases = HashSet<Purchase>(purchasesResult.size)
        purchasesResult.forEach { purchase ->
            if (purchase.purchaseState == Purchase.PurchaseState.PURCHASED) {
                if (purchase.sku.equals(purchaseItem)) {
                //if (purchase.skus.equals(purchaseItem)) {     //sku -> skus  in 4.0
                    if (isSignatureValid(purchase)) {
                        validPurchases.add(purchase)
                    }
                }
            } else if (purchase.purchaseState == Purchase.PurchaseState.PENDING) {
                Log.d(LOG_TAG, "Received a pending purchase of SKU: ${purchase.sku}")
                // handle pending purchases, e.g. confirm with users about the pending
                // purchases, prompt them to complete it, etc.
                mContext.toast(R.string.msgOrderPending)

            } else {
                mContext.toast(R.string.msgOrderError)
            }
        }

        acknowledgeNonConsumablePurchasesAsync(validPurchases.toList())
    }
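
A hedged observation on Code A: in Billing 4.x, purchase.skus is a List<String>, so comparing it to a single SKU string with equals() is always false, which would leave validPurchases empty and the purchase unacknowledged. A membership check is one sketch of a fix:

if (purchase.purchaseState == Purchase.PurchaseState.PURCHASED) {
    // skus is a list in 4.x; check membership instead of equality
    if (purchase.skus.contains(purchaseItem)) {
        if (isSignatureValid(purchase)) {
            validPurchases.add(purchase)
        }
    }
}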


from How can I handle the code purchase.sku when I upgrade Google Play Billing Library 3 to 4?

Better arrow cone rotation using ThreeJs

I'm drawing an arrow between pick and place points using QuadraticBezierCurve3 and ConeGeometry, but the rotation of the cone is not perfect, as you can see in the snippet below.

Can you please tell me how I can improve the visualization (cone rotation) of my arrow function? Thanks in advance.

var camera, scene, renderer;

init();
animate();

/**
 * draw arrow
 */
function drawArrow(pick_pos, place_pos, scene) {
    /**
    * Curve
    */
    const start = new THREE.Vector3();
    start.add(pick_pos)
    
    const finish = new THREE.Vector3();
    finish.add(place_pos)

    let mid = new THREE.Vector3();
    mid.add(start);
    let dist = finish.x + mid.x;
    mid.x = dist/2;
    mid.y += 3;

    const b_curve = new THREE.QuadraticBezierCurve3(
        start,
        mid,
        finish
    );
    const points = b_curve.getPoints( 100 );
    const line_geometry = new THREE.BufferGeometry().setFromPoints( points );
    const material = new THREE.LineBasicMaterial( { color: 0x0ffffb, linewidth: 5 } );
    const curve = new THREE.Line( line_geometry, material );

    /**
     * Cone
     */
    const cone_geometry = new THREE.ConeGeometry( 0.2, 1, 8 );
    const cone_material = new THREE.MeshBasicMaterial({ color: 0xff00fb });

    const cone = new THREE.Mesh( cone_geometry, cone_material );
    cone.position.set(points[98].x, points[98].y, points[98].z);
    cone.rotateX(Math.PI * points[100].x);
    cone.rotateZ(Math.PI * points[100].z);

    /**
     * Arrow
     */
    const arrow = new THREE.Group();
    arrow.add(curve);
    arrow.add(cone);
    arrow.name = 'arrow';
    scene.add(arrow);
}

/**
 Create the scene, camera, renderer
*/
function init() {
  scene = new THREE.Scene();
  scene.background = new THREE.Color(0x21252d);
  renderer = new THREE.WebGLRenderer({antialias: true});
  renderer.setPixelRatio(window.devicePixelRatio);
  renderer.setSize(window.innerWidth, window.innerHeight);
  document.body.appendChild(renderer.domElement);

  camera = new THREE.PerspectiveCamera(50, window.innerWidth / window.innerHeight, 1, 1000);
  camera.position.x = 0;
  camera.position.y = 8;
  camera.position.z = 10;
  scene.add(camera);
  
  controls = new THREE.OrbitControls(camera, renderer.domElement);
  
  const pick = new THREE.Vector3(0, 0, 0);
  const place = new THREE.Vector3(5, 0, 5);
  drawArrow(pick, place, scene);
  window.addEventListener('resize', onWindowResize);

}

function onWindowResize() {
 camera.aspect = window.innerWidth / window.innerHeight;
 camera.updateProjectionMatrix();
 renderer.setSize(window.innerWidth, window.innerHeight);
}

function animate() {
  requestAnimationFrame(animate);
  render();
}

function render() {
  renderer.render(scene, camera);
}
<script src="https://cdn.jsdelivr.net/npm/three@0.122.0/build/three.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/three@0.122.0/examples/js/controls/OrbitControls.min.js"></script>


from Better arrow cone rotation using ThreeJs

FCM: Is data message delivery really less reliable than notification message delivery?

Question

I have come across some claims that FCM data message delivery is less consistent than notification message delivery. Does anyone have direct experience, or can anyone point me to resources exploring the issue? Or is a notification message just a collapsible, high-priority data message that the Firebase SDK handles automatically?

The question does not consider the case of force-quitting the app; in that scenario, neither type of message will be delivered (to my knowledge).

Background

I am writing a new Android SDK for a push service provider (similar to OneSignal). The SDK should handle the display of push notifications by default; optionally, the client app can handle incoming pushes itself.

The actual delivery is of course done by Firebase Cloud Messaging (on devices running Play Services). So there are two types of messages to choose from on FCM: data vs notification messages.

As data messages are consistently handled by the registered FirebaseMessagingService (provided there is no notification key in the payload), this should be the way to go for the SDK. [See documentation] So far, I have not been able to produce a situation in which a data message was not delivered (foreground or background).
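
For context, a minimal sketch of the SDK-side handling described above (the class name is a placeholder):

import com.google.firebase.messaging.FirebaseMessagingService
import com.google.firebase.messaging.RemoteMessage

class PushService : FirebaseMessagingService() {
    override fun onMessageReceived(message: RemoteMessage) {
        if (message.data.isNotEmpty()) {
            // data message: display a notification by default,
            // or forward the payload to the host app's handler
        }
        // message.notification is non-null only for notification messages,
        // which reach this callback only while the app is in the foreground
    }
}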



from FCM: Is data message delivery really less reliable than notification message delivery?

petite-vue mount and unmount events with nested v-if

I'm trying to understand why I'm getting multiple/extra mount and unmount events in petite-vue. I've seen similar questions regarding Vue proper (and components), and I think they are in the same realm, but I'm still a bit confused.

It seems that when I do get the multiple mount events, there is only one event where the element being mounted has el.isConnected = true, so I could maybe use that as a trigger for whether or not to run the attached event handler. But I'm worried about performance as the number of events builds and builds while modifying the model state.

Is there a different way I should be writing these so I don't get multiple events?
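
For reference, the isConnected guard mentioned above might look like this in the template (a sketch mirroring the handlers below):

<div @vue:mounted="$el.isConnected && console.log('Mount: ' + $el.innerHTML)">Edit Screen</div>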

Scenarios:

  1. Click Edit, then Cancel: why two unmounts for the v-for="item in model.beneficiaries"?
  2. Click Edit, then Save: all OK. Click Accept: why an isConnected = false mount on 'Edit Screen'?
  3. Repeat cycles of Edit, Cancel or Edit, Save, then Accept, and read the notes about acceptClicks.

JSFiddle

<script type="module">
    import { createApp, reactive } from 'https://unpkg.com/petite-vue?module'

    const state = reactive({
        setMode: function (benefit) {
            var beneficiaries = benefit == undefined ? [] : [ "Spouse", "Son" ];
            this.model.beneficiaries = beneficiaries;
            this.model.currentBenefit = benefit;
            this.model.showConfirm = false;
        },
        startEdit: function (benefit) {
            console.log("Edit Clicked");
            this.setMode(benefit);
        },
        cancelEdit: function () {
            console.log("Cancel Clicked");
            this.setMode();
        },
        saveBenefit: function () {
            console.log("Save Clicked");
            this.model.showConfirm = true;
        },
        acceptBenefit: function () {
            console.log("Accept Clicked");
            this.model.acceptClicks++;
            this.setMode();
        },
        model: {
            acceptClicks: 0,
            currentBenefit: undefined,
            showConfirm: false,
            beneficiaries: []
        }
    });

    createApp(state).mount('#app')
</script>

<div id="app" v-scope>
    <div>State:</div>
    <div>currentBenefit: </div>
    <div>showConfirm: </div>
    <div>beneficiaries: </div>
    <div>acceptClicks: </div>
    <div v-if="model.acceptClicks > 0">
        <div>After each Accept</div>
        <ul>
            <li>When clicking Edit,  isConnected=false mounts for 'Has Beneficiaries', 'Spouse' and 'Son'</li>
            <li>When clicking Edit, then Cancel, 2 unmounts for 'Spouse' and 'Son'; 1 unmounts for 'Has Beneficiaries'</li>
        </ul>
    </div>

    <div v-if="model.currentBenefit != undefined">
        <h3>Editing </h3>
        <div v-if="!model.showConfirm">
            <div @vue:mounted="console.log('Mount: ' + $el.innerHTML + ', isConnected: ' + $el.isConnected)" @vue:unmounted="console.log('Unmount: ' + $el.innerHTML)">Edit Screen</div>
            <div v-if="model.beneficiaries.length > 0">
                <div @vue:mounted="console.log('Mount: ' + $el.innerHTML + ', isConnected: ' + $el.isConnected)" @vue:unmounted="console.log('Unmount: ' + $el.innerHTML)">Has Beneficiaries</div>
                <div v-for="item in model.beneficiaries">
                    <div @vue:mounted="console.log('Mount: ' + $el.innerHTML + ', isConnected: ' + $el.isConnected)" @vue:unmounted="console.log('Unmount: ' + $el.innerHTML)"></div>
                </div>
            </div>

            <button type="button" @click.prevent="cancelEdit">Cancel</button>
            <button type="button" @click.prevent="saveBenefit">Save Benefit</button>
        </div>
        <div v-if="model.showConfirm">
            <button type="button" @click.prevent="acceptBenefit">Accept Confirm</button>
        </div>
    </div>
    <div v-if="model.currentBenefit == undefined">
        <h3>No Benefit</h3>
        <button type="button" @click.prevent="startEdit($event, { name: 'Health Benefit' })">Edit Benefit</button>
    </div>
</div>


from petite-vue mount and unmount events with nested v-if