Sunday, 30 April 2023

Listener for barcode scanning event

How can I implement a listener in Java for my Android foreground service that detects a barcode scanning event and saves the resulting barcode to a text file, or displays a message containing the barcode when one is scanned?

I have already created a foreground service. Is it even possible for a foreground service to have a listener for events fired when another app (a warehouse app) scans a barcode? Our company only needs to additionally save each barcode, together with its scan time, to a database.
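
Whether this is possible depends on the scanner app: Android has no global "barcode scanned" event, so the warehouse app must expose the scan somehow, typically as a broadcast Intent (Zebra DataWedge, for example, has an intent-output mode). A minimal sketch under that assumption; the action string is whatever the scanner app is configured to broadcast, and the extra key shown is DataWedge's, so both may differ in your setup:

import android.app.Service;
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;
import android.os.IBinder;
import android.util.Log;

public class BarcodeListenerService extends Service {

    private final BroadcastReceiver scanReceiver = new BroadcastReceiver() {
        @Override
        public void onReceive(Context context, Intent intent) {
            String barcode = intent.getStringExtra("com.symbol.datawedge.data_string");
            if (barcode != null) {
                // Save the barcode with its scan time, or append it to a text file.
                Log.d("BarcodeListener", barcode + " @ " + System.currentTimeMillis());
            }
        }
    };

    @Override
    public void onCreate() {
        super.onCreate();
        // The action must match the scanner app's broadcast configuration.
        registerReceiver(scanReceiver, new IntentFilter("com.example.warehouse.SCAN"));
    }

    @Override
    public void onDestroy() {
        unregisterReceiver(scanReceiver);
        super.onDestroy();
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null;
    }
}

If the warehouse app does not broadcast its results (and cannot be configured to), there is no general way for another process to observe its scans.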



from Listener for barcode scanning event

Crashlytics upload symbols- "java command failed with args" -how fix it?

I have a Flutter app and I'm working on a Mac. I used obfuscation for my app bundle, so I get Crashlytics logs in unreadable form. I found that it is necessary to upload symbols to Crashlytics (Firebase). I installed the Firebase CLI and got my app ID. I ran this command:

firebase crashlytics:symbols:upload --app myId debug-info/app.android-arm64.symbols

But I get an error. These are the logs:

i  Generating symbols for debug-info-droid/app.android-arm64.symbols
i  Generated symbols for debug-info-droid/app.android-arm64.symbols
.......
i  Uploading all generated symbols...
[CRASHLYTICS LOG DEBUG] PUT headers:
[CRASHLYTICS LOG DEBUG]         User-Agent = firebase-cli;crashlytics-buildtools/2.9.2
[CRASHLYTICS LOG DEBUG]         X-CRASHLYTICS-API-CLIENT-TYPE = firebase-cli;crashlytics-buildtools
[CRASHLYTICS LOG DEBUG]         X-CRASHLYTICS-API-CLIENT-VERSION = 2.9.2
[CRASHLYTICS LOG DEBUG] PUT response: [reqId=null] 400
......
response: 400 HTTP/1.1 400 Bad Request]
        at com.google.firebase.crashlytics.buildtools.api.RestfulWebApi.sendFile(RestfulWebApi.java:109)
        at com.google.firebase.crashlytics.buildtools.api.RestfulWebApi.uploadFile(RestfulWebApi.java:119)
        at com.google.firebase.crashlytics.buildtools.api.FirebaseSymbolFileService.uploadNativeSymbolFile(FirebaseSymbolFileService.java:35)
        at com.google.firebase.crashlytics.buildtools.Buildtools.uploadNativeSymbolFiles(Buildtools.java:301)
        at com.google.firebase.crashlytics.buildtools.CommandLineHelper.executeUploadSymbols(CommandLineHelper.java:194)
        at com.google.firebase.crashlytics.buildtools.CommandLineHelper.executeCommand(CommandLineHelper.java:120)
        at com.google.firebase.crashlytics.buildtools.CommandLineHelper.main(CommandLineHelper.java:65)
        at com.google.firebase.crashlytics.buildtools.Buildtools.main(Buildtools.java:111)
Error: java command failed with args: -jar,/Users/rockstar/.cache/firebase/crashlytics/buildtools/crashlytics-buildtools-2.9.2.jar,-symbolGenerator,breakpad,-symbolFileCacheDir,/var/folders/jg/../nativeSymbols/.../breakpad,-verbose,-uploadNativeSymbols,-googleAppId,myId,-clientName,
firebase-cli;crashlytics-buildtools

How can I fix this problem?

Is it related to the Java settings on my Mac?

Or maybe I can use another solution for uploading symbols?
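
A 400 from the upload endpoint is often an app-ID mismatch, so first double-check that the value passed to --app exactly matches the App ID in the Firebase console. For the native (NDK) side, a hedged alternative is to let the Crashlytics Gradle plugin upload symbols, assuming the com.google.firebase.crashlytics plugin is applied in android/app/build.gradle:

android {
    buildTypes {
        release {
            firebaseCrashlytics {
                nativeSymbolUploadEnabled true
            }
        }
    }
}

and then run ./gradlew app:uploadCrashlyticsSymbolFileRelease after building. Note this covers NDK symbols; for Dart obfuscation symbols the firebase crashlytics:symbols:upload command remains the documented route.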



from Crashlytics upload symbols- "java command failed with args" -how fix it?

How to have multiple users run the same pipeline in snakemake - metadata permissions issue

I have a pipeline that we want to store in a shared space so that all our analysts can access it and use it without having to copy the repo / singularity container. The problem is that when one person runs the pipeline, the files in .snakemake/metadata belong to that user. When the next user tries to run the pipeline, this metadata is accessed and causes an error.

I know I could delete the data in metadata at the end of the process, but ideally we want users to be able to run the pipeline in parallel (on different inputs/outputs). All users are within the same group, but this does not help.

Any suggestions?
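
One hedged workaround at the filesystem level: make the shared .snakemake directory group-writable and setgid, and have every analyst run with a group-friendly umask, so metadata files created by one user remain writable by the others. A sketch, assuming the pipeline lives in /shared/pipeline and everyone is in group analysts:

chgrp -R analysts /shared/pipeline/.snakemake
chmod -R g+rwX /shared/pipeline/.snakemake
find /shared/pipeline/.snakemake -type d -exec chmod g+s {} +
umask 0002   # in each user's shell before running snakemake

Alternatively, pointing each run at its own working directory with snakemake --directory keeps per-user metadata separate while still sharing the repo and container.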



from How to have multiple users run the same pipeline in snakemake - metadata permissions issue

How can I implement a jitter buffer in javascript for realtime audio processing

I have a sender and a receiver running on localhost.

Both emit and receive at average intervals of 2.89 and 2.92 milliseconds respectively.

The correct constant interval should be 2.90 ms.

So I tried implementing some sort of ringbuffer where buffer size is 128 * N where 128 is the size of the data chunk and N the number of data chunks kept in memory (latency).

I haven't measured the time interval between the arrival of a packet and its processing, but according to the data I've managed to collect, a buffer of size N = 2 should be sufficient.

Also, my buffer is quite simple: it's a FIFO, and if a value is received while the buffer is full it replaces the oldest chunk.

To get correct sound I need a buffer size of N = 16, so my question is: how can I implement a low-latency jitter buffer that may be adaptive?

I guess the buffer currently adds extra memory to absorb the average variation, but what I would need is a technique to correct the variation "in place".

My current implementation:

_buffer = [];
BUFFER_SIZE = 8;
_isStarted = false;
readIndex = -1;

// This callback is triggered any time a packet is received.
_onReceivePacket = ( event ) => {
    let chunks = [];

    // chunk length = 128
    for ( let chunk of event.data ) {
      chunks.push( new Float32Array( chunk ) );
    }

    if ( this._buffer.length < this.BUFFER_SIZE ) {
      this._buffer.unshift( chunks );
      this.readIndex++;
    } else {
      this._buffer.splice( 0, 1 );
      this._buffer.unshift( chunks );
      this._isStarted = true;
    }
}

// This function copies the buffer into the output stream.
_pullOut ( output ) {
    try {
      for ( let i = 0; i < output.length; i++ ) {
        const channel = output[ i ];
        for ( let j = 0; j < channel.length; j++ ) {
          channel[ j ] = this._buffer[ this.readIndex ][ i ][ j ];
        }
      }

      if ( this.readIndex - 1 !== -1 ) {
        this._buffer.splice( this.readIndex, 1 );
        this.readIndex--;
      }
    } catch ( e ) {
      console.log( e, this._buffer, this.readIndex );
    }
}
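
For the adaptive part, a minimal sketch (not a drop-in replacement for the code above; the 128-sample chunks and the thresholds are assumptions): keep a target depth in chunks, raise it after repeated underruns, and let it drift back down while the buffer stays healthy:

// Adaptive FIFO jitter buffer sketch: depth grows on underruns, shrinks when stable.
class AdaptiveJitterBuffer {
  constructor(minChunks = 2, maxChunks = 16) {
    this.queue = [];
    this.target = minChunks;   // current latency target, in chunks
    this.min = minChunks;
    this.max = maxChunks;
    this.underruns = 0;
  }

  push(chunk) {
    this.queue.push(chunk);
    // Drop the oldest data if a late burst pushed us far above target.
    while (this.queue.length > this.target * 2) this.queue.shift();
  }

  pull() {
    if (this.queue.length === 0) {
      // Underrun: after a few in a row, raise the latency target so the
      // buffer refills deeper before we start draining again.
      this.underruns++;
      if (this.underruns > 3 && this.target < this.max) {
        this.target++;
        this.underruns = 0;
      }
      return new Float32Array(128); // play silence for this quantum
    }
    // Shrink the target again while the buffer stays comfortably full.
    if (this.queue.length > this.target && this.target > this.min) this.target--;
    return this.queue.shift();
  }
}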


from How can I implement a jitter buffer in javascript for realtime audio processing

I am trying to start Foreground service on older API version. Work on APIs 26+

I need to start my app's service in the foreground. My code works fine on API level 26 and newer, but not on older API levels. On older versions the service is shown among the running services, but it doesn't post the starting notification. What do I need to change in my code, and why isn't it working?

public void onCreate() {
        super.onCreate();
        messageIntent.setAction(getString(R.string.receiver_receive));



        NotificationCompat.Builder builder = new NotificationCompat.Builder(this, NOTIFICATION_CHANNEL_ID_DEFAULT)
                .setOngoing(false).setSmallIcon(R.drawable.ic_launcher).setPriority(Notification.PRIORITY_MIN);

        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
            NotificationManager notificationManager = (NotificationManager) getSystemService(Context.NOTIFICATION_SERVICE);
            NotificationChannel notificationChannel = new NotificationChannel(NOTIFICATION_CHANNEL_ID_DEFAULT,
                    NOTIFICATION_CHANNEL_ID_DEFAULT, NotificationManager.IMPORTANCE_LOW);
            notificationChannel.setDescription(NOTIFICATION_CHANNEL_ID_DEFAULT);
            notificationChannel.setSound(null, null);
            notificationManager.createNotificationChannel(notificationChannel);
            startForeground(1, builder.build());
        }

    }

Starting the Service

protected void onStart() {
        super.onStart();
        // Bind to LocalService
        Intent intent = new Intent(this, SocketService.class);
        bindService(intent, serviceConnection, Context.BIND_AUTO_CREATE);

        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O)
            ContextCompat.startForegroundService(this, new Intent(this, SocketService.class));
        else
            this.startService(new Intent(this, SocketService.class));
    }
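
A hedged sketch of the likely fix: startForeground() has existed since API 5, but in the code above it is only called inside the SDK_INT >= O branch, so pre-Oreo devices never promote the service. Only the channel creation needs the version guard:

@Override
public void onCreate() {
    super.onCreate();

    // Notification channels only exist (and are only required) on API 26+.
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
        NotificationManager nm = (NotificationManager) getSystemService(Context.NOTIFICATION_SERVICE);
        NotificationChannel channel = new NotificationChannel(
                NOTIFICATION_CHANNEL_ID_DEFAULT, NOTIFICATION_CHANNEL_ID_DEFAULT,
                NotificationManager.IMPORTANCE_LOW);
        channel.setSound(null, null);
        nm.createNotificationChannel(channel);
    }

    NotificationCompat.Builder builder = new NotificationCompat.Builder(this, NOTIFICATION_CHANNEL_ID_DEFAULT)
            .setOngoing(false)
            .setSmallIcon(R.drawable.ic_launcher)
            .setPriority(NotificationCompat.PRIORITY_MIN);

    // Call startForeground() unconditionally, on every API level.
    startForeground(1, builder.build());
}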


from I am trying to start Foreground service on older API version. Work on APIs 26+

Saturday, 29 April 2023

Android GLES - Flame shader

I'm quite new to GLES 2.0 on Android, and I am trying to create a flame shader using GLSL. I tried applying the shader from the following link: https://www.shadertoy.com/view/MdKfDh

However, the result I got is not satisfactory.

[image: Flame Shader]

The image I created looks like the flames are flowing diagonally and the shape is stretched too much vertically, while the flame shader on Shadertoy gives the impression of flames bursting out. The GLSL fragment code I wrote is this:

precision mediump float;

#define timeScale           iTime * 1.0
#define fireMovement        vec2(-0.01, -0.5)
#define distortionMovement  vec2(-0.01, -0.3)
#define normalStrength      40.0
#define distortionStrength  0.1

uniform vec2 screenSize;
uniform float progress;

// #define DEBUG_NORMAL

/** NOISE **/
float rand(vec2 co) {
    return fract(sin(dot(co.xy ,vec2(12.9898,78.233))) * 43758.5453);
}

vec2 hash( vec2 p ) {
    p = vec2( dot(p,vec2(127.1,311.7)),
    dot(p,vec2(269.5,183.3)) );

    return -1.0 + 2.0*fract(sin(p)*43758.5453123);
}

float noise( in vec2 p ) {
    const float K1 = 0.366025404; // (sqrt(3)-1)/2;
    const float K2 = 0.211324865; // (3-sqrt(3))/6;

    vec2 i = floor( p + (p.x+p.y)*K1 );

    vec2 a = p - i + (i.x+i.y)*K2;
    vec2 o = step(a.yx,a.xy);
    vec2 b = a - o + K2;
    vec2 c = a - 1.0 + 2.0*K2;

    vec3 h = max( 0.5-vec3(dot(a,a), dot(b,b), dot(c,c) ), 0.0 );

    vec3 n = h*h*h*h*vec3( dot(a,hash(i+0.0)), dot(b,hash(i+o)), dot(c,hash(i+1.0)));

    return dot( n, vec3(70.0) );
}

float fbm ( in vec2 p ) {
    float f = 0.0;
    mat2 m = mat2( 1.6,  1.2, -1.2,  1.6 );
    f  = 0.5000*noise(p); p = m*p;
    f += 0.2500*noise(p); p = m*p;
    f += 0.1250*noise(p); p = m*p;
    f += 0.0625*noise(p); p = m*p;
    f = 0.5 + 0.5 * f;
    return f;
}

/** DISTORTION **/
vec3 bumpMap(vec2 uv, vec2 resolution) {
    vec2 s = 1. / resolution;
    float p =  fbm(uv);
    float h1 = fbm(uv + s * vec2(1., 0));
    float v1 = fbm(uv + s * vec2(0, 1.));

    vec2 xy = (p - vec2(h1, v1)) * normalStrength;
    return vec3(xy + .5, 1.);
}

vec3 constructCampfire(vec2 resolution, vec2 normalized, float time) {
    vec3 normal = bumpMap(normalized * vec2(1.0, 0.3) + distortionMovement * time, resolution);

    vec2 displacement = clamp((normal.xy - .5) * distortionStrength, -1., 1.);
    normalized += displacement;

    vec2 uvT = (normalized * vec2(1.0, 0.5)) + time * fireMovement;
    float n = pow(fbm(8.0 * uvT), 1.0);

    float gradient = pow(1.0 - normalized.y, 2.0) * 5.;
    float finalNoise = n * gradient;
    return finalNoise * vec3(2.*n, 2.*n*n*n, n*n*n*n);
}

void main() {
    vec2 resolution = screenSize;
    vec2 normalized = gl_FragCoord.xy / resolution;
    vec3 campfire = constructCampfire(resolution, normalized, progress);
    gl_FragColor = vec4(campfire, 1.0);
}

As you can see, the GLSL code I wrote is very similar to the Shadertoy code. The difference is that I pass in screenSize (pixels) and progress (seconds, as a float) from the Android side. For your information, the code above is the fragment shader, and I've already set up the vertex shader's gl_Position to cover the entire screen. I am not sure why the result is not what I expected.

Also, I have another problem where after about 10 seconds, the image becomes pixelated and gradually turns into a square flame.

[image: pixelated flame shader]

It seems to be a problem when the progress value becomes too large, but I am not sure of the reason, and don't know how to fix it.
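
A hedged guess for the pixelation: with precision mediump float, a steadily growing progress loses fractional precision inside the sin()-based hash, so the noise collapses into blocks. Two mitigations, sketched (the wrap period of 300.0 is arbitrary; GL_FRAGMENT_PRECISION_HIGH is the standard GLES2 macro for highp support):

#ifdef GL_FRAGMENT_PRECISION_HIGH
precision highp float;
#else
precision mediump float;
#endif

// ...and/or keep the uniform small before it reaches the noise:
float t = mod(progress, 300.0);

For the diagonal stretching, the usual suspect is aspect ratio: dividing gl_FragCoord.xy by screenSize maps the screen to a 0..1 square, which the noise then stretches to the screen's proportions; dividing both axes by screenSize.y instead keeps the noise space square.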

If anyone knows the cause and solution to these issues, I would greatly appreciate your response. Thank you.



from Android GLES - Flame shader

python-docx-template: Construct Word table in python and place into a specific location in a Word template

I wish to construct a Microsoft Word table using Python and then place it into a location marked by a placeholder in the Word template. Is this possible using python-docx and docxtpl?

The code below is a basic example of what I'm trying to do, but I'm unsure how to place the table into the Word template.

I know I can create a basic table using Jinja2 tags and add rows to the table using the render method, but the table creation logic is a bit involved (merging of certain cells, and empty spaces to separate related rows), so I'd prefer to construct the table using Python, and not have to use Jinja2 for that. Any help is appreciated!

from docxtpl import DocxTemplate
from docx.table import Table

doc = DocxTemplate('template.docx')

items = [
    {'column1': 'Item 1-1', 'column2': 'Item 1-2', 'column3': 'Item 1-3'},
    {'column1': 'Item 2-1', 'column2': 'Item 2-2', 'column3': 'Item 2-3'},
    {'column1': 'Item 3-1', 'column2': 'Item 3-2', 'column3': 'Item 3-3'}
]

rows = []
for item in items:
    row = [item['column1'], item['column2'], item['column3']]
    rows.append(row)

table = Table(len(rows)+1, 3)  # Create a new table with the correct number of rows and columns
table.cell(0, 0).text = 'Column 1'  # Add the column headers to the first row
table.cell(0, 1).text = 'Column 2'
table.cell(0, 2).text = 'Column 3'
for i, row in enumerate(rows):
    table.cell(i+1, 0).text = row[0]  # Add the row data to the table
    table.cell(i+1, 1).text = row[1]
    table.cell(i+1, 2).text = row[2]

# Code required here to place the table in the word template at a specific placeholder location.

doc.save('output.docx')
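
One approach docxtpl itself offers is subdocuments: build the table with the python-docx API on a Subdoc, then render it into a {{ items_table }} placeholder. A sketch under that assumption (a Subdoc proxies a python-docx Document, and the 'Table Grid' style must exist in template.docx):

from docxtpl import DocxTemplate

doc = DocxTemplate('template.docx')   # template contains {{ items_table }}
sd = doc.new_subdoc()

table = sd.add_table(rows=1, cols=3)
table.style = 'Table Grid'
header = table.rows[0].cells
header[0].text, header[1].text, header[2].text = 'Column 1', 'Column 2', 'Column 3'

for item in items:
    cells = table.add_row().cells
    cells[0].text = item['column1']
    cells[1].text = item['column2']
    cells[2].text = item['column3']

doc.render({'items_table': sd})
doc.save('output.docx')

Since the table is built with plain python-docx calls, merged cells (cell_a.merge(cell_b)) and empty spacer rows can be added the same way.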


from python-docx-template: Construct Word table in python and place into a specific location in a Word template

Displaying a dropdownmenu at the top end of the composable screen

I have the following, where the dropdown menu is displayed on the left, but I want to display it on the right.

[screenshot]

I have the following code that uses a Scaffold. Its topBar contains a Column with my AgendaTopBar and, below that, the AgendaDropDownMenu, which is aligned with align(Alignment.End):

 Scaffold(
        modifier = modifier,
        topBar = {
            Column(modifier = Modifier.fillMaxWidth()) {
                AgendaTopBar(
                    modifier = Modifier
                        .fillMaxWidth()
                        .background(color = MaterialTheme.colorScheme.backgroundBackColor)
                        .padding(start = 16.dp, end = 16.dp, top = 8.dp, bottom = 8.dp),
                    initials = agendaScreenState.usersInitials,
                    displayMonth = agendaScreenState.selectedDate.month.toString(),
                    onProfileButtonClicked = {
                        agendaScreenEvent(AgendaScreenEvent.OnOpenLogoutDropDownMenu(shouldOpen = true))
                    },
                    onDateClicked = {
                        calendarState.show()
                    },
                )

                AgendaDropDownMenu(
                    modifier = Modifier
                        .background(color = MaterialTheme.colorScheme.dropDownMenuBackgroundColor)
                        .align(Alignment.End),
                    shouldOpenDropdown = agendaScreenState.shouldOpenLogoutDropDownMenu,
                    onCloseDropdown = {
                        agendaScreenEvent(
                            AgendaScreenEvent.OnChangedShowDropdownStatus(shouldOpen = false)
                        )
                    },
                    listOfMenuItemId = listOf(me.androidbox.presentation.R.string.logout),
                    onSelectedOption = { _ ->
                        agendaScreenEvent(AgendaScreenEvent.OnOpenLogoutDropDownMenu(shouldOpen = false))
                        onLogout()
                    }
                )
            }
        },
        floatingActionButton = { 
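
A hedged sketch of one way to pin the menu to the top end: DropdownMenu positions itself relative to its parent in the composition, so wrapping it in a Box aligned to TopEnd inside a full-width container moves the anchor point to the right edge (material3 names below; adapt to the AgendaDropDownMenu internals):

import androidx.compose.foundation.layout.Box
import androidx.compose.foundation.layout.fillMaxWidth
import androidx.compose.material3.DropdownMenu
import androidx.compose.material3.DropdownMenuItem
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.Alignment
import androidx.compose.ui.Modifier

@Composable
fun TopEndMenu(expanded: Boolean, onDismiss: () -> Unit) {
    Box(modifier = Modifier.fillMaxWidth()) {
        // The inner Box is the menu's anchor; aligning it TopEnd makes the
        // menu drop down from the right-hand edge of the bar.
        Box(modifier = Modifier.align(Alignment.TopEnd)) {
            DropdownMenu(expanded = expanded, onDismissRequest = onDismiss) {
                DropdownMenuItem(text = { Text("Logout") }, onClick = onDismiss)
            }
        }
    }
}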


from Displaying a dropdownmenu at the top end of the composable screen

setImmediate vs. nextTick

Node.js version 0.10 was released today and introduced setImmediate. The API changes documentation suggests using it when doing recursive nextTick calls.

From what MDN says it seems very similar to process.nextTick.

When should I use nextTick and when should I use setImmediate?
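
A quick ordering demo (Node.js) makes the difference visible: process.nextTick callbacks run before the event loop continues, while setImmediate callbacks run on a following loop iteration, after pending I/O events:

setImmediate(() => console.log('setImmediate'));
process.nextTick(() => console.log('nextTick'));
console.log('sync');
// prints: sync, nextTick, setImmediate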



from setImmediate vs. nextTick

Adversarial attack with defense mechanism for ASR

I have studied adversarial attacks on Automatic Speech Recognition (ASR) systems. I want to implement some attacks, with corresponding defense mechanisms, on an ASR system.

For this, I need an existing ASR implementation to start from. Can you help me with that?



from Adversarial attack with defense mechanism for ASR

Friday, 28 April 2023

Django using prefetch_related to reduce queries

I am trying to understand how I can improve the following query:

class PDFUploadRequestViewSet(viewsets.ModelViewSet):

    def get_queryset(self):
        project_id = self.request.META.get('HTTP_PROJECT_ID', None)
        if project_id:
            return PDFUploadRequest.objects.filter(project_id=project_id)
        else:
            return PDFUploadRequest.objects.all()

    def get_serializer_class(self):
        if self.action == 'list':
            return PDFUploadRequestListSerializer
        else:
            return self.serializer_class

The issue is that the more PDFPageImage objects there are in the DB, the more separate queries are created, one per object, slowing down the request. If there is only one PDFPageImage related to a given PDFUploadRequest it's pretty fast, but each additional object produces an extra query. After doing some research I found that prefetch_related might help with this, but I have not been able to figure out how to use it with my models.

This is what the PDFUploadRequest model looks like:

class PDFUploadRequest(models.Model, BaseStatusClass):
    id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
    file = models.FileField(upload_to='uploaded_pdf')
    file_name = models.CharField(max_length=255)
    status = models.CharField(
        max_length=50,
        choices=BaseStatusClass.PDF_STATUS_CHOICES,
        default=BaseStatusClass.UPLOADED,
    )
    completed = models.DateTimeField(null=True)
    processing_started = models.DateTimeField(null=True)
    text = models.TextField(default=None, null=True, blank=True)
    owner = models.ForeignKey(User, related_name='pdf_requests', on_delete=models.PROTECT, null=True, default=None)
    project = models.ForeignKey(Project, related_name='pdf_requests', on_delete=models.PROTECT, null=True, default=None)

    class Meta:
        ordering = ['-created']

    def no_of_pages(self):
        return self.pdf_page_images.count()

    def time_taken(self):
        if self.completed and self.processing_started:
            return self.completed - self.processing_started

And this is the related model that I think is causing issues:

class PDFPageImage(models.Model, BaseStatusClass):
    id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
    pdf_request = models.ForeignKey(PDFUploadRequest, related_name="pdf_page_images", on_delete=models.CASCADE)
    image = models.ImageField()
    status = models.CharField(
        max_length=50,
        choices=BaseStatusClass.PDF_STATUS_CHOICES,
        default=BaseStatusClass.UPLOADED,
    )
    page_number = models.IntegerField(null=True, blank=True, default=None)
   
    class Meta:
        ordering = ['page_number']
        constraints = [
            models.UniqueConstraint(fields=['pdf_request', 'page_number'],
                                    condition=models.Q(deleted=False),
                                    name='pdf_request_and_page_number_unique')
        ]

Here is the serializer:

class PDFUploadRequestSerializer(serializers.ModelSerializer):

    pdf_page_images = PDFPageImageSerializer(many=True, read_only=True)


    class Meta:
        model = PDFUploadRequest
        fields = ('id', 'file', 'file_name', 'status', 'pdf_page_images',
                  'owner', 'project')
        read_only_fields = ('file_name', 'pdf_page_images', 'text',
                           'owner', 'project')

I have tried using prefetch_related with the pdf_page_images relation:

PDFUploadRequest.objects.filter(project_id=project_id).prefetch_related("pdf_page_images")

But I don't think it is doing anything. Any idea what I can do to reduce the query time here?
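
A hedged sketch of what usually fixes this shape of N+1: prefetch the relation in get_queryset so the nested serializer stops querying per object, and annotate the page count, since no_of_pages() calls .count(), which can issue its own query per row even when the relation is prefetched:

from django.db.models import Count

def get_queryset(self):
    queryset = (PDFUploadRequest.objects
                .prefetch_related('pdf_page_images')
                .annotate(pages_count=Count('pdf_page_images')))
    project_id = self.request.META.get('HTTP_PROJECT_ID', None)
    if project_id:
        queryset = queryset.filter(project_id=project_id)
    return queryset

The serializer (or no_of_pages) can then read the annotated pages_count instead of triggering a fresh COUNT per object; django-debug-toolbar or connection.queries will confirm whether the per-row queries are gone.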



from Django using prefetch_related to reduce queries

Maximize View.OnTouchListener sample rate?

When a View.OnTouchListener() is receiving MotionEvents (including historical ones that occurred since the last call to onTouch()), does Android (13) capture samples at the highest rate the phone's hardware is capable of, or does it normally limit the sample rate, and only kick it up to max if you somehow explicitly ask it to (kind of like a gaming mouse that's capable of delivering 1000hz+ sample rates when requested, but normally limits itself to 125 or 250hz under Windows to improve battery life).

Along a similar line... is View.OnTouchListener() the way to get touchscreen samples at the absolute maximum rate the hardware allows, or is there some new/alternate API (probably intended for gaming) that exposes higher sample rates than the phone would normally be inclined to use?

If there is a way (available as of Android 13 on a Pixel 7 Pro) to sample the touchscreen at a faster rate than View.OnTouchListener() normally allows/defaults-to, what is it?

Note that I'm not asking, "How can I make onTouch() get called more frequently". The current reporting rate and historical batching is fine. I just want the sample-to-sample time of those batches to be as low as the hardware & Android conceivably allows.
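
For measuring what you actually receive, a small sketch: the batched historical samples inside each MotionEvent carry their own timestamps, so logging the deltas shows the real sample-to-sample interval the hardware/OS is delivering (standard MotionEvent APIs):

view.setOnTouchListener(new View.OnTouchListener() {
    @Override
    public boolean onTouch(View v, MotionEvent event) {
        long prev = -1;
        for (int h = 0; h < event.getHistorySize(); h++) {
            long t = event.getHistoricalEventTime(h); // ms timestamp of each batched sample
            if (prev >= 0) Log.d("TouchRate", "delta=" + (t - prev) + " ms");
            prev = t;
        }
        if (prev >= 0) {
            Log.d("TouchRate", "delta=" + (event.getEventTime() - prev) + " ms");
        }
        return true;
    }
});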



from Maximize View.OnTouchListener sample rate?

MoviePy - getting progress bar values

I am running a Python script which converts a video file to an audio clip using MoviePy.

def convert(mp3_file,mp4_file):

    videoclip = VideoFileClip(mp4_file)
    audioclip = videoclip.audio
    audioclip.write_audiofile(mp3_file)
    audioclip.close()
    videoclip.close()

I found out that Moviepy uses a library called Proglog to print a command line progress bar.

How do I get these process completion percentage values?
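
A sketch of the usual Proglog route: subclass proglog.ProgressBarLogger, override its bars_callback hook, and pass an instance as the logger argument (the class name and print format below are mine):

import proglog
from moviepy.editor import VideoFileClip

class PercentLogger(proglog.ProgressBarLogger):
    def bars_callback(self, bar, attr, value, old_value=None):
        # Called on every progress update; self.bars[bar]['total'] is the bar's end value.
        total = self.bars[bar]['total']
        if total:
            print(f'{bar}: {value / total * 100:.1f}%')

def convert(mp3_file, mp4_file):
    videoclip = VideoFileClip(mp4_file)
    videoclip.audio.write_audiofile(mp3_file, logger=PercentLogger())
    videoclip.close()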



from MoviePy - getting progress bar values

How to remove space/ curve/ cradle among FAB and BottomAppBar?

I have the following XML code to form relation between BottomAppBar and FloatingActionButton.

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:orientation="vertical"
    android:id="@+id/container"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <fragment
        android:id="@+id/nav_host_fragment_activity_main"
        android:name="androidx.navigation.fragment.NavHostFragment"
        android:layout_width="match_parent"
        android:layout_height="0dp"
        android:layout_weight="1"
        app:defaultNavHost="true"
        app:navGraph="@navigation/mobile_navigation" />

    <androidx.coordinatorlayout.widget.CoordinatorLayout
        android:id="@+id/coordinator_layout"
        android:layout_width="match_parent"
        android:layout_height="wrap_content">

        <!--
        Workaround: android:layout_gravity="bottom" is required so that bottom bar stays at bottom
        most.
        -->
        <com.google.android.material.bottomappbar.BottomAppBar
            android:layout_gravity="bottom"
            
            android:backgroundTint="#ffff00"
            
            android:id="@+id/bottom_app_bar"
            android:layout_width="match_parent"
            android:layout_height="wrap_content">
            <!--
            Workaround: android:layout_marginEnd="16dp" is required to balance the mystery
            marginStart.
            -->
            <!--
            Workaround: app:elevation="0dp" and app:backgroundTint="@android:color/transparent" is
            used due to the following reason :-
            https://stackoverflow.com/questions/63518075/android-problem-in-transparent-bottom-navigation-inside-the-bottom-app-bar
            -->
            <com.google.android.material.bottomnavigation.BottomNavigationView
                android:layout_marginEnd="16dp"
                app:elevation="0dp"
                app:backgroundTint="@android:color/transparent"

                android:id="@+id/nav_view"
                android:layout_width="match_parent"
                android:layout_height="wrap_content"
                app:menu="@menu/bottom_nav_menu" />
        </com.google.android.material.bottomappbar.BottomAppBar>

        <com.google.android.material.floatingactionbutton.FloatingActionButton
            android:id="@+id/fab"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:contentDescription="@string/app_name"
            android:src="@drawable/ic_android_black_24dp"
            app:layout_anchor="@id/bottom_app_bar" />
    </androidx.coordinatorlayout.widget.CoordinatorLayout>
    
</LinearLayout>

This is what we are getting.

[screenshot: current result, cradle visible]

However, this isn't exactly what we wish for. We wish to remove the space/curve/cradle between the BottomAppBar and the FloatingActionButton.

We tried to place the following attribute in BottomAppBar

app:fabAnchorMode="embed"

The space/curve/cradle is no longer seen. However, the FloatingActionButton also moved downward, which is not what we wish for either.

[screenshot: FAB embedded in the bar]


How can we remove the space/curve/cradle between the BottomAppBar and the FloatingActionButton, yet keep the FloatingActionButton positioned slightly above the BottomAppBar?
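
A hedged sketch of one thing worth trying before fabAnchorMode: BottomAppBar exposes cradle attributes, and zeroing them removes the cut-out while leaving the FAB anchored (and thus raised) above the bar; whether the edge ends up perfectly flush may depend on the Material Components version in use:

<com.google.android.material.bottomappbar.BottomAppBar
    android:id="@+id/bottom_app_bar"
    android:layout_gravity="bottom"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    app:fabCradleMargin="0dp"
    app:fabCradleRoundedCornerRadius="0dp"
    app:fabCradleVerticalOffset="0dp">
    <!-- BottomNavigationView child as before -->
</com.google.android.material.bottomappbar.BottomAppBar>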



from How to remove space/ curve/ cradle among FAB and BottomAppBar?

How to keep many Android/Flutter tools up to date

I am using Flutter to develop Android app.

There is a large number of tools needed to get this to work. Each of these tools has a version number you have to specify. I am struggling to find a place that explains what version should be used. Occasionally, the build fails when I update to a newer version of the tool, but who knows what other things are affected that are not an outright build failure.

Can someone help explain how to keep the project up to date with the correct recent versions?

These are the tools/frameworks I am talking about

Gradle

This is very confusing. There is Gradle itself inside Flutter, and the Gradle build tools (also referred to as the Gradle plugin).

The Gradle version is defined in gradle-wrapper.properties in the android\gradle\wrapper\ folder. I have upgraded this to version 7.5.1.

distributionUrl=https\://services.gradle.org/distributions/gradle-7.5.1-all.zip

There are more recent versions here: gradle.org/releases/

There is some compatibility info here: docs.gradle.org/compatibility and here: developer.android.com/gradle-plugin but nothing for Flutter specifically.

Going by these documents, it seems that Android Studio recommends using Gradle version 7.5 (the latest patch at the moment is 7.5.1) with Gradle plugin 7.4 (the latest patch at the time of writing is 7.4.2), as per developer.android.com/gradle-plugin.

The plugin/build tools version is defined in the android/build.gradle config file (there is another one in the android/app folder):

 dependencies {
        classpath 'com.android.tools.build:gradle:7.4.2'

Flutter has a default Gradle build tools version in its configuration file, which refers to version 4; that is very old in 2023. I'm not sure why the Flutter default uses such an old version. Perhaps it doesn't matter, but usually you get some benefit from upgrading, otherwise why bother?

..\flutter\packages\flutter_tools\gradle\flutter.gradle: classpath 'com.android.tools.build:gradle:4.1.0'

Kotlin

Not sure why both Kotlin and Java are used, but this somehow has to play nicely with Gradle. Through trial and error I have arrived at version 1.8.10, which works with Gradle build tools 7.4.2 and Gradle 7.5.1.

This is in android/build.gradle file:

ext.kotlin_version = '1.8.10'

Java

There is a lot of confusion about this, especially with the recent version of Android Studio moving the Java JRE from the jre to the jbr folder deep in the guts of the Android Studio installation path, which caused flutter doctor to freak out and builds to fail.

There is also this (below) in the build.gradle file. Some documentation refers to version 1.6; I am using 1.8. This document mentions 1.8 a lot, but also version 2 to be used with Gradle 7.4: developer.android.com/java8-support. Not sure if I should upgrade.

compileOptions {
       sourceCompatibility JavaVersion.VERSION_1_8
       targetCompatibility JavaVersion.VERSION_1_8

}

// For Kotlin projects
kotlinOptions {
   jvmTarget = "1.8"
}

There is also the Java SDK (JDK). Some docs say to use version 11; Gradle supports up to version 19.

I am using openjdk version "11.0.16.1" 2022-08-12 LTS. Not sure if there is any benefit in upgrading to a later version.

In the appName_android.iml file there is this; not sure what it means or whether it ever needs to be upgraded:

<module type="JAVA_MODULE" version="4">

NDK

This tool is installed and patched through Android Studio.

Flutter refers to it in its config files. I am overriding it in the android/app/build.gradle config file, pointing to the latest version installed using the Android Studio SDK manager:

android {
    compileSdkVersion 33 //flutter.compileSdkVersion
    ndkVersion '25.2.9519653'

I found this compatibility info from Android Studio: developer.android.com/default-ndk-per-agp. It points to NDK version 23.1.7779620 for Gradle plugin 7.4.

This version is also used in this Flutter config file: ..\flutter\packages\flutter_tools\gradle\flutter.gradle

Flutter Build Tools

This is supposed to be configured in the local.properties file, but I ended up adding it to android/app/build.gradle:

def flutterBuildToolsVersion = localProperties.getProperty('flutter.buildToolsVersion')
if (flutterBuildToolsVersion == null) {
    flutterBuildToolsVersion = '33.0.2'
}

Does this have to do with the target version of Android? Some docs refer to version 30.0.3; I am setting it to match the latest version in Android Studio.

AndroidX work runtime ktx

I ran into this one at some point due to a build error, and added it to the android/build.gradle dependencies:

implementation 'androidx.work:work-runtime-ktx:2.8.1'

Not sure if it even needs updating?

Multidex

I ran into this one due to a build failure after removing x86 from the release NDK abiFilters. Adding these options to build.gradle resolved the issue. I'm not sure what this is about or whether we need to update it.

multiDexEnabled = true
implementation("androidx.multidex:multidex:2.0.1")

Any help in keeping all these up to date and working together would be appreciated. I'd like some guidelines for keeping everything current going forward, not just the versions that should be used at the moment...
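
Not a full answer, but two standard commands help audit what is actually in play before and after each upgrade:

flutter doctor -v
cd android && ./gradlew --version

The first reports the Flutter/Dart versions, the Android toolchain, and which Java the tooling found; the second prints the Gradle, Kotlin, and JVM versions the wrapper actually resolves, which is the ground truth when the various config files disagree.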



from How to keep many Android/Flutter tools up to date

Multi Thread execution for webscrapping with Selenium throwing errors - Python

I have around 30k license numbers that I want to search on a website, extracting all the relevant information for each. When I extract the information with the function below by looping through multiple license_nums, the code works fine and gives me what I am looking for.

# create a UserAgent object to generate random user agents
user_agent = UserAgent()

# create a ChromeOptions object to set the user agent in the browser header
chrome_options = Options()
chrome_options.add_argument(f'user-agent={user_agent.random}')
chrome_options.add_argument("start-maximized")

# create a webdriver instance with the ChromeOptions object
driver = webdriver.Chrome(options=chrome_options,executable_path=r'C:\WebDrivers\ChromeDriver\chromedriver_win32\chromedriver.exe')

driver.execute_script("Object.defineProperty(navigator, 'webdriver', {get: () => undefined})")
driver.execute_cdp_cmd('Network.setUserAgentOverride', {"userAgent": 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.53 Safari/537.36'})
print(driver.execute_script("return navigator.userAgent;"))

form_url = "https://cdicloud.insurance.ca.gov/cal/LicenseNumberSearch?handler=Search"
driver.get(form_url)

license_num = ['0726675', '0747600', '0691046', '0D95524', '0E77989', '0L78427']

def get_license_info(license):
    if license not in license_num:
        return pd.DataFrame()
    df_license = []
    search_box = driver.find_element('id','SearchLicenseNumber').send_keys(license)
    time.sleep(randint(15,100))
    WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.ID, "btnSearch"))).click()
    page_source = driver.page_source
    soup = BeautifulSoup(page_source, "html.parser")

    table = soup.find('table', id='searchResult')
    license_name = []
    license_number =[]

    #extract all license names on the page
    # Collecting data
    for row in table.tbody.find_all('tr'):    
        # Find all data for each column
        columns = row.find_all('td')

        if(columns != []):
            l_name = columns[0].text.strip().replace("\t"," ")
            license_name.append(l_name)
            license_number.append(columns[1].text.strip())
            print(l_name)
    for row in range(0, len(license_name)):      
            first_page_handle = driver.current_window_handle
            time.sleep(5)
    
            WebDriverWait(driver, 40).until(EC.element_to_be_clickable((By.XPATH, f"//table[@id='searchResult']/tbody/tr[{row+1}]/td[2]/a"))).click()
            try:
                driver.switch_to.window(driver.window_handles[1])
                html = driver.page_source
                soup = BeautifulSoup(html, "lxml")
                #Grab license type and Expiration date
                table_l = soup.find('table', id='licenseDetailGrid')
                data = []
                for tr in table_l.find_all('tr'):
                    row = [td.text for td in tr.find_all('td')]
                    data.append(row)
                df1 = pd.DataFrame(data, columns=['license_type','original_issue_date','status','status_date','exp_date'])
                time.sleep(5)
                business = soup.find("div",id="collapse-LicenseDetailSection").extract()
                b_list = list(business.stripped_strings)
                df_final = df1[df1['license_type'].str.contains("Accident",na=False)]
                df_final = df_final.assign(license_type=df_final['license_type'].str.extract('(.*)\n'))
                df_final['license_name'] = l_name
                df_final['license_number'] = license
                df_license.append(df_final)
                driver.close()
                driver.switch_to.window(first_page_handle)
            except NoSuchWindowException:
                    print("Window closed, skipping to next license")


    driver.find_element('id','SearchLicenseNumber').clear()
    time.sleep(5)

    return pd.concat(df_license)

But when I try to run it multi-threaded, it doesn't show the value in the search field and throws errors.

Approach 1, from Scraping multiple webpages at once with Selenium:

with futures.ThreadPoolExecutor() as executor:     
    # store the url for each thread as a dict, so we can know which thread fails
    future_results = {license: executor.submit(get_license_info, license) for license in license_num}
    
    for license, future in future_results.items(): 
        try:
            df_license = pd.concat([f.result() for f in future_results.values()])
        except Exception as exc:
            print('An exception occurred: {}'.format(exc))

Approach 2, from How to run `selenium-chromedriver` in multiple threads:

start_time = time.time()    
threads = [] 
for license in license_num: # each thread could be like a new 'click' 
    th = threading.Thread(target=get_license_info, args=(license,))    
    th.start() # could `time.sleep` between 'clicks' to see whats'up without headless option
    threads.append(th)        
for th in threads:
    th.join() # Main thread wait for threads finish
print("multiple threads took ", (time.time() - start_time), " seconds")

Can anybody help me with this? Thank you in advance.



from Multi Thread execution for webscrapping with Selenium throwing errors - Python

Computing a norm in a loop slows down the computation with Dask

I was trying to implement a conjugate gradient algorithm using Dask (for didactic purposes) when I realized that the performance was way worse than a simple NumPy implementation. After a few experiments, I was able to reduce the problem to the following snippet:

import numpy as np
import dask.array as da
from time import time


def test_operator(f, test_vector, library=np):
    for n in (10, 20, 30):
        v = test_vector()

        start_time = time()
        for i in range(n):
            v = f(v)
            k = library.linalg.norm(v)
    
            try:
                k = k.compute()
            except AttributeError:
                pass
            print(k)
        end_time = time()

        print('Time for {} iterations: {}'.format(n, end_time - start_time))

print('NUMPY!')
test_operator(
    lambda x: x + x,
    lambda: np.random.rand(4_000, 4_000)
)

print('DASK!')
test_operator(
    lambda x: x + x,
    lambda: da.from_array(np.random.rand(4_000, 4_000), chunks=(2_000, 2_000)),
    da
)

In the code, I simply multiply a vector by 2 (this is what f does) and print its norm. When running with Dask, each iteration slows down a little bit more. This problem does not happen if I do not compute k, the norm of v.

Unfortunately, in my case, that k is the norm of the residual that I use to stop the conjugate gradient algorithm. How can I avoid this problem? And why does it happen?
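
A hedged sketch of the usual remedy: v's task graph grows by one layer per iteration, and every norm(...).compute() re-executes the whole accumulated graph from scratch. Persisting v truncates the graph each round, so the per-iteration cost stays flat:

for i in range(n):
    v = f(v)
    v = v.persist()                      # materialize chunks, cutting the graph here
    k = da.linalg.norm(v).compute()
    print(k)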

Thank you!



from Computing a norm in a loop slows down the computation with Dask

Thursday, 27 April 2023

Pytorchvideo Models Resnet Input shape

I am using the following code to load a ResNet-50, but since the input is video I am not sure what the expected input shape is. Is it [batch_size, channels, frames, img1, img2]?

Any help would be fantastic.

import pytorchvideo.models.resnet
import torch.nn as nn

def resnet():
  return pytorchvideo.models.resnet.create_resnet(
      input_channel=3,     # RGB input from Kinetics
      model_depth=50,      # For the tutorial let's just use a 50 layer network
      model_num_class=400, # Kinetics has 400 classes so we need out final head to align
      norm=nn.BatchNorm3d,
      activation=nn.ReLU,
  )
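
A hedged sanity check of the layout: pytorchvideo's ResNet consumes 5D clips shaped (batch, channels, frames, height, width), so the last two dimensions are the spatial ones rather than two images (the 8-frame, 224-pixel clip below is an arbitrary choice):

import torch

model = resnet()
clip = torch.randn(2, 3, 8, 224, 224)   # B, C, T, H, W
logits = model(clip)
print(logits.shape)                     # expected: torch.Size([2, 400])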


from Pytorchvideo Models Resnet Input shape

Remove pincushion lens distortion in Python

I have the following image which is computer generated

[image: computer-generated input]

It is fed as input to an optical experiment, which results in the following image:

[image: distorted output of the optical experiment]

As you can tell, the image has a double concave effect due to the lens system being used.

I need to be able to restore the image without distortion and compare it with the original. I'm new to image processing, and I came across two useful Python packages:

https://pypi.org/project/defisheye/

defisheye was quite straightforward for me to use (script below), but I'm not able to achieve an optimal result so far.

from defisheye import Defisheye

dtype = 'linear'
format = 'fullframe'
fov = 11
pfov = 10

img = "input_distorted.jpg"
img_out = "input_distorted_corrected.jpg"

obj = Defisheye(img, dtype=dtype, format=format, fov=fov, pfov=pfov)

# To save image locally 
obj.convert(outfile=img_out)

Secondly, from OpenCV: https://docs.opencv.org/4.x/dc/dbb/tutorial_py_calibration.html. The camera calibration tutorial is way beyond my knowledge. If someone could assure me that's the way to go, I can start digging in deeper. I'd really appreciate any suggestions.
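
The calibration tutorial targets real cameras with chessboard images; for a single synthetic image, a hedged shortcut is to hand-tune the radial term k1 of OpenCV's distortion model until straight lines straighten (the camera matrix below is a rough guess: focal length about the image width, principal point at the center):

import cv2
import numpy as np

img = cv2.imread('input_distorted.jpg')
h, w = img.shape[:2]
K = np.array([[w, 0, w / 2],
              [0, w, h / 2],
              [0, 0, 1]], dtype=np.float64)
k1 = 0.1  # tune sign and magnitude; pincushion and barrel pull in opposite directions
dist = np.array([k1, 0.0, 0.0, 0.0], dtype=np.float64)
undistorted = cv2.undistort(img, K, dist)
cv2.imwrite('input_distorted_corrected.jpg', undistorted)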



from Remove pincushion lens distortion in Python

Injecting JavaScript variables to custom Wordpress block editor

I am creating a few custom blocks in a theme, and I have a problem: sometimes I need to pass custom data to my blocks in the block editor.

In other words, I need data to be available in the blocks/myBlock/index.js (the editor script).

I register my blocks with the recommended new way, register_block_type(__DIR__ . '/build/blocks/myBlock');, which basically loads the block.json file that then registers all the editor, frontend, and render scripts defined there.

In my case it is composed of:

"editorScript": "file:./index.js",
  "style": [
    "file:./style.css"
  ],
  "render": "file:./render.php"

One would think I could use the function wp_add_inline_script in the admin_enqueue_scripts hook, but it does not seem to work. The hook is triggered, but no inline script is added. My best guess after some investigation is that block scripts are loaded too early, and wp_add_inline_script is triggered after the script has already been loaded, or something like that, according to comments in the official documentation: https://developer.wordpress.org/reference/functions/wp_add_inline_script/#comment-5828

Example:

add_action('admin_enqueue_scripts', function () {
    wp_add_inline_script('myBlock-editor-script-js', 'window.myBlockConfig = ' . json_encode(array(
        'themeDir' => THEME_DIR,
        'themeUrl' => THEME_URL,
        'themeName' => THEME_NAME,
        'themeVersion' => THEME_VERSION,
    )), 'before');
});

Even brute-forcing the script in via the admin_head hook, as a comment suggested (even though it used wp_footer as the example), does not seem to work. I can then see my inline script loaded, but it is loaded after the block editor script, and by then none of the data made accessible via the inline script is reachable.

Example:

add_action('admin_head', function () {
    echo '<script>window.myBlockConfig = ' . json_encode(array(
     'themeDir' => THEME_DIR,
     'themeUrl' => THEME_URL,
     'themeName' => THEME_NAME,
     'themeVersion' => THEME_VERSION,
    )) . '</script>';
});

[screenshot: inline script loaded after the scripts that need the data]

So what would be the "correct" way to do this?

UPDATE:

The only way I've found to solve this is using the WordPress REST API, e.g.:

function myBlockRestApiGetConfig($request)
{
    $response = array(
      'themeDir' => THEME_DIR,
      'themeURL' => THEME_URL,
      'themeName' => THEME_NAME,
      'themeVersion' => THEME_VERSION
    );

    return rest_ensure_response($response);
}

add_action('rest_api_init', function () {
    register_rest_route('myBlock/v1', '/config', array(
      'methods' => 'GET',
      'callback' => 'myBlockRestApiGetConfig',
    ));
});

And then in my block's editor script I can fetch it:

const config = await apiFetch({
   path: `/myBlock/v1/config`,
});

But the question still stands: what would be the "correct" way to do this? Maybe it is better to use the API? The React-based editor is very API-centric, so it makes sense, but "preloading" the config makes it faster. So there are pros and cons, I guess.

I still find it strange that no hook seems able to load any script tags before the block scripts.
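
One more avenue worth testing, sketched under assumptions (the block name 'my-theme/my-block' is a placeholder): block.json assets get auto-generated script handles, and generate_block_asset_handle() resolves the editorScript handle, so attaching the inline script to that exact handle on enqueue_block_editor_assets may land it before the editor bundle:

add_action('enqueue_block_editor_assets', function () {
    // Resolve the handle WordPress generated for "editorScript" in block.json.
    $handle = generate_block_asset_handle('my-theme/my-block', 'editorScript');
    wp_add_inline_script($handle, 'window.myBlockConfig = ' . wp_json_encode(array(
        'themeDir' => THEME_DIR,
        'themeUrl' => THEME_URL,
    )), 'before');
});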

Thank you for your time :-)



from Injecting JavaScript variables to custom Wordpress block editor

Calculating visible area in infinite webpage

I'm using the @panzoom/panzoom package to create an infinite webpage with items placed on it. When the user pans, I need to lazy-load items based on their x and y coordinates. To achieve that, I need to calculate the visible area on the screen.

Working example: Codesandbox

const lazyload = e => {
    if (!panzoomInstance || !panzoomElem) return false;

    const scale = panzoomInstance.getScale();
    const pan = panzoomInstance.getPan();
    const { width, height } = document
      .getElementById("container")
      .getBoundingClientRect();

    const x1 = (0 - pan.x) / scale;
    const y1 = (0 - pan.y) / scale;
    const x2 = (width - pan.x) / scale;
    const y2 = (height - pan.y) / scale;

    const visibleArea = {
      x1: Math.floor(x1 * scale),
      y1: Math.floor(y1 * scale),
      x2: Math.floor(x2 * scale),
      y2: Math.floor(y2 * scale)
    };

    const itemsToLoad = items.filter(
      i =>
        i.x >= visibleArea.x1 &&
        i.x <= visibleArea.x2 &&
        i.y >= visibleArea.y1 &&
        i.y <= visibleArea.y2
    );

    console.log(`scale ${scale}`);
    console.log(`pan ${JSON.stringify(pan)}`);
    console.log("visibleArea", visibleArea);

    itemsToLoad.map(i => console.log(i.x, i.y));

    console.log(`found ${itemsToLoad.length} \n\n`);
};

Issue: the above calculation of the visible area works fine when the scale is >= 1. Anything less than one results in a wrong calculation, meaning the x and y coords I get do not cover the entire visible area.
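
A hedged observation about the math: x1..y2 are already divided by scale once, which puts them in the same pre-scale content space as the item coordinates; multiplying them by scale again shrinks the rectangle exactly when scale < 1. Comparing in content space directly would look like this:

const visibleArea = {
  x1: Math.floor((0 - pan.x) / scale),
  y1: Math.floor((0 - pan.y) / scale),
  x2: Math.ceil((width - pan.x) / scale),
  y2: Math.ceil((height - pan.y) / scale)
};

(Whether pan should be applied before or after the division also depends on @panzoom/panzoom's transform order, so it's worth verifying against a known item at scale 0.5.)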



from Calculating visible area in infinite webpage

Wednesday, 26 April 2023

Why is Mace4 valuation not working in my Python program?

Here is a very simple example of the use of Mace4, taken directly from the NLTK Web site:

from nltk.sem import Expression
from nltk.inference import MaceCommand

read_expr = Expression.fromstring
a = read_expr('(see(mary,john) & -(mary = john))')
mb = MaceCommand(assumptions=[a])
mb.build_model()
print(mb.valuation)

When I print the return value of mb.build_model() instead of mb.valuation, I get True, so the model is correctly built. But when I call print(mb.valuation) I get:

Traceback (most recent call last):
  File "test2.py", line 8, in <module>
    print(mb.valuation)
  File "/Users/yannis/miniconda3/lib/python3.8/site-packages/nltk/inference/mace.py", line 51, in valuation
    return mbc.model("valuation")
  File "/Users/yannis/miniconda3/lib/python3.8/site-packages/nltk/inference/api.py", line 355, in model
    return self._decorate_model(self._model, format)
  File "/Users/yannis/miniconda3/lib/python3.8/site-packages/nltk/inference/mace.py", line 185, in _decorate_model
    return self._convert2val(valuation_str)
  File "/Users/yannis/miniconda3/lib/python3.8/site-packages/nltk/inference/mace.py", line 60, in _convert2val
    valuation_standard_format = self._transform_output(valuation_str, "standard")
  File "/Users/yannis/miniconda3/lib/python3.8/site-packages/nltk/inference/mace.py", line 206, in _transform_output
    return self._call_interpformat(valuation_str, [format])[0]
  File "/Users/yannis/miniconda3/lib/python3.8/site-packages/nltk/inference/mace.py", line 220, in _call_interpformat
    self._interpformat_bin = self._modelbuilder._find_binary(
  File "/Users/yannis/miniconda3/lib/python3.8/site-packages/nltk/inference/prover9.py", line 177, in _find_binary
    return nltk.internals.find_binary(
  File "/Users/yannis/miniconda3/lib/python3.8/site-packages/nltk/internals.py", line 675, in find_binary
    return next(
  File "/Users/yannis/miniconda3/lib/python3.8/site-packages/nltk/internals.py", line 661, in find_binary_iter
    yield from find_file_iter(
  File "/Users/yannis/miniconda3/lib/python3.8/site-packages/nltk/internals.py", line 620, in find_file_iter
    raise LookupError(f"\n\n{div}\n{msg}\n{div}")
LookupError: 

===========================================================================
NLTK was unable to find the interpformat file!
Use software specific configuration parameters or set the PROVER9 environment variable.

  Searched in:
    - /usr/local/bin/prover9
    - /usr/local/bin/prover9/bin
    - /usr/local/bin
    - /usr/bin
    - /usr/local/prover9
    - /usr/local/share/prover9

  For more information on interpformat, see:
    <https://www.cs.unm.edu/~mccune/prover9/>
===========================================================================

I installed Prover9 through brew install and everything went fine. Am I doing something wrong?
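
A hedged sketch of the usual fix: NLTK's Prover9/Mace4 wrapper looks for the helper binaries (including interpformat) in the directory named by the PROVER9 environment variable, so pointing it at the directory where Homebrew placed them should unblock valuation:

import os

# Adjust to where `which interpformat` says the binary lives
# ('/opt/homebrew/bin' on Apple Silicon, '/usr/local/bin' on Intel Macs).
os.environ['PROVER9'] = '/usr/local/bin'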



from Why is Mace4 valuation not working in my Python program?

Convert character to ASCII code in JavaScript

How can I convert a character to its ASCII code using JavaScript?

For example:

get 10 from "\n".
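
A minimal example: charCodeAt returns the UTF-16 code unit at the given index, and codePointAt also covers characters outside the Basic Multilingual Plane:

"\n".charCodeAt(0);   // 10
"A".charCodeAt(0);    // 65
"😀".codePointAt(0);  // 128512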



from Convert character to ASCII code in JavaScript

How create link with text and paste it to telegram desktop

I'm trying to create a "copy" button for using the clipboard across different apps.

I expect that:

  1. ctrl+v in simple text editor will create plain text
  2. ctrl+v in RTF (or in app with "link" support) will create link

Here is a simplified code example:

const aElement = document.createElement('a');
aElement.href = 'https://stackoverflow.com/';
aElement.innerText = 'stackoverflow link';

const data = [
 new ClipboardItem({
  'text/plain': new Blob([aElement.innerText], {type: 'text/plain'}),
  'text/html': new Blob([aElement.outerHTML], {type: 'text/html'}),
 })
];

navigator.clipboard.write(data);

The example works fine everywhere except the Telegram Desktop app. I have tried every variation of the Blob and ClipboardItem options known to me.

I have also tried copying links created in Telegram Desktop, and they paste as links everywhere! The structure I see when I copy a link from Telegram Desktop is similar to mine.

Here is a simplified debug example:

document.body.onclick = () => window.navigator.clipboard.read()
 .then(r => r[0])
 .then(r => r.types.map(t => r.getType(t).then(b => b.text())))
 .then(r => Promise.all(r))
 .then(p => console.log(p))

What am I doing wrong?



from How create link with text and paste it to telegram desktop

How to avoid positioning ChartJS's tooltip on specific datasets on a chart?

I'm using Chart.js to display stripes (ranges), and inside each stripe I display a line for the median value. I'm hiding the stripes in both the legend and the tooltip list. That's done by adding the string "HIDE!" to the dataset label and then filtering those items out through the tooltip and legend options.

I've configured the tooltip to be positioned on the nearest vector when I move the mouse over the chart, but that should only happen on the line datasets, not on the stripe datasets. Examples below:

Tooltip using a stripe as nearest vector (this is NOT ok):

[screenshot: tooltip anchored to a stripe]

Tooltip using a line as nearest vector (this is ok):

[screenshot: tooltip anchored to a line point]

So the question is: how can I avoid positioning the tooltip on the stripes' vectors and only use the lines' vectors for that? It seems the filter option only hides items from the tooltip list itself, not from the chart when it comes to positioning the tooltip div.

Here's my current code (you can check the jsfiddle as well):

<canvas id="rainbow"></canvas>
<script src="https://cdn.jsdelivr.net/npm/chart.js@4.2.1/dist/chart.umd.min.js"></script>
<script>
var chart;
var aspectRatio = 2;
if(window.innerWidth < 600) {
    aspectRatio = 1;
}
var chartdata = {
    type: "line",
    data: {
        //months
        labels: ["0","1","2","3","4","5","6","7","8","9","10","11","12","13","14","15","16","17","18","19","20","21","22","23","24","25","26","27","28","29","30","31","32","33","34","35","36","37","38","39","40","41","42","43","44","45","46","47","48","49","50","51","52","53","54","55","56","57","58","59","60"],
        datasets : [

            //##################################################

            {
                //tracking (white dotted line)
                type: 'line',
                data: [50.9,54.6,,62,,,68.3,68.9,70.5,73,74.4,,,77.5,79.7,,,82.4,,,85.7,,,88.9,90.6,90.1,,92,93,,,95.6,96.4,,99.1,98.9,,,,,,103.4,104.2,,,,,108.7,,,110.7,111.3,,111.6,,,113.5,114.1,115.7,116.3,],
                label: "name",
                borderColor: "#ffffff",
                backgroundColor: "#ffffff",
                pointRadius: 0,
                pointHitRadius: 1,
                borderWidth: 3,
                spanGaps: true,
                borderDash: [10,5]
            },

            //##################################################

            {
                //3 SD line [7]
                data: [54.7,59.5,63.2,66.1,68.6,70.7,72.5,74.2,75.8,77.4,78.9,80.3,81.7,83.1,84.4,85.7,87.0,88.2,89.4,90.6,91.7,92.9,94.0,95.0,95.8,96.4,97.4,98.4,99.4,100.3,101.3,102.2,103.1,103.9,104.8,105.6,106.5,107.3,108.1,108.9,109.7,110.5,111.2,112.0,112.7,113.5,114.2,114.9,115.7,116.4,117.1,117.7,118.4,119.1,119.8,120.4,121.1,121.8,122.4,123.1,123.7],
                label: "3 SD",
                backgroundColor: "#8ac4e8",
                borderColor: "#8ac4e8",
                pointRadius: 0,
                pointHitRadius: 5,
                borderWidth: 2
            },
            {
                //2 SD line [6]
                data: [52.9,57.6,61.1,64.0,66.4,68.5,70.3,71.9,73.5,75.0,76.4,77.8,79.2,80.5,81.7,83.0,84.2,85.4,86.5,87.6,88.7,89.8,90.8,91.9,92.6,93.1,94.1,95.0,96.0,96.9,97.7,98.6,99.4,100.3,101.1,101.9,102.7,103.4,104.2,105.0,105.7,106.4,107.2,107.9,108.6,109.3,110.0,110.7,111.3,112.0,112.7,113.3,114.0,114.6,115.2,115.9,116.5,117.1,117.7,118.3,118.9],
                label: "2 SD",
                backgroundColor: "#80a3e6",
                borderColor: "#80a3e6",
                pointRadius: 0,
                pointHitRadius: 5,
                borderWidth: 2
            },
            {
                //1 SD line [5]
                data: [51.0,55.6,59.1,61.9,64.3,66.2,68.0,69.6,71.1,72.6,73.9,75.3,76.6,77.8,79.1,80.2,81.4,82.5,83.6,84.7,85.7,86.7,87.7,88.7,89.3,89.9,90.8,91.7,92.5,93.4,94.2,95.0,95.8,96.6,97.4,98.1,98.9,99.6,100.3,101.0,101.7,102.4,103.1,103.8,104.5,105.1,105.8,106.4,107.0,107.7,108.3,108.9,109.5,110.1,110.7,111.3,111.9,112.5,113.0,113.6,114.2],
                label: "1 SD",
                backgroundColor: "#7780e4",
                borderColor: "#7780e4",
                pointRadius: 0,
                pointHitRadius: 5,
                borderWidth: 2
            },
            {
                //Median [4]
                data: [49.1,53.7,57.1,59.8,62.1,64.0,65.7,67.3,68.7,70.1,71.5,72.8,74.0,75.2,76.4,77.5,78.6,79.7,80.7,81.7,82.7,83.7,84.6,85.5,86.1,86.6,87.4,88.3,89.1,89.9,90.7,91.4,92.2,92.9,93.6,94.4,95.1,95.7,96.4,97.1,97.7,98.4,99.0,99.7,100.3,100.9,101.5,102.1,102.7,103.3,103.9,104.5,105.0,105.6,106.2,106.7,107.3,107.8,108.4,108.9,109.4],
                label: "Median",
                backgroundColor: "#8c77e4",
                borderColor: "#8c77e4",
                pointRadius: 0,
                pointHitRadius: 5,
                borderWidth: 2
            },
            {
                //-1 SD line [3]
                data: [47.3,51.7,55.0,57.7,59.9,61.8,63.5,65.0,66.4,67.7,69.0,70.3,71.4,72.6,73.7,74.8,75.8,76.8,77.8,78.8,79.7,80.6,81.5,82.3,82.9,83.3,84.1,84.9,85.7,86.4,87.1,87.9,88.6,89.3,89.9,90.6,91.2,91.9,92.5,93.1,93.8,94.4,95.0,95.6,96.2,96.7,97.3,97.9,98.4,99.0,99.5,100.1,100.6,101.1,101.6,102.2,102.7,103.2,103.7,104.2,104.7],
                label: "-1 SD",
                backgroundColor: "#aa78e5",
                borderColor: "#aa78e5",
                pointRadius: 0,
                pointHitRadius: 5,
                borderWidth: 2
            },
            {
                //-2 SD line [2]
                data: [45.4,49.8,53.0,55.6,57.8,59.6,61.2,62.7,64.0,65.3,66.5,67.7,68.9,70.0,71.0,72.0,73.0,74.0,74.9,75.8,76.7,77.5,78.4,79.2,79.7,80.0,80.8,81.5,82.2,82.9,83.6,84.3,84.9,85.6,86.2,86.8,87.4,88.0,88.6,89.2,89.8,90.4,90.9,91.5,92.0,92.5,93.1,93.6,94.1,94.6,95.1,95.6,96.1,96.6,97.1,97.6,98.1,98.5,99.0,99.5,99.9],
                label: "-2 SD",
                backgroundColor: "#c97be5",
                borderColor: "#c97be5",
                pointRadius: 0,
                pointHitRadius: 5,
                borderWidth: 2
            },
            {
                //-3 SD line [1]
                data: [43.6,47.8,51.0,53.5,55.6,57.4,58.9,60.3,61.7,62.9,64.1,65.2,66.3,67.3,68.3,69.3,70.2,71.1,72.0,72.8,73.7,74.5,75.2,76.0,76.4,76.8,77.5,78.1,78.8,79.5,80.1,80.7,81.3,81.9,82.5,83.1,83.6,84.2,84.7,85.3,85.8,86.3,86.8,87.4,87.9,88.4,88.9,89.3,89.8,90.3,90.7,91.2,91.7,92.1,92.6,93.0,93.4,93.9,94.3,94.7,95.2],
                label: "-3 SD",
                backgroundColor: "#dc7db7",
                borderColor: "#dc7db7",
                pointRadius: 0,
                pointHitRadius: 5,
                borderWidth: 2
            },

            //##################################################

            {
                //3 SD stripe [7]
                data: [55.6,60.45,64.25,67.15,69.7,71.8,73.6,75.35,76.95,78.6,80.15,81.55,82.95,84.4,85.75,87.05,88.4,89.6,90.85,92.1,93.2,94.45,95.6,96.55,97.4,98.05,99.05,100.1,101.1,102,103.1,104,104.95,105.7,106.65,107.45,108.4,109.25,110.05,110.85,111.7,112.55,113.2,114.05,114.75,115.6,116.3,117,117.9,118.6,119.3,119.9,120.6,121.35,122.1,122.65,123.4,124.15,124.75,125.5,126.1],
                label: "HIDE!3 SD",
                borderColor: "#7bb5d9",
                backgroundColor: "#7bb5d9",
                pointRadius: 0,
                pointHitRadius: 0,
                pointHoverRadius: 0,
                borderWidth: 1,
                fill: 9
            },
            {
                //2 SD stripe [6]
                data: [53.8,58.55,62.15,65.05,67.5,69.6,71.4,73.05,74.65,76.2,77.65,79.05,80.45,81.8,83.05,84.35,85.6,86.8,87.95,89.1,90.2,91.35,92.4,93.45,94.2,94.75,95.75,96.7,97.7,98.6,99.5,100.4,101.25,102.1,102.95,103.75,104.6,105.35,106.15,106.95,107.7,108.45,109.2,109.95,110.65,111.4,112.1,112.8,113.5,114.2,114.9,115.5,116.2,116.85,117.5,118.15,118.8,119.45,120.05,120.7,121.3],
                label: "HIDE!2 SD",
                borderColor: "#7194d7",
                backgroundColor: "#7194d7",
                pointRadius: 0,
                pointHitRadius: 0,
                pointHoverRadius: 0,
                borderWidth: 1,
                fill: 10
            },
            {
                //1 SD stripe [5]
                data: [51.95,56.6,60.1,62.95,65.35,67.35,69.15,70.75,72.3,73.8,75.15,76.55,77.9,79.15,80.4,81.6,82.8,83.95,85.05,86.15,87.2,88.25,89.25,90.3,90.95,91.5,92.45,93.35,94.25,95.15,95.95,96.8,97.6,98.45,99.25,100,100.8,101.5,102.25,103,103.7,104.4,105.15,105.85,106.55,107.2,107.9,108.55,109.15,109.85,110.5,111.1,111.75,112.35,112.95,113.6,114.2,114.8,115.35,115.95,116.55],
                label: "HIDE!1 SD",
                borderColor: "#6871d5",
                backgroundColor: "#6871d5",
                pointRadius: 0,
                pointHitRadius: 0,
                pointHoverRadius: 0,
                borderWidth: 1,
                fill: 11
            },

            //##################################################

            {
                //Median stripe [4]
                data: [50.05,54.65,58.1,60.85,63.2,65.1,66.85,68.45,69.9,71.35,72.7,74.05,75.3,76.5,77.75,78.85,80,81.1,82.15,83.2,84.2,85.2,86.15,87.1,87.7,88.25,89.1,90,90.8,91.65,92.45,93.2,94,94.75,95.5,96.25,97,97.65,98.35,99.05,99.7,100.4,101.05,101.75,102.4,103,103.65,104.25,104.85,105.5,106.1,106.7,107.25,107.85,108.45,109,109.6,110.15,110.7,111.25,111.8],
                label: "HIDE!Median",
                borderColor: "#7d68d5",
                backgroundColor: "#7d68d5",
                pointRadius: 0,
                pointHitRadius: 0,
                pointHoverRadius: 0,
                borderWidth: 1,
                fill: 12
            },

            //##################################################

            {
                //-1 SD stripe [3]
                data: [48.2,52.7,56.05,58.75,61,62.9,64.6,66.15,67.55,68.9,70.25,71.55,72.7,73.9,75.05,76.15,77.2,78.25,79.25,80.25,81.2,82.15,83.05,83.9,84.5,84.95,85.75,86.6,87.4,88.15,88.9,89.65,90.4,91.1,91.75,92.5,93.15,93.8,94.45,95.1,95.75,96.4,97,97.65,98.25,98.8,99.4,100,100.55,101.15,101.7,102.3,102.8,103.35,103.9,104.45,105,105.5,106.05,106.55,107.05],
                label: "HIDE!-1 SD",
                borderColor: "#9b69d6",
                backgroundColor: "#9b69d6",
                pointRadius: 0,
                pointHitRadius: 0,
                pointHoverRadius: 0,
                borderWidth: 1,
                fill: 13
            },
            {
                //-2 SD stripe [2]
                data: [46.35,50.75,54,56.65,58.85,60.7,62.35,63.85,65.2,66.5,67.75,69,70.15,71.3,72.35,73.4,74.4,75.4,76.35,77.3,78.2,79.05,79.95,80.75,81.3,81.65,82.45,83.2,83.95,84.65,85.35,86.1,86.75,87.45,88.05,88.7,89.3,89.95,90.55,91.15,91.8,92.4,92.95,93.55,94.1,94.6,95.2,95.75,96.25,96.8,97.3,97.85,98.35,98.85,99.35,99.9,100.4,100.85,101.35,101.85,102.3],
                label: "HIDE!-2 SD",
                borderColor: "#ba6cd6",
                backgroundColor: "#ba6cd6",
                pointRadius: 0,
                pointHitRadius: 0,
                pointHoverRadius: 0,
                borderWidth: 1,
                fill: 14
            },
            {
                //-3 SD stripe [1]
                data: [44.5,48.8,52,54.55,56.7,58.5,60.05,61.5,62.85,64.1,65.3,66.45,67.6,68.65,69.65,70.65,71.6,72.55,73.45,74.3,75.2,76,76.8,77.6,78.05,78.4,79.15,79.8,80.5,81.2,81.85,82.5,83.1,83.75,84.35,84.95,85.5,86.1,86.65,87.25,87.8,88.35,88.85,89.45,89.95,90.45,91,91.45,91.95,92.45,92.9,93.4,93.9,94.35,94.85,95.3,95.75,96.2,96.65,97.1,97.55],
                label: "HIDE!-3 SD",
                borderColor: "#cd6ea8",
                backgroundColor: "#cd6ea8",
                pointRadius: 0,
                pointHitRadius: 0,
                pointHoverRadius: 0,
                borderWidth: 1,
                fill: 15
            },

            //##################################################

            {
                //artificial base
                data: [42.7,46.8,50,52.45,54.5,56.3,57.75,59.1,60.55,61.7,62.9,63.95,65,65.95,66.95,67.95,68.8,69.65,70.55,71.3,72.2,73,73.6,74.4,74.75,75.2,75.85,76.4,77.1,77.8,78.35,78.9,79.5,80.05,80.65,81.25,81.7,82.3,82.75,83.35,83.8,84.25,84.75,85.35,85.85,86.35,86.8,87.15,87.65,88.15,88.5,89,89.5,89.85,90.35,90.7,91.05,91.6,91.95,92.3,92.85],
                label: "HIDE!lower",
                borderColor: "transparent",
                backgroundColor: "transparent",
                pointRadius: 0,
                pointHitRadius: 5,
                borderWidth: 1,
                fill: false
            }
        ]
    },

    //configuration options
    options: {
        animation: true,
        aspectRatio: aspectRatio,
        interaction: {
            intersect: false,
            mode: 'index',
            axis: 'x'
        },
        scales: {
            x: {
                label: "time",
                grid: {
                    display: true
                },

            },
            y: {
                label: "measure",
                grid: {
                    display: true
                },
                stacked: false,
                beginAtZero: false,
                position: "right",
                ticks: {
                    callback: function(val, index) {
                        return parseInt(val) + ' cm';
                    }
                }
            }
        },
        plugins: {
            tooltip: {
                position: 'nearest',
                caretSize: 7,
                caretPadding: 10,
                filter: function (tooltipItem, data) {
                    if(tooltipItem.dataset.label.includes('HIDE!')) {
                        return false;
                    }
                    return true;
                }
            },
            legend: {
                display: true,
                position: 'top',
                labels: {
                    font: {
                        size: 14,
                        family: 'Inter, "sans-serif"'
                    },
                    filter: function(item, chart) {
                        return !item.text.includes('HIDE!');
                    }
                }
            }
        }
    }
}
const orgdata = JSON.parse(JSON.stringify(chartdata));

//draw chart
document.addEventListener('DOMContentLoaded', function(event) {
    var ctx = document.getElementById('rainbow').getContext('2d');
    chart = new Chart(ctx, chartdata);
});
</script>


from How to avoid positioning ChartJS's tooltip on specific datasets on a chart?

Google Analytics 4 gtm.js not firing

I want to track submissions of Hubspot forms embedded in our contact pages with Google Analytics 4 & Google Tag Manager.

Following this guide, I've created:

• A new GTM HTML tag called "Website Hubspot Form Submission Tracking" with the custom HTML in step 1 above, set to fire on two Triggers: the Contact Sales Page (URL path contains contact/sales) and the Contact Support Page (URL path contains contact/support):

<script type="text/javascript">
  window.addEventListener("message", function(event) {
    if(event.data.type === 'hsFormCallback' && event.data.eventName === 'onFormSubmitted') {
      window.dataLayer.push({
        'event': 'hubspot-form-submit',
        'hs-form-guid': event.data.id
      });
    }
  });
</script>

• A new GTM trigger called "Hubspot Form Submitted" with event name hubspot-form-submit that matches the window.dataLayer.push({'event': 'hubspot-form-submit', ...}) code added above.

• A new GTM data layer variable called "Hubspot Form GUID" with the variable name hs-form-guid.

• A new GTM event tag, selecting the correct GA4 config variable, with event name website_contact_us_submitted to be triggered with the "Hubspot Form Submitted" trigger.

I've published these GTM changes.

If I select the "Website Hubspot Form Submission Tracking" tag and debug it, and visit our Contact Sales page, I see

Website Hubspot Form Submission Tracking Not Fired

If I open the Tag Not Fired, I see:

(screenshot omitted)

From this article, I see that declaring the dataLayer after the GTM container causes the array to be reset, and the recommended fix is to ensure data is pushed onto the existing dataLayer using dataLayer.push.

However, the code sending data to the dataLayer in the first article I linked to does just that.

  <script type="text/javascript">
    window.addEventListener("message", function(event) {               
      if(event.data.type === 'hsFormCallback' && event.data.eventName === 'onFormSubmitted') {
        window.dataLayer.push({
          'event': 'hubspot-form-submit',
          'hs-form-guid': event.data.id
        });
      }});
  </script>

Why is the gtm.js event not firing on the Contact Support & Contact Sales pages?

Help appreciated.



from Google Analytics 4 gtm.js not firing

Tuesday, 25 April 2023

Don't change textinputlayout hint color while setting error

I am implementing a form using TextInputLayout in Android.

I don't want the hint text color to change while setting an error on the TextInputLayout, as in the code below with error enabled:

textInputLayout.setErrorEnabled(true)
textInputLayout.setError("this field is required ")

// As of now this code changes both the error message and the hint color to red. I don't want the hint to turn red; only the error message should be red.

I want the hint "nickname" to stay blue and the error message to be red.



from Don't change textinputlayout hint color while setting error

Converting JSON/dict to flatten string with indicator tokens

Given an input like:

{'example_id': 0,
 'query': ' revent 80 cfm',
 'query_id': 0,
 'product_id': 'B000MOO21W',
 'product_locale': 'us',
 'esci_label': 'I',
 'small_version': 0,
 'large_version': 1,
 'split': 'train',
 'product_title': 'Panasonic FV-20VQ3 WhisperCeiling 190 CFM Ceiling Mounted Fan',
 'product_description': None,
 'product_bullet_point': 'WhisperCeiling fans feature a totally enclosed condenser motor and a double-tapered, dolphin-shaped bladed blower wheel to quietly move air\nDesigned to give you continuous, trouble-free operation for many years thanks in part to its high-quality components and permanently lubricated motors which wear at a slower pace\nDetachable adaptors, firmly secured duct ends, adjustable mounting brackets (up to 26-in), fan/motor units that detach easily from the housing and uncomplicated wiring all lend themselves to user-friendly installation\nThis Panasonic fan has a built-in damper to prevent backdraft, which helps to prevent outside air from coming through the fan\n0.35 amp',
 'product_brand': 'Panasonic',
 'product_color': 'White'}

The goal is to output something that looks like:

Panasonic FV-20VQ3 WhisperCeiling 190 CFM Ceiling Mounted Fan [TITLE] Panasonic [BRAND] White [COLOR] WhisperCeiling fans feature a totally enclosed condenser motor and a double-tapered, dolphin-shaped bladed blower wheel to quietly move air [SEP] Designed to give you continuous, trouble-free operation for many years thanks in part to its high-quality components and permanently lubricated motors which wear at a slower pace [SEP] Detachable adaptors, firmly secured duct ends, adjustable mounting brackets (up to 26-in), fan/motor units that detach easily from the housing and uncomplicated wiring all lend themselves to user-friendly installation [SEP] This Panasonic fan has a built-in damper to prevent backdraft, which helps to prevent outside air from coming through the fan [SEP] 0.35 amp [BULLETPOINT]

There are a few operations needed to generate the desired output, following these rules:

  • If a value in the dictionary is None, don't add its content to the output string
  • If a value contains newline \n characters, substitute them with [SEP] tokens
  • Concatenate the strings in the order the user specified, e.g. the output above follows the order ["product_title", "product_brand", "product_color", "product_bullet_point", "product_description"]

I've tried the following, which kind of works, but the function I've written looks a little too hardcoded in the way it loops through the wanted keys and concatenates and manipulates the strings.


item1 = {'example_id': 0,
 'query': ' revent 80 cfm',
 'query_id': 0,
 'product_id': 'B000MOO21W',
 'product_locale': 'us',
 'esci_label': 'I',
 'small_version': 0,
 'large_version': 1,
 'split': 'train',
 'product_title': 'Panasonic FV-20VQ3 WhisperCeiling 190 CFM Ceiling Mounted Fan',
 'product_description': None,
 'product_bullet_point': 'WhisperCeiling fans feature a totally enclosed condenser motor and a double-tapered, dolphin-shaped bladed blower wheel to quietly move air\nDesigned to give you continuous, trouble-free operation for many years thanks in part to its high-quality components and permanently lubricated motors which wear at a slower pace\nDetachable adaptors, firmly secured duct ends, adjustable mounting brackets (up to 26-in), fan/motor units that detach easily from the housing and uncomplicated wiring all lend themselves to user-friendly installation\nThis Panasonic fan has a built-in damper to prevent backdraft, which helps to prevent outside air from coming through the fan\n0.35 amp',
 'product_brand': 'Panasonic',
 'product_color': 'White'}

item2 = {'example_id': 198,
 'query': '# 2 pencils not sharpened',
 'query_id': 6,
 'product_id': 'B08KXRY4DG',
 'product_locale': 'us',
 'esci_label': 'S',
 'small_version': 1,
 'large_version': 1,
 'split': 'train',
 'product_title': 'AHXML#2 HB Wood Cased Graphite Pencils, Pre-Sharpened with Free Erasers, Smooth write for Exams, School, Office, Drawing and Sketching, Pack of 48',
 'product_description': "<b>AHXML#2 HB Wood Cased Graphite Pencils, Pack of 48</b><br><br>Perfect for Beginners experienced graphic designers and professionals, kids Ideal for art supplies, drawing supplies, sketchbook, sketch pad, shading pencil, artist pencil, school supplies. <br><br><b>Package Includes</b><br>- 48 x Sketching Pencil<br> - 1 x Paper Boxed packaging<br><br>Our high quality, hexagonal shape is super lightweight and textured, producing smooth marks that erase well, and do not break off when you're drawing.<br><br><b>If you have any question or suggestion during using, please feel free to contact us.</b>",
 'product_bullet_point': '#2 HB yellow, wood-cased pencils:Box of 48 count. Made from high quality real poplar wood and 100% genuine graphite pencil core. These No 2 pencils come with 100% Non-Toxic latex free pink top erasers.\nPRE-SHARPENED & EASY SHARPENING: All the 48 count pencils are pre-sharpened, ready to use when get it, saving your time of preparing.\nThese writing instruments are hexagonal in shape to ensure a comfortable grip when writing, scribbling, or doodling.\nThey are widely used in daily writhing, sketching, examination, marking, and more, especially for kids and teen writing in classroom and home.#2 HB wood-cased yellow pencils in bulk are ideal choice for school, office and home to maintain daily pencil consumption.\nCustomer service:If you are not satisfied with our product or have any questions, please feel free to contact us.',
 'product_brand': 'AHXML',
 'product_color': None}


def product2str(row, keys):
    key2token = {'product_title': '[TITLE]', 
     'product_brand': '[BRAND]', 
     'product_color': '[COLOR]',
     'product_bullet_point': '[BULLETPOINT]', 
     'product_description': '[DESCRIPTION]'}
    
    output = ""
    for k in keys:
        content = row[k]
        if content:
            output += content.replace('\n', ' [SEP] ') + f" {key2token[k]} "

    return output.strip()

product2str(item2, keys=['product_title', 'product_brand', 'product_color',
                        'product_bullet_point', 'product_description'])
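
For reference, here is a more data-driven variant I've sketched; it is still hand-rolled (not a native facility), and the flatten_product name and the idea of letting the key2token dict's insertion order drive the output are my own:

def flatten_product(row, key2token, sep=" [SEP] "):
    # Dict insertion order (Python 3.7+) defines the concatenation order.
    parts = []
    for key, token in key2token.items():
        content = row.get(key)
        if content:  # skips None and empty strings
            parts.append(content.replace("\n", sep) + " " + token)
    return " ".join(parts)

flatten_product(item2, {'product_title': '[TITLE]',
                        'product_brand': '[BRAND]',
                        'product_color': '[COLOR]',
                        'product_bullet_point': '[BULLETPOINT]',
                        'product_description': '[DESCRIPTION]'})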

Q: Is there some sort of native CPython JSON-to-str flattening function/recipe that can achieve results similar to the product2str function?

Q: Or is there already some function/pipeline in the tokenizers library https://pypi.org/project/tokenizers/ that can flatten a JSON/dict into tokens?



from Converting JSON/dict to flatten string with indicator tokens

How to use pipeline for multiple target language translations with M2M model in Huggingface?

The M2M model is trained on ~100 languages and is able to translate between them, e.g.

from transformers import pipeline

m2m100 = pipeline('translation', 'facebook/m2m100_418M', src_lang='en', tgt_lang="de")
m2m100(["hello world", "foo bar"])

[out]:

[{'translation_text': 'Hallo Welt'}, {'translation_text': 'Die Fu Bar'}]

But to enable multiple target translations, the user has to initialize multiple pipelines:

from transformers import pipeline

m2m100_en_de = pipeline('translation', 'facebook/m2m100_418M', src_lang='en', tgt_lang="de")

m2m100_en_fr = pipeline('translation', 'facebook/m2m100_418M', src_lang='en', tgt_lang="fr")


print(m2m100_en_de(["hello world", "foo bar"]))
print(m2m100_en_fr(["hello world", "foo bar"]))

[out]:

[{'translation_text': 'Hallo Welt'}, {'translation_text': 'Die Fu Bar'}]
[{'translation_text': 'Bonjour Monde'}, {'translation_text': 'Le bar Fou'}]

Is there a way to use a single pipeline for multiple target languages and/or source languages for the M2M model?

I've tried this:

from transformers import pipeline

m2m100_en_defr = pipeline('translation', 'facebook/m2m100_418M', src_lang='en', tgt_lang=["de", "fr"])

print(m2m100_en_defr(["hello world", "foo bar"]))

But it throws the error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
/tmp/ipykernel_28/3374873260.py in <module>
      3 m2m100_en_defr = pipeline('translation', 'facebook/m2m100_418M', src_lang='en', tgt_lang=["de", "fr"])
      4 
----> 5 print(m2m100_en_defr(["hello world", "foo bar"]))

/opt/conda/lib/python3.7/site-packages/transformers/pipelines/text2text_generation.py in __call__(self, *args, **kwargs)
    364               token ids of the translation.
    365         """
--> 366         return super().__call__(*args, **kwargs)

/opt/conda/lib/python3.7/site-packages/transformers/pipelines/text2text_generation.py in __call__(self, *args, **kwargs)
    163         """
    164 
--> 165         result = super().__call__(*args, **kwargs)
    166         if (
    167             isinstance(args[0], list)

/opt/conda/lib/python3.7/site-packages/transformers/pipelines/base.py in __call__(self, inputs, num_workers, batch_size, *args, **kwargs)
   1088                     inputs, num_workers, batch_size, preprocess_params, forward_params, postprocess_params
   1089                 )
-> 1090                 outputs = list(final_iterator)
   1091                 return outputs
   1092             else:

/opt/conda/lib/python3.7/site-packages/transformers/pipelines/pt_utils.py in __next__(self)
    122 
    123         # We're out of items within a batch
--> 124         item = next(self.iterator)
    125         processed = self.infer(item, **self.params)
    126         # We now have a batch of "inferred things".

/opt/conda/lib/python3.7/site-packages/transformers/pipelines/pt_utils.py in __next__(self)
    122 
    123         # We're out of items within a batch
--> 124         item = next(self.iterator)
    125         processed = self.infer(item, **self.params)
    126         # We now have a batch of "inferred things".

/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py in __next__(self)
    626                 # TODO(https://github.com/pytorch/pytorch/issues/76750)
    627                 self._reset()  # type: ignore[call-arg]
--> 628             data = self._next_data()
    629             self._num_yielded += 1
    630             if self._dataset_kind == _DatasetKind.Iterable and \

/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py in _next_data(self)
    669     def _next_data(self):
    670         index = self._next_index()  # may raise StopIteration
--> 671         data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
    672         if self._pin_memory:
    673             data = _utils.pin_memory.pin_memory(data, self._pin_memory_device)

/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index)
     56                 data = self.dataset.__getitems__(possibly_batched_index)
     57             else:
---> 58                 data = [self.dataset[idx] for idx in possibly_batched_index]
     59         else:
     60             data = self.dataset[possibly_batched_index]

/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py in <listcomp>(.0)
     56                 data = self.dataset.__getitems__(possibly_batched_index)
     57             else:
---> 58                 data = [self.dataset[idx] for idx in possibly_batched_index]
     59         else:
     60             data = self.dataset[possibly_batched_index]

/opt/conda/lib/python3.7/site-packages/transformers/pipelines/pt_utils.py in __getitem__(self, i)
     17     def __getitem__(self, i):
     18         item = self.dataset[i]
---> 19         processed = self.process(item, **self.params)
     20         return processed
     21 

/opt/conda/lib/python3.7/site-packages/transformers/pipelines/text2text_generation.py in preprocess(self, truncation, src_lang, tgt_lang, *args)
    313         if getattr(self.tokenizer, "_build_translation_inputs", None):
    314             return self.tokenizer._build_translation_inputs(
--> 315                 *args, return_tensors=self.framework, truncation=truncation, src_lang=src_lang, tgt_lang=tgt_lang
    316             )
    317         else:

/opt/conda/lib/python3.7/site-packages/transformers/models/m2m_100/tokenization_m2m_100.py in _build_translation_inputs(self, raw_inputs, src_lang, tgt_lang, **extra_kwargs)
    351         self.src_lang = src_lang
    352         inputs = self(raw_inputs, add_special_tokens=True, **extra_kwargs)
--> 353         tgt_lang_id = self.get_lang_id(tgt_lang)
    354         inputs["forced_bos_token_id"] = tgt_lang_id
    355         return inputs

/opt/conda/lib/python3.7/site-packages/transformers/models/m2m_100/tokenization_m2m_100.py in get_lang_id(self, lang)
    379 
    380     def get_lang_id(self, lang: str) -> int:
--> 381         lang_token = self.get_lang_token(lang)
    382         return self.lang_token_to_id[lang_token]
    383 

/opt/conda/lib/python3.7/site-packages/transformers/models/m2m_100/tokenization_m2m_100.py in get_lang_token(self, lang)
    376 
    377     def get_lang_token(self, lang: str) -> str:
--> 378         return self.lang_code_to_token[lang]
    379 
    380     def get_lang_id(self, lang: str) -> int:

TypeError: unhashable type: 'list'

One would have expected the output to look something like this instead:

{"de": [{'translation_text': 'Hallo Welt'}, {'translation_text': 'Die Fu Bar'}],
 "fr": [{'translation_text': 'Bonjour Monde'}, {'translation_text': 'Le Foo Bar'}]
}

If we use multiple pipelines, is the model mmapped and shared? Will this initialize multiple models with multiple tokenizer pairs, or a single model with multiple tokenizers?
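
As a workaround, I've been sketching something like the following — this assumes that passing a preloaded model/tokenizer avoids loading the weights twice, and that TranslationPipeline accepts src_lang/tgt_lang as per-call arguments (which I believe recent transformers versions do):

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline

model_name = 'facebook/m2m100_418M'
# Load the weights and tokenizer once, then share them across calls.
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# The task is given explicitly since auto-inference needs a str model id.
translator = pipeline('translation', model=model, tokenizer=tokenizer)

sentences = ["hello world", "foo bar"]
outputs = {tgt: translator(sentences, src_lang='en', tgt_lang=tgt)
           for tgt in ["de", "fr"]}
print(outputs)

But I'm not sure whether this is the intended usage, or whether per-call tgt_lang is supported across versions.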



from How to use pipeline for multiple target language translations with M2M model in Huggingface?

GraphQL - Apollo printSchema error: TypeError: Cannot read properties of undefined (reading 'kind')

I am trying to use the printSchema function from Apollo to include @directives in the GQL Schema.

However, it keeps showing this error:

TypeError: Cannot read properties of undefined (reading 'kind')
    at KnownDirectivesRule (node_modules\graphql\validation\rules\KnownDirectivesRule.js:43:13)
    at \node_modules\graphql\validation\validate.js:90:12
    at Array.map (<anonymous>)
    at validateSDL (\node_modules\graphql\validation\validate.js:89:24)
    at buildSchemaFromSDL (\node_modules\@apollo\federation\node_modules\@apollo\subgraph\dist\schema-helper\buildSchemaFromSDL.js:143:47)
    at Object.buildSubgraphSchema (\node_modules\@apollo\federation\node_modules\@apollo\subgraph\dist\buildSubgraphSchema.js:26:57)
    at file:///finalizeProcessFunction.js:34:37
    at SchemaBuilder.applyHooks (\node_modules\graphile-build\node8plus\SchemaBuilder.js:264:20)
    at SchemaBuilder.buildSchema (\node_modules\graphile-build\node8plus\SchemaBuilder.js:340:33)
    at SchemaBuilder.watchSchema (\node_modules\graphile-build\node8plus\SchemaBuilder.js:408:34)

My schema is very long so I won't paste the whole file here, but basically it's just a normal GQL schema that has multiple types, then Query, Mutation and Subscription.

The directive that I am adding is @aws_subscribe under Subscription; it is used for AppSync subscriptions.

The code that triggers this error is:

let federatedSchema = printSchema.buildSubgraphSchema(schema);

The schema type is GraphQLSchema and it looks like this: (screenshot omitted)

What am I missing here?



from GraphQL - Apollo printSchema error: TypeError: Cannot read properties of undefined (reading 'kind')

How to use cross-encoder with Huggingface transformers pipeline?

There's a set of models on the Hugging Face hub that come from the sentence_transformers library, e.g. https://huggingface.co/cross-encoder/mmarco-mMiniLMv2-L12-H384-v1

The suggested usage examples are:

# Using sentence_transformers

from sentence_transformers import CrossEncoder

model_name = 'cross-encoder/mmarco-mMiniLMv2-L12-H384-v1'
model = CrossEncoder(model_name)
scores = model.predict([
  ['How many people live in Berlin?', 'How many people live in Berlin?'], 
  ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.']
])
scores

[out]:

array([ 0.36782095, -4.2674575 ], dtype=float32)

Or

# From transformers.

from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import pipeline
import torch

# cross-encoder/ms-marco-MiniLM-L-12-v2
model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/mmarco-mMiniLMv2-L12-H384-v1')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/mmarco-mMiniLMv2-L12-H384-v1')

features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'], 
                     ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'],  
                     padding=True, truncation=True, return_tensors="pt")

model.eval()
with torch.no_grad():
    scores = model(**features).logits
    print(scores)

[out]:

tensor([[10.7615],
        [-8.1277]])

If a user wants to use transformers.pipeline with these cross-encoder models, it throws an error:

from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import pipeline
import torch

# cross-encoder/ms-marco-MiniLM-L-12-v2
model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/mmarco-mMiniLMv2-L12-H384-v1')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/mmarco-mMiniLMv2-L12-H384-v1')

pipe = pipeline(model=model, tokenizer=tokenizer)

It throws an error:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
/tmp/ipykernel_108/785368641.py in <module>
----> 1 pipe = pipeline(model=model, tokenizer=tokenizer)

/opt/conda/lib/python3.7/site-packages/transformers/pipelines/__init__.py in pipeline(task, model, config, tokenizer, feature_extractor, image_processor, framework, revision, use_fast, use_auth_token, device, device_map, torch_dtype, trust_remote_code, model_kwargs, pipeline_class, **kwargs)
    711         if not isinstance(model, str):
    712             raise RuntimeError(
--> 713                 "Inferring the task automatically requires to check the hub with a model_id defined as a `str`."
    714                 f"{model} is not a valid model_id."
    715             )

RuntimeError: Inferring the task automatically requires to check the hub with a model_id defined as a `str`.

Q: How to use cross-encoder with Huggingface transformers pipeline?

Q: If a model_id is needed, is it possible to add the model_id as an arg or kwarg in pipeline?

There's a similar question Error: Inferring the task automatically requires to check the hub with a model_id defined as a `str`. AraBERT model but I'm not sure it's the same issue, since the other question is on 'aubmindlab/bert-base-arabertv02' but not the cross-encoder class of models from sentence_transformers.
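
For what it's worth, the error seems to come from the missing task argument: with no task given, pipeline tries to infer it from the hub, which requires a str model id. Below is a sketch of a workaround I've been trying — it assumes text-classification is the appropriate task for a cross-encoder and that the pipeline accepts {"text": ..., "text_pair": ...} inputs for sentence pairs:

from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

model_name = 'cross-encoder/mmarco-mMiniLMv2-L12-H384-v1'
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Naming the task explicitly avoids the hub lookup that needs a str model id.
pipe = pipeline('text-classification', model=model, tokenizer=tokenizer)

# Sentence pairs go in as dicts with "text" and "text_pair".
pairs = [{"text": "How many people live in Berlin?",
          "text_pair": "Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers."},
         {"text": "How many people live in Berlin?",
          "text_pair": "New York City is famous for the Metropolitan Museum of Art."}]

# function_to_apply="none" should return the raw logits as scores.
print(pipe(pairs, function_to_apply="none"))

But I'd still like to know whether this is the recommended way to run these models through pipeline.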



from How to use cross-encoder with Huggingface transformers pipeline?

Monday, 24 April 2023

ModalBottomSheet. Dragging by the specific view

I want dragging to be enabled only via the dragHandle element; in all other cases, it should be disabled.

xml:

<androidx.coordinatorlayout.widget.CoordinatorLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:id="@+id/mainBottomSheetLayout"
    android:background="@color/colorWhite">

<LinearLayout
    android:id="@+id/ModalBottomSheet"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:layout_margin="8dp"
    android:orientation="vertical">

    <com.google.android.material.bottomsheet.BottomSheetDragHandleView
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:id="@+id/dragHandle"/>

    <androidx.recyclerview.widget.RecyclerView
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:layout_marginTop="12dp"
        android:id="@+id/myRecView" />

</LinearLayout>
</androidx.coordinatorlayout.widget.CoordinatorLayout>

Behavior initialization:

val behavior: BottomSheetBehavior<*> = (dialog as BottomSheetDialog).behavior
behavior.state = BottomSheetBehavior.STATE_EXPANDED
behavior.skipCollapsed = true

Any ideas? I'm trying to do this with an onTouchListener on dragHandle, but without any success.



from ModalBottomSheet. Dragging by the specific view

apply custom color scale to pydeck h3hexagon layer

I am using pydeck to render a visualization using spatial data. I was hoping to use a custom color scale and apply the gradient to the hexagons based on counts.

Here's some JSON data:

[{"count": "12", "hexIds": ["82beeffffffffff"]}, {"count": "35", "hexIds": ["82be77fffffffff"]}, {"count": "51", "hexIds": ["82b917fffffffff"]}, {"count": "32", "hexIds": ["82bf4ffffffffff"]}, {"count": "93", "hexIds": ["82be67fffffffff"]}, {"count": "51", "hexIds": ["82c997fffffffff"]}, {"count": "13", "hexIds": ["82be5ffffffffff"]}, {"count": "11", "hexIds": ["82bed7fffffffff"]}, {"count": "52", "hexIds": ["82be47fffffffff"]}, {"count": "9", "hexIds": ["82c987fffffffff"]}, {"count": "13", "hexIds": ["82b9a7fffffffff"]}, {"count": "26", "hexIds": ["82a737fffffffff"]}, {"count": "38", "hexIds": ["82be8ffffffffff"]}, {"count": "3", "hexIds": ["829d77fffffffff"]}, {"count": "85", "hexIds": ["82be0ffffffffff"]}, {"count": "12", "hexIds": ["82b9b7fffffffff"]}, {"count": "23", "hexIds": ["82be6ffffffffff"]}, {"count": "2", "hexIds": ["82b84ffffffffff"]}, {"count": "6", "hexIds": ["829d4ffffffffff"]}, {"count": "6", "hexIds": ["82b85ffffffffff"]}, {"count": "7", "hexIds": ["82bec7fffffffff"]}, {"count": "32", "hexIds": ["82be57fffffffff"]}, {"count": "2", "hexIds": ["82a7affffffffff"]}, {"count": "30", "hexIds": ["82a727fffffffff"]}, {"count": "6", "hexIds": ["82a787fffffffff"]}, {"count": "21", "hexIds": ["82bee7fffffffff"]}, {"count": "10", "hexIds": ["82b847fffffffff"]}, {"count": "5", "hexIds": ["82a617fffffffff"]}, {"count": "6", "hexIds": ["82a6a7fffffffff"]}, {"count": "7", "hexIds": ["8294effffffffff"]}, {"count": "17", "hexIds": ["82bef7fffffffff"]}, {"count": "1", "hexIds": ["8294e7fffffffff"]}, {"count": "6", "hexIds": ["82a78ffffffffff"]}, {"count": "13", "hexIds": ["82a79ffffffffff"]}, {"count": "3", "hexIds": ["82b877fffffffff"]}, {"count": "5", "hexIds": ["82a797fffffffff"]}, {"count": "28", "hexIds": ["82be4ffffffffff"]}, {"count": "7", "hexIds": ["829487fffffffff"]}, {"count": "4", "hexIds": ["82bedffffffffff"]}, {"count": "2", "hexIds": ["82945ffffffffff"]}, {"count": "10", "hexIds": ["82b997fffffffff"]}, {"count": "4", "hexIds": ["82b9affffffffff"]}, {"count": "9", "hexIds": ["829c27fffffffff"]}, {"count": "16", "hexIds": ["82a707fffffffff"]}, {"count": "3", "hexIds": ["829d07fffffffff"]}, {"count": "8", "hexIds": ["82c9b7fffffffff"]}, {"count": "2", "hexIds": ["8294affffffffff"]}, {"count": "5", "hexIds": ["829d5ffffffffff"]}, {"count": "5", "hexIds": ["829d57fffffffff"]}, {"count": "1", "hexIds": ["82b80ffffffffff"]}, {"count": "11", "hexIds": ["82beaffffffffff"]}, {"count": "2", "hexIds": ["82b8b7fffffffff"]}, {"count": "1", "hexIds": ["829497fffffffff"]}, {"count": "7", "hexIds": ["829d27fffffffff"]}, {"count": "2", "hexIds": ["82a7a7fffffffff"]}, {"count": "6", "hexIds": ["82b887fffffffff"]}, {"count": "7", "hexIds": ["829457fffffffff"]}, {"count": "4", "hexIds": ["82c99ffffffffff"]}, {"count": "2", "hexIds": ["8294cffffffffff"]}, {"count": "4", "hexIds": ["82b88ffffffffff"]}, {"count": "3", "hexIds": ["82b98ffffffffff"]}, {"count": "7", "hexIds": ["82b837fffffffff"]}, {"count": "9", "hexIds": ["829d0ffffffffff"]}, {"count": "2", "hexIds": ["8294c7fffffffff"]}, {"count": "6", "hexIds": ["829d2ffffffffff"]}, {"count": "2", "hexIds": ["829d47fffffffff"]}, {"count": "3", "hexIds": ["82b867fffffffff"]}, {"count": "1", "hexIds": ["82b807fffffffff"]}, {"count": "5", "hexIds": ["82b8a7fffffffff"]}, {"count": "2", "hexIds": ["829d67fffffffff"]}, {"count": "1", "hexIds": ["82a717fffffffff"]}, {"count": "2", "hexIds": ["82b82ffffffffff"]}, {"count": "1", "hexIds": ["829c6ffffffffff"]}, {"count": "2", "hexIds": ["829c2ffffffffff"]}, {"count": "1", "hexIds": ["8294dffffffffff"]}, 
{"count": "1", "hexIds": ["82d897fffffffff"]}, {"count": "8", "hexIds": ["82b86ffffffffff"]}, {"count": "1", "hexIds": ["82b91ffffffffff"]}, {"count": "3", "hexIds": ["82948ffffffffff"]}, {"count": "3", "hexIds": ["829c4ffffffffff"]}, {"count": "5", "hexIds": ["82b897fffffffff"]}, {"count": "1", "hexIds": ["82b89ffffffffff"]}, {"count": "1", "hexIds": ["829c07fffffffff"]}, {"count": "1", "hexIds": ["82b937fffffffff"]}, {"count": "1", "hexIds": ["82949ffffffffff"]}, {"count": "1", "hexIds": ["82b99ffffffffff"]}, {"count": "1", "hexIds": ["82b987fffffffff"]}, {"count": "1", "hexIds": ["8294d7fffffffff"]}, {"count": "1", "hexIds": ["82b8dffffffffff"]}, {"count": "1", "hexIds": ["829ce7fffffffff"]}, {"count": "15", "hexIds": ["82becffffffffff"]}, {"count": "13", "hexIds": ["82be1ffffffffff"]}, {"count": "1", "hexIds": ["82b827fffffffff"]}]
And the code that renders it:

import pandas as pd
import pydeck
df = pd.read_json('aus_h3.duckgl.json')
h3_layer = pydeck.Layer(
    "H3ClusterLayer",
    df,
    pickable=True,
    stroked=True,
    filled=True,
    extruded=False,
    get_hexagons="hexIds",
    get_fill_color="[255, (1 - count / 500) * 255, 0]",
    get_line_color=[255, 255, 255],
    line_width_min_pixels=2,
)
view_state = pydeck.ViewState(latitude=-25.7773677126431,
                              longitude=135.084939479828,
                              zoom=4,
                              bearing=0,
                              pitch=45)
pydeck.Deck(
    layers=[h3_layer],
    initial_view_state=view_state,
    tooltip={"text": "Density: {count}"}
).to_html("aus_h3.duckgl.html")

How do I specify a custom color scale instead of [255, (1 - count / 500) * 255, 0] in get_fill_color? For example, I'd like to use this 6-class color scale: https://colorbrewer2.org/#type=sequential&scheme=YlOrRd&n=6
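
One approach I've been sketching: precompute a per-row [r, g, b] column in pandas and point get_fill_color at it (pydeck accessors can name a dataframe column, as far as I can tell). The YLORRD_6 values below are the ColorBrewer hex codes converted to RGB, and the quantile binning is my own choice:

import pandas as pd

# ColorBrewer YlOrRd 6-class palette as [r, g, b] triples.
YLORRD_6 = [[255, 255, 178], [254, 217, 118], [254, 178, 76],
            [253, 141, 60], [240, 59, 32], [189, 0, 38]]

df = pd.read_json('aus_h3.duckgl.json')
df['count'] = df['count'].astype(int)  # counts are strings in the JSON
# Quantile-based binning into (up to) 6 classes; labels=False gives 0..5.
df['class'] = pd.qcut(df['count'], q=6, labels=False, duplicates='drop')
df['fill_color'] = df['class'].map(lambda i: YLORRD_6[int(i)])

# then in the layer definition: get_fill_color="fill_color"

But I'm not sure whether a data-driven column like this is the idiomatic pydeck way, versus encoding the breaks directly in the accessor expression.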



from apply custom color scale to pydeck h3hexagon layer