Saturday 31 July 2021

Perform a single task on (220 choose 5) combination in a dataframe

I have data with 220 rows. Initially I choose 5 rows at random and apply an operation to them. Now I have to perform a similar task on every (220 choose 5) combination (that means 4,102,565,544 data frames with 5 rows each). Python hits memory issues when I use list(itertools.combinations(list(range(0,222)),5)), and looping over each 5-row data frame is too time-consuming. Below I have attached my data as a dictionary, and I have replicated my problem set.

Data

df={'Name': {0: '004737367A89', 1: '006D631822DA', 2: '007FEEEF095D', 3: '015EA8035B5D', 4: '0168C7824FB3', 5: '02236A01C769', 6: '026A35601C28', 7: '03939D273F7D', 8: '05BE3A6A6344', 9: '0735B7F399C8', 10: '075F90DEDAAC', 11: '079D00DB87B6', 12: '08321FDDA475', 13: '084147D3DE00', 14: '08693ADAF466', 15: '08EE69FF7C9B', 16: '0996F835D14B', 17: '0A061E004649', 18: '0BDADD43DF2D', 19: '0D580A803B2C', 20: '11DCF10E0F76', 21: '1241EC5AC73C', 22: '150595F71A7A', 23: '160D7B436114', 24: '1805135DA1B7', 25: '18D26316EA11', 26: '1B744908A7E9', 27: '1CB417508187', 28: '1EA75E92E370', 29: '1F1B4DA40CE4', 30: '209D86760A9C', 31: '228BC53DB280', 32: '235D0F9A5E0E', 33: '2452814BCC90', 34: '2923CA6C88B1', 35: '2CB60EF30BAA', 36: '2CD7BD1FC443', 37: '2D03FAC79D60', 38: '2F34FFA27A7C', 39: '2F8F282FDCEE', 40: '3.03891E+11', 41: '31B4A8BDBA5F', 42: '34EC4E7D8E15', 43: '3695444ADBFF', 44: '370F1D138305', 45: '3826943C86AF', 46: '39F11738A59D', 47: '39F2FF0A2E05', 48: '3A8B6F61E548', 49: '3B256CE48F60', 50: '3C09C2C73655', 51: '3D6858B43366', 52: '3D94154B544C', 53: '3DDD62DDF6C4', 54: '3EBDAFB8E7EE', 55: '408B3D0EAF85', 56: '40ED913F4BB6', 57: '43380E855E4E', 58: '44C8332521DE', 59: '4817047FFAC1', 60: '481896BC4240', 61: '49263E82B2B8', 62: '4AF76F8D6BBB', 63: '4BC2016E5222', 64: '4CCF2D4FF5EC', 65: '4E9750936994', 66: '4F61F6A5588D', 67: '505F16F25595', 68: '50756E6D3B32', 69: '50E1E1F5F31D', 70: '516B4C9C3F45', 71: '52608C24A09E', 72: '52B2EBC622A6', 73: '539B8164BD32', 74: '5462E581A288', 75: '55149C502434', 76: '55D8B9306A65', 77: '5808368AFA0A', 78: '58F6BA305E2A', 79: '58FE73C690DA', 80: '596857EDC73F', 81: '599DF7F0CB41', 82: '59F1F27E85F4', 83: '5AE11428142F', 84: '5B27B574EA5B', 85: '5D3FA98DDD61', 86: '5DE6CFC7E471', 87: '5DF85F5EA21C', 88: '5EA87B759595', 89: '5EAA2E0BEAA2', 90: '5EAFEBA99A30', 91: '5EFC03FC84DF', 92: '5F6A8D18E234', 93: '6008B6021BAA', 94: '63765F49AC32', 95: '64099F419232', 96: '652349DF5059', 97: '6551FB43EE37', 98: '6613C12B0634', 99: '66C312BFDFD6', 100: '66D964D2E1D0', 101: '6790A35547E2', 102: '67A2603888E5', 103: '6991A9411704', 104: '6CFC28C22836', 105: '6D5DAED137C9', 106: '6EBB87FAD022', 107: '6EF1206450AF', 108: '70C74C90C3E2', 109: '71168B36CCFD', 110: '7177392ADD8B', 111: '74AF6AA78FB9', 112: '759CFBB05E2F', 113: '771E8EA5A4C7', 114: '7740740D57BE', 115: '7926DFB85C8B', 116: '7A6091203844', 117: '7C23D53CE5DD', 118: '7C4ED1AA239F', 119: '7E0C21E0010F', 120: '80E9914A0BF8', 121: '82867FEAF519', 122: '82C735B34C85', 123: '85EF1FFBAC47', 124: '872F22A4D018', 125: '87C72000AAB2', 126: '8978B70E88C3', 127: '8ADEF3F17E42', 128: '8B5F4EE22DF5', 129: '8B757ED14D67', 130: '8E0C10341AA8', 131: '90289E4E68F6', 132: '9259DEED6524', 133: '92754763710B', 134: '92B164934E01', 135: '96DBA1873BFF', 136: '97E7144ECEF9', 137: '9AE4EB9DF4F0', 138: '9CAC53908EE1', 139: '9F31161E7BDF', 140: 'A090B8A939CB', 141: 'A12E89E87CB5', 142: 'A31CA572620F', 143: 'A4263AA51F9A', 144: 'A540D6615FA0', 145: 'A56804CE6BAF', 146: 'A60313C4FC06', 147: 'A612803F81BA', 148: 'A77E12FFA171', 149: 'A87B6602E946', 150: 'AADE28D99973', 151: 'AEB37BE9DBFF', 152: 'B04ACAB6A193', 153: 'B41004303288', 154: 'B454AAFDA2AF', 155: 'B701B4E2F2BF', 156: 'B7EF621EC0AE', 157: 'B9084B8E2378', 158: 'BA8C4B0E8378', 159: 'BBD01B2776A8', 160: 'BE5377A632DF', 161: 'BE8D95B26DEE', 162: 'BEEB25AC3BB3', 163: 'BF585F42B5F6', 164: 'BF889C615B6A', 165: 'C1934D47BC69', 166: 'C31934680839', 167: 'C43F40D3D865', 168: 'C4955BCC1F0C', 169: 'C4F03F22DE3E', 170: 'C5BC9B26046C', 171: 'C5D2BE738C56', 172: 'C762399CAF83', 173: 
'C7B9B444D117', 174: 'C943B9F6FDDF', 175: 'C9C7138CAF65', 176: 'CB66BE597E30', 177: 'CC7DA44E344E', 178: 'CE81A7E65B6B', 179: 'CE971F87D0B5', 180: 'CECC8C16ECAB', 181: 'D111860A3AC1', 182: 'D159C02757AE', 183: 'D33BB70DCA77', 184: 'D386F0671D80', 185: 'D43B801CCCA9', 186: 'D465BE3D4A94', 187: 'D49E08EEC650', 188: 'D4BD5D5DD7E4', 189: 'D64F455CB56A', 190: 'D6D99F00B58B', 191: 'D7774555E609', 192: 'D7CDFD417C01', 193: 'DBF16B9938A4', 194: 'DCC2FA798C09', 195: 'DE6E090827B8', 196: 'E25F5A55A4D8', 197: 'E5A82C4E86C7', 198: 'E5AC30A8337B', 199: 'E6EBC0EFBF18', 200: 'EB9BBBA2FEB9', 201: 'EC8A20CAC153', 202: 'EC8EA44FDACD', 203: 'ECB284CBDDA7', 204: 'EED0F8B3B968', 205: 'EF4B578B0902', 206: 'F13986786A7A', 207: 'F17F0E81FC73', 208: 'F34CFBCB7A28', 209: 'F396C1E8BF59', 210: 'F40ED923507F', 211: 'F87A72CF9671', 212: 'F8CDE15A2FCB', 213: 'F9032EE897A9', 214: 'FAC08B5AA521', 215: 'FB3071FBA3BC', 216: 'FC6435726337', 217: 'FD5F2F4D32D7', 218: 'FD6E925243AA', 219: 'FDA85734568D', 220: 'FF18E7D41654', 221: 'FFEC03758A05'}, 'Code': {0: 375000, 1: 275000, 2: 225000, 3: 275000, 4: 175000, 5: 275000, 6: 295000, 7: 525000, 8: 175000, 9: 135000, 10: 275000, 11: 250000, 12: 275000, 13: 350000, 14: 225000, 15: 175000, 16: 395000, 17: 275000, 18: 225000, 19: 195000, 20: 225000, 21: 175000, 22: 135000, 23: 225000, 24: 250000, 25: 225000, 26: 250000, 27: 295000, 28: 275000, 29: 250000, 30: 275000, 31: 250000, 32: 295000, 33: 195000, 34: 275000, 35: 195000, 36: 275000, 37: 175000, 38: 525000, 39: 225000, 40: 350000, 41: 135000, 42: 295000, 43: 195000, 44: 495000, 45: 495000, 46: 275000, 47: 375000, 48: 295000, 49: 250000, 50: 250000, 51: 225000, 52: 175000, 53: 250000, 54: 475000, 55: 135000, 56: 350000, 57: 225000, 58: 250000, 59: 275000, 60: 225000, 61: 295000, 62: 225000, 63: 250000, 64: 225000, 65: 250000, 66: 135000, 67: 175000, 68: 295000, 69: 175000, 70: 295000, 71: 295000, 72: 225000, 73: 225000, 74: 365000, 75: 295000, 76: 225000, 77: 195000, 78: 225000, 79: 225000, 80: 225000, 81: 295000, 82: 135000, 83: 195000, 84: 295000, 85: 550000, 86: 250000, 87: 225000, 88: 275000, 89: 225000, 90: 295000, 91: 250000, 92: 250000, 93: 225000, 94: 175000, 95: 250000, 96: 175000, 97: 350000, 98: 175000, 99: 275000, 100: 295000, 101: 225000, 102: 225000, 103: 195000, 104: 175000, 105: 350000, 106: 175000, 107: 275000, 108: 275000, 109: 175000, 110: 195000, 111: 225000, 112: 275000, 113: 375000, 114: 135000, 115: 135000, 116: 395000, 117: 295000, 118: 195000, 119: 275000, 120: 195000, 121: 375000, 122: 195000, 123: 275000, 124: 275000, 125: 175000, 126: 325000, 127: 275000, 128: 250000, 129: 135000, 130: 175000, 131: 195000, 132: 550000, 133: 225000, 134: 250000, 135: 350000, 136: 495000, 137: 275000, 138: 135000, 139: 175000, 140: 175000, 141: 225000, 142: 175000, 143: 275000, 144: 325000, 145: 295000, 146: 275000, 147: 275000, 148: 175000, 149: 350000, 150: 550000, 151: 250000, 152: 350000, 153: 325000, 154: 175000, 155: 250000, 156: 175000, 157: 250000, 158: 275000, 159: 225000, 160: 195000, 161: 175000, 162: 225000, 163: 275000, 164: 225000, 165: 135000, 166: 250000, 167: 225000, 168: 175000, 169: 275000, 170: 175000, 171: 275000, 172: 175000, 173: 195000, 174: 325000, 175: 275000, 176: 295000, 177: 350000, 178: 350000, 179: 425000, 180: 225000, 181: 135000, 182: 150000, 183: 135000, 184: 350000, 185: 225000, 186: 375000, 187: 175000, 188: 295000, 189: 195000, 190: 350000, 191: 175000, 192: 225000, 193: 195000, 194: 195000, 195: 350000, 196: 250000, 197: 175000, 198: 175000, 199: 395000, 200: 175000, 201: 225000, 202: 
175000, 203: 350000, 204: 175000, 205: 250000, 206: 375000, 207: 275000, 208: 525000, 209: 175000, 210: 375000, 211: 295000, 212: 275000, 213: 175000, 214: 325000, 215: 250000, 216: 195000, 217: 275000, 218: 250000, 219: 135000, 220: 195000, 221: 135000}}

What I want is to select 5 random rows first

import random
import pandas as pd

data = pd.DataFrame(df)

# pick 5 random names from the data
inputt = pd.DataFrame({"NameID": data.Name[random.sample(range(10, 30), 5)]})

D2 = pd.DataFrame()
for i in range(len(inputt.index)):
    D1 = data[data["Name"] == inputt["NameID"].iloc[i]]
    D2 = D2.append(D1)

values = D2.Code
real_sum = values.sum()

and then I want to perform the same operation on the other 5-row combinations in the data frame and figure out which combinations have a sum less than real_sum. Is there any simulation technique I can apply here, or anything else?
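A minimal sketch of two possible directions, assuming the only goal is to compare each 5-row sum against real_sum: iterate itertools.combinations lazily instead of materialising the full list (no memory blow-up, but still ~4.1 billion iterations), or Monte Carlo-sample a manageable number of random 5-row subsets to estimate the fraction whose sum is below real_sum. The sample size n_samples is an arbitrary choice of mine, not from the post.

import itertools
import random

import numpy as np
import pandas as pd

data = pd.DataFrame(df)
codes = data["Code"].to_numpy()

# Exhaustive but lazy: no 4-billion-element list is ever built in memory.
def count_below_exhaustive(codes, real_sum, k=5):
    count = 0
    for combo in itertools.combinations(range(len(codes)), k):
        if codes[list(combo)].sum() < real_sum:
            count += 1
    return count

# Monte Carlo estimate: sample random 5-row subsets instead of enumerating all of them.
def estimate_below_sampled(codes, real_sum, k=5, n_samples=1_000_000):
    hits = 0
    for _ in range(n_samples):
        idx = random.sample(range(len(codes)), k)
        if codes[idx].sum() < real_sum:
            hits += 1
    return hits / n_samples  # estimated fraction of 5-row combinations below real_sum

print(estimate_below_sampled(codes, real_sum))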

Thanks



from Perform a single task on (220 choose 5) combination in a dataframe

How to add a disk to VM when cloning from a template?

I've found pyvmomi examples of how to add a disk to an already existing VM, but I would like to customize the VM template and then clone it. Setting the CPUs and memory is pretty straightforward, but adding one or more disks to an existing template, aside from the boot disk, eludes me.

# Add an additional 200 GB disk
new_disk_kb = 200 * 1024 * 1024
disk_spec = vim.vm.device.VirtualDeviceSpec()
disk_spec.fileOperation = "create"
disk_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
disk_spec.device = vim.vm.device.VirtualDisk()
disk_spec.device.backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo()
disk_spec.device.backing.diskMode = 'persistent'
disk_spec.device.unitNumber = 3
disk_spec.device.capacityInKB = new_disk_kb

# vm configuration
vmconf = vim.vm.ConfigSpec()
vmconf.numCPUs = 8            # change the template's cpus from 4 to 8
vmconf.memoryMB = 16 * 1024   # change the template's memory from 4 GB to 16 GB
# change the template's disks from
#    1 x 250 GB boot, 1 x 200 GB disk
# to
#    1 x 250 GB boot, 2 x 200 GB disks
vmconf.deviceChange = [ disk_spec ]  # something is not right
                               
clonespec = vim.vm.CloneSpec()
clonespec.location = relospec
clonespec.powerOn = True
clonespec.config = vmconf
clonespec.customization = customspec
task = template.Clone(folder = destfolder, name = vmname, spec = clonespec)

The code works without the vmconf.deviceChange. Once I try to add a disk I see the error

Invalid configuration for device '0'.

or

Incompatible device backing specified for device '0'.
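For comparison, here is a sketch of how the community pyvmomi samples build a plain (non-RDM) virtual disk spec: a FlatVer2BackingInfo backing plus an explicit controllerKey taken from the template's existing SCSI controller. Whether this is what the clone errors above are complaining about is an assumption on my part, not something confirmed here.

# Sketch based on the pyvmomi community samples; adapt sizes/unit numbers as needed.
from pyVmomi import vim

def build_disk_spec(template_vm, size_gb, unit_number):
    # Find the template's SCSI controller so the new disk can be attached to it.
    controller = next(
        dev for dev in template_vm.config.hardware.device
        if isinstance(dev, vim.vm.device.VirtualSCSIController)
    )

    disk_spec = vim.vm.device.VirtualDeviceSpec()
    disk_spec.fileOperation = "create"
    disk_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add

    disk_spec.device = vim.vm.device.VirtualDisk()
    # FlatVer2BackingInfo is the usual backing for an ordinary VMDK,
    # rather than the raw device mapping backing used above.
    disk_spec.device.backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
    disk_spec.device.backing.thinProvisioned = True
    disk_spec.device.backing.diskMode = 'persistent'
    disk_spec.device.unitNumber = unit_number
    disk_spec.device.capacityInKB = size_gb * 1024 * 1024
    disk_spec.device.controllerKey = controller.key
    return disk_spec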


from How to add a disk to VM when cloning from a template?

Gulp & Babel polyfill Promises for IE11 issue

I have an old project written in Angular.js. I need to polyfill promises for IE11 but it's not working.

In gulpfile.js I have requires for Babel stuff

var corejs = require('core-js/stable'),
    regenerator = require('regenerator-runtime/runtime'),
    presetEnv = require('@babel/preset-env'),
    concat = require('gulp-concat'),
    gulp = require('gulp'),
    babel = require('gulp-babel'),
    babelRegister = require('@babel/register');

And here I am using the pipe

var envJS = function () {
    var condition = (config[environment].compression);
    return gulp.src(paths.appJS)
        .pipe(babel(
            {
                "presets": [
                    [ "@babel/preset-env", {
                      "targets": {
                          "browsers": ["ie >= 11"]
                      },
                      "useBuiltIns": "entry",
                      "corejs": 3 
                    }]
                  ]
            }
        ))
        .pipe(ngAnnotate())
        //.pipe(gulpif(!condition, jshint()))
        //.pipe(gulpif(!condition, jshint.reporter('default')))
        .pipe(addStream.obj(prepareTemplates()))
        .pipe(configServer())
        .pipe(gulpif(condition, uglify({mangle: true})))
        .pipe(concat(randomNames.js))
        .pipe(gulp.dest(folderDist))
        .pipe(connect.reload());
};

The code builds and works on Chrome, but I still have the issue on IE11, which means the Promise object is not being polyfilled.

I am stuck and have no idea what else I should do.



from Gulp & Babel polyfill Promises for IE11 issue

Deep learning to classify a time series of xy spatial coordinates - python

I've got a few problems with a DL classification problem. I'll attach a brief example of the training data to help describe the problem.

The data is a time series of xy points, which is made up of smaller sub-sequences (event). So each unique event is independent. I have two unique sequences (10, 20) below, of equal time length. For a given sequence, each individual point has its own unique identifier user_id. The xy trace of these points varies marginally over a given sequence, with the specific time period found in interval. I also have a separate xy point used as a reference (center_x, center_y), which gives the approximate middle/centre of all points.

Lastly, the target_label classifies where these points are relative to each other. So using center_x, center_y as a reference, there are 5 classes: Middle, Top, Bottom, Right, Left. There can only be one label for each unique event.

Problems:

  1. Obviously a small dataset, but I'm concerned with the accuracy. I think I need to incorporate the reference point (center_x, center_y)

  2. I'm getting all these warnings for each test iteration. I think it has something to do with converting to a tensor, but it doesn't help anything.

    WARNING:tensorflow:7 out of the last 7 calls to <function Model.make_test_function..test_function at 0x7faa21629820> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.

example df:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

# number of intervals
n = 10

# center locations for points
locs_1 = {'A': (5,5),
      'B': (5,8),
      'C': (5,2),
      'D': (8,5)}

# initialize data 
data_1 = pd.DataFrame(index=range(n*len(locs_1)), columns=['x','y','user_id'])
for i, group in enumerate(locs_1.keys()):

    data_1.loc[i*n:((i+1)*n)-1,['x','y']] = np.random.normal(locs_1[group], 
                                                       [0.2,0.2], 
                                                       [n,2]) 
    data_1.loc[i*n:((i+1)*n)-1,['user_id']] = group

# generate time intervals
data_1['interval'] = data_1.groupby('user_id').cumcount() + 1

# assign unique string to differentiate sequences
data_1['event'] = 10

# center of all points for unique sequence 1
data_1['center_x'] = 5
data_1['center_y'] = 5

# classify labels
data_1['target_label'] = ['Middle' if ele  == 'A' else 'Top' if ele == 'B' else 'Bottom' if ele == 'C' else 'Right' for ele in data_1['user_id']]

# center locations for points
locs_2 = {'A': (14,15),
      'B': (16,15),
      'C': (15,12),
      'D': (19,15)}

# initialize data 
data_2 = pd.DataFrame(index=range(n*len(locs_2)), columns=['x','y','user_id'])
for i, group in enumerate(locs_2.keys()):

    data_2.loc[i*n:((i+1)*n)-1,['x','y']] = np.random.normal(locs_2[group], 
                                                       [0.2,0.2], 
                                                       [n,2]) 
    data_2.loc[i*n:((i+1)*n)-1,['user_id']] = group

# generate time intervals
data_2['interval'] = data_2.groupby('user_id').cumcount() + 1

# assign unique string to differentiate sequences
data_2['event'] = 20

# center of all points for unique sequence 2
data_2['center_x'] = 15
data_2['center_y'] = 15

# classify labels
data_2['target_label'] = ['Middle' if ele  == 'A' else 'Middle' if ele == 'B' else 'Bottom' if ele == 'C' else 'Right' for ele in data_2['user_id']]

df = pd.concat([data_1, data_2])

df = df.sort_values(by = ['event','interval','user_id']).reset_index(drop = True)

df:

            x          y user_id  interval  event  center_x  center_y target_label
0    5.288275   5.211246       A         1     10         5         5       Middle
1    4.765987   8.200895       B         1     10         5         5          Top
2    4.943518   1.645249       C         1     10         5         5       Bottom
3    7.930763   4.965233       D         1     10         5         5        Right
4    4.866746   4.980674       A         2     10         5         5       Middle
..        ...        ...     ...       ...    ...       ...       ...          ...
75  18.929254  15.297437       D         9     20        15        15        Right
76  13.701538  15.049276       A        10     20        15        15       Middle
77  16.028816  14.985672       B        10     20        15        15       Middle
78  15.044336  11.631358       C        10     20        15        15       Bottom
79   18.95508  15.217064       D        10     20        15        15        Right

Model:

from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, Dropout, Flatten, Dense

labels = df['target_label'].dropna().sort_values().unique()

n_samples = df.groupby(['user_id', 'event']).ngroups
n_ints = 10

X = df[['x','y']].values.reshape(n_samples, n_ints, 2).astype('float32')

y = df.drop_duplicates(subset = ['event','user_id','target_label'])

y = np.array(y['target_label'].groupby(level = 0).apply(lambda x: [x.values[0]]).tolist())

y = label_binarize(y, classes = labels)

# test, train split
trainX, testX, trainy, testy = train_test_split(X, y, test_size = 0.2)

# load the dataset, returns train and test X and y elements
def load_dataset():

    # test, train split
    trainX, testX, trainy, testy = train_test_split(X, y, test_size = 0.2)

    return trainX, trainy, testX, testy

# fit and evaluate a model
def evaluate_model(trainX, trainy, testX, testy):
    verbose, epochs, batch_size = 0, 10, 32
    n_timesteps, n_features, n_outputs = trainX.shape[1], trainX.shape[2], trainy.shape[1]
    model = Sequential()
    model.add(Conv1D(filters=64, kernel_size=3, activation='relu', input_shape=(n_timesteps,n_features)))
    model.add(Conv1D(filters=64, kernel_size=3, activation='relu'))
    model.add(Dropout(0.5))
    model.add(MaxPooling1D(pool_size=2))
    model.add(Flatten())
    model.add(Dense(100, activation='relu'))
    model.add(Dense(n_outputs, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    # fit network
    model.fit(trainX, trainy, epochs=epochs, batch_size=batch_size, verbose=verbose)
    # evaluate model
    _, accuracy = model.evaluate(testX, testy, batch_size=batch_size, verbose=0)
    return accuracy

# summarize scores
def summarize_results(scores):
    print(scores)
    m, s = np.mean(scores), np.std(scores)
    print('Accuracy: %.3f%% (+/-%.3f)' % (m, s))

# run an experiment
def run_experiment(repeats=10):
    # load data
    trainX, trainy, testX, testy = load_dataset()
    # repeat experiment
    scores = list()
    for r in range(repeats):
        #r = tf.convert_to_tensor(r, dtype=tf.int32)
        score = evaluate_model(trainX, trainy, testX, testy)
        score = score * 100.0
        print('>#%d: %.3f' % (r+1, score))
        scores.append(score)
    # summarize results
    summarize_results(scores)

# run the experiment
run_experiment()
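One way to fold the reference point into the features (my suggestion for problem 1 above, not part of the original code) is to centre every trace on its event's (center_x, center_y) before reshaping, so the network sees coordinates relative to the reference rather than absolute positions:

# Hypothetical preprocessing step: express x/y relative to the event's centre point.
df['x_rel'] = df['x'] - df['center_x']
df['y_rel'] = df['y'] - df['center_y']

X = df[['x_rel', 'y_rel']].values.reshape(n_samples, n_ints, 2).astype('float32')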


from Deep learning to classify a time series of xy spatial coordinates - python

Azure Translation API - Throttling client requests

I'm trying to throttle the number of requests a client can make to my translator service which uses Azure Translation API.

The following link from Microsoft describes how to limit requests, but it's not clear where in the request this throttling information should be added. I assume the request headers?

https://docs.microsoft.com/en-us/azure/api-management/api-management-sample-flexible-throttling

Here is the curl. Note the rate limiting headers at the end. Is this the way to do it?

// Pass secret key and region using headers to a custom endpoint
curl -X POST " my-ch-n.cognitiveservices.azure.com/translator/text/v3.0/translate?to=fr" \
-H "Ocp-Apim-Subscription-Key: xxx" \
-H "Ocp-Apim-Subscription-Region: switzerlandnorth" \
-H "Content-Type: application/json" \
-H "rate-limit-by-key: calls=10 renewal-period=60 counter-key=1.1.1.1" \
-d "[{'Text':'Hello'}]" -v


from Azure Translation API - Throttling client requests

How to print iterations per second?

I have a small Python script which sends POST requests to a server and gets their response.

It iterates 10000 times, and I managed to print the current progress in command prompt using:

code=current_requestnumber
print('{0}/{1}'.format(str(code),"10000"),end="\r")

at the end of each loop.

Because this involves interaction with a webserver, I would like to show the current average speed next to this too (updated like every 2 seconds).

An example at the bottom of the command prompt would then be like this:

(1245/10000), 6.3 requests/second

How do I achieve this?
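A minimal sketch of one way to do it (my own, not from the post): remember when the loop started and, every couple of seconds, recompute the number of completed requests divided by the elapsed time, then print both counters on the same line with end="\r".

import time

total = 10000
start = time.time()
last_update = start
rate = 0.0

for code in range(1, total + 1):
    # send_request()  # placeholder for the real POST request

    now = time.time()
    if now - last_update >= 2.0 or code == total:
        rate = code / max(now - start, 1e-9)   # average requests per second so far
        last_update = now
    print('({0}/{1}), {2:.1f} requests/second'.format(code, total, rate), end="\r")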



from How to print iterations per second?

Firebase UI: updateCurrentUser failed: First argument "user" must be an instance of Firebase User or null

I recently updated to Firebase 8 and related dependencies, including firebaseui 4.8.1 (I have tried several versions; they all give me this error). It loads a login box, but when I try to enter creds or create a new user, I get this error:

updateCurrentUser failed: First argument "user" must be an instance of Firebase User or null.

How can I restore the auth functionality?

firebase.js

import firebase from 'firebase/firebase';
import '@firebase/firestore';
import { FIREBASE_CONFIG } from '../keys';

const config = FIREBASE_CONFIG;

export const firebaseApp = firebase.initializeApp(config);
export const db = firebaseApp.firestore();

/store/index.js (abbreviated)

import firebase from 'firebase/firebase';
import * as firebaseui from 'firebaseui';
import '@firebase/firestore';

// This is our firebaseui configuration object
const uiConfig = ({
    signInSuccessUrl: '',
    signInOptions: [
        firebase.auth.EmailAuthProvider.PROVIDER_ID,
    ],
    callbacks: {
        signInSuccessWithAuthResult: authResult => {
            store.dispatch(login(authResult));
        },
    },
});

// This sets up firebaseui
const ui = new firebaseui.auth.AuthUI(firebase.auth());

// This adds firebaseui to the page
export const startFirebaseUI = elementId => {
    ui.start(elementId, uiConfig);
};

I have tried solutions suggested in these discussions, but they are not working for me. https://github.com/firebase/firebaseui-web/issues/536 https://github.com/firebase/firebaseui-web/issues/776

I'm seeing what looks like a valid auth response in the console too. Here is the verifyPassword response:

{
  "kind": "identitytoolkit#VerifyPasswordResponse",
  "localId": "VgZWszYv8cYDJLQsIr3YzwcZJ4s1",
  "email": "user@gmail.com",
  "displayName": "My User",
  "idToken": "<<a_long_token>>",
  "registered": true,
  "refreshToken": "<<another_token>>",
  "expiresIn": "3600"
}

Here is the getAccountInfo response:

{
  "kind": "identitytoolkit#GetAccountInfoResponse",
  "users": [
    {
      "localId": "VgZWszYv8cYDJLQsIr3YzwcZJ4s1",
      "email": "user@gmail.com",
      "displayName": "Me",
      "passwordHash": "<<a hash>>",
      "emailVerified": false,
      "passwordUpdatedAt": 1561313504412,
      "providerUserInfo": [
        {
          "providerId": "password",
          "displayName": "Me",
          "federatedId": "user@gmail.com",
          "email": "user@gmail.com",
          "rawId": "user@gmail.com"
        }
      ],
      "validSince": "1561313504",
      "lastLoginAt": "1627418219738",
      "createdAt": "1561313504412",
      "lastRefreshAt": "2021-07-27T20:36:59.738Z"
    }
  ]
}



from Firebase UI: updateCurrentUser failed: First argument "user" must be an instance of Firebase User or null

Using AJAX, JavaScript to call python flask function with return to JavaScript

I know that you can have JavaScript call a Python Flask function and return data to a designated id. Like this:

HTML

<div id = "update"> </div>

<script type="text/javascript">
   var counter = 0;
   window.setInterval(function(){
      $("#update").load("/game?counter=" + counter);
      counter++;
   }, 5000)

views.py

from flask import request

@app.route("/game")
def live_game():
    textlist = ['a','b','c','d']
    counter = int(request.args.get('counter'))  # query-string args arrive as strings
    return "<p> " + textlist[counter] + " </p>"

I found this in a previous post. What I would like to do is use this method to update some cool JustGage gauges that I found online, to show the most up-to-date temperature and humidity readings from my database. Here is the script I was wanting to use:

<div class="container">
  <div class="jumbotron">
    <h1>Historical Environmental Readings</h1>      
    <div class="container-fluid" id="dht-container">
      <div id="g1" style="width: 200px; height: 150px;"></div>
      <div id="g2" style="width: 200px; height: 150px;"></div>
    </div>
  </div>

.....

<script>
        function ajaxd(NodeID) { 
        //reload result into element with id "dht-container"
        $(??????).load("/tempnodeDHT", function() {  alert( "Temp Load was performed." ); });

        
        document.addEventListener("DOMContentLoaded", function (event) {
            var g1 = new JustGage({
                id: "g1",
                value: 50,
                min: -20,
                max: 150,
                title: "DHT Temp",
                label: "temperature",
                pointer: true,
                textRenderer: function (val) {
                    if (val < NODE_EnvCP.low_temp) {
                        return 'Cold';
                    } else if (val > NODE_EnvCP.hot) {
                        return 'Hot';
                    } else if (val === NODE_EnvCP.optimum_temp) {
                        return 'OK';
                    }
                },
            });
            var g2 = new JustGage({
                id: "g2",
                value: 50,
                min: 0,
                max: 100,
                title: "Target",
                label: "Humidity",
                pointer: true,
                textRenderer: function (val) {
                    if (val < NODE_EnvCP.low_hum) {
                        return 'LOW';
                    } else if (val > NODE_EnvCP.high_hum) {
                        return 'HIGH';
                    } else if (val === NODE_EnvCP.optimum_hum) {
                        return 'OK';
                    }
                },
            });

            setInterval(function () {
             (currentTemp, currentHumidity)=ajaxd();
            g1.refresh(currentTemp);
            g2.refresh(currentHumidity);
                return false;
            }, 2500);
        });
    </script>

This is my python flask function:

@app.route('/nodeDHT')
def getLatestDHT():
    NodeID = request.args.get('NodeID')
    df = DAO.Pull_CURRENT_DHT_Node_Data(self, NodeID)
    currentTemp = df.Temperature[0]
    currentHumidity = df.Humidity[0]
    return (currentTemp, currentHumidity)

I was hoping that I could change the ?????? inside

$(??????).load("/tempnodeDHT", function() {  alert( "Temp Load was performed." ); });

so that the two variables (currentTemp, currentHumidity) would end up back in the JavaScript portion and the gauges would update every 2.5 seconds. Also, am I passing the variable back to the Python Flask function correctly? There is a variable already pushed to the HTML when it was rendered.

EDIT:

could I do something like this:

@app.route('/nodeDHT')
def getLatestDHT():
    NodeID = request.args.get('NodeID')
    df = DAO.Pull_CURRENT_DHT_Node_Data(self, NodeID)
    currentTemp = df.Temperature[0]
    currentHumidity = df.Humidity[0]
    return json.dumps(currentTemp, currentHumidity)

and in the javascript side do something like this?

    function ajaxd(NodeID) { 
    //reload result into javascript
    $.get("/nodeDHT",function( currentTemp, currentHumidity ){ console.log($.parseJSON(currentTemp, currentHumidity)});

What I'm really asking is: how can I pass single/multiple variables from the JavaScript function to the Python Flask function, and then get back either a data frame whose column values I can use to update a chart, or multiple variables that the JavaScript function can use in a setInterval for several purposes, such as updating JustGage?

    setInterval(function () {
     (currentTemp, currentHumidity)=ajaxd();
    g1.refresh(currentTemp);
    g2.refresh(currentHumidity);
        return false;
    }, 2500);
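A minimal sketch of the Flask side, assuming the usual pattern of returning JSON rather than a bare tuple (json.dumps does not accept two positional values like that). The DAO call and column names come from the post; the jsonify shape, the dropped self argument, and keeping the /nodeDHT route name (the JS above loads /tempnodeDHT) are my assumptions:

from flask import jsonify, request

@app.route('/nodeDHT')
def get_latest_dht():
    node_id = request.args.get('NodeID')           # sent from JS as ?NodeID=...
    df = DAO.Pull_CURRENT_DHT_Node_Data(node_id)   # assumed to return a dataframe
    return jsonify(
        currentTemp=float(df.Temperature[0]),
        currentHumidity=float(df.Humidity[0]),
    )

On the JavaScript side, $.getJSON('/nodeDHT', {NodeID: nodeID}, callback) would then receive an object with currentTemp and currentHumidity fields that the two gauges can read.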


from Using AJAX, JavaScript to call python flask function with return to JavaScript

How do I gracefully prevent crashes in react native?

I would like to gracefully show an empty View when any error occurs (syntax, undefined, type errors, etc.)

This is what I've tried, but it doesn't seem to fail gracefully. The whole app still crashes with this implementation.

const Parent = (props) => {
    try{
        return (<Child/>) //if Child logic crashes for any reason,  return a blank view.
    }catch(err){
        return <View/>
    }
}


from How do I gracefully prevent crashes in react native?

Make gradient start further away from end of view with GradientDrawable

I am trying to set the background of a view to have a gradient whose color is generated from the Palette api

The gradient goes from solid and fades out, but I want the solid portion to take up the majority of the background. Right now it starts solid and then gradually fades out over the view width; I want it to start fading out from around the center of the view width.

Here is what I do

            Palette.from(resource!!.toBitmap()).generate {
                if (it != null) {
                    val paletteColor = it.getDarkVibrantColor("#000000".toColorInt())

                    val gradientDrawable = GradientDrawable(
                        GradientDrawable.Orientation.LEFT_RIGHT,
                        intArrayOf(colorWithAlpha(paletteColor, 0f), colorWithAlpha(paletteColor, 1.0f))
                    )
                    gradientDrawable.cornerRadius = 0f
                    _contentTextBackground.background = gradientDrawable
                }
            }

Is there a way to set the gradient to start further away from the end of the view?



from Make gradient start further away from end of view with GradientDrawable

Friday 30 July 2021

Group unique users with two changing IDs

Can you think of a faster algorithm for this problem? Or improve the code?

Problem:

I have two customer IDs:

  • ID1 (e.g. phone number)
  • ID2 (e.g. email address)

A user sometimes changes their ID1 and sometimes their ID2. How can I find unique users?

Example:

ID1 = [7, 7, 8, 9]

ID2 = [a, b, b, c]

Desired result:

ID3 = [Anna, Anna, Anna, Paul]


The real world scenario has ca. 600 000 items per list.

There is already an SQL idea here: How can I match Employee IDs from 2 columns and group them into an array?

And I got help from a friend who had this idea in TypeScript: https://stackblitz.com/edit/typescript-leet-rewkmh?file=index.ts

A second friend of mine helped me with some pseudo-code, and I was able to create this:

Fastest working code so far:

ID1 = [7, 7, 8, 9]
ID2 = ["a", "b", "b", "c"]

def timeit_function(ID1, ID2):
    
    def find_user_addresses():
        phone_i = []
        email_i = []
        
        tmp1 = [ID1[0]]
        tmp2 = []
        tmp_index = []

        while len(tmp1) != 0 or len(tmp2) != 0:
            while len(tmp1) != 0:
                tmp_index = []  
                for index, value in enumerate(ID1):
                    if value == tmp1[0]:
                        tmp2.append(ID2[index])
                        tmp_index.insert(-1, index)

                for i in tmp_index: 
                    del ID1[i]
                    del ID2[i]
                tmp1 = list(dict.fromkeys(tmp1))
                phone_i.append(tmp1.pop(0))

            while len(tmp2) != 0:
                tmp_index = [] 
                for index, value in enumerate(ID2):
                    if value == tmp2[0]:
                        tmp1.append(ID1[index])
                        tmp_index.insert(0, index)

                for i in tmp_index: 
                    del ID1[i]
                    del ID2[i]
                tmp2 = list(dict.fromkeys(tmp2))
                email_i.append(tmp2.pop(0))

        return phone_i, email_i
    
    users = {}
    i = 0
    while len(ID1) != 0:
        phone_i, email_i = find_user_addresses()
        users[i] = [phone_i, email_i]
        i += 1
    return users

Output:

{0: [[7, 8], ['a', 'b']], 1: [[9], ['c']]}

Meaning: {User_0: [[phone1, phone2], [email1, email2]], User_1: [[phone3], [email3]]}

%timeit timeit_function(ID1, ID2)

575 ns ± 3.86 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
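For reference, a common way to frame this problem (not from the post) is as finding connected components in a graph whose nodes are the ID1 and ID2 values and whose edges are the observed pairs; a union-find (disjoint set) structure does that in near-linear time, which matters at ~600,000 rows. A minimal sketch:

from collections import defaultdict

ID1 = [7, 7, 8, 9]
ID2 = ["a", "b", "b", "c"]

parent = {}

def find(x):
    # Path-compressing find: follow parents up to the root, flattening as we go.
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(a, b):
    ra, rb = find(a), find(b)
    if ra != rb:
        parent[rb] = ra

# Tag the two ID spaces so a phone number can never collide with an email address.
for p, e in zip(ID1, ID2):
    union(("ID1", p), ("ID2", e))

# Group every row by the root of its component -> one group per unique user.
users = defaultdict(list)
for i, (p, e) in enumerate(zip(ID1, ID2)):
    users[find(("ID1", p))].append(i)

print(list(users.values()))   # [[0, 1, 2], [3]] -> row indices per unique user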


from Group unique users with two changing IDs

Using Grad-Cam for the edge tpu Coral device

I am using the Edge TPU Coral USB device for model inferencing, where I am doing image classification tasks. I have a custom-trained tflite model which is being used for the classification. What I am trying to do is run Grad-CAM, so I can visualize the activation maps for the image being classified, on an already trained TFLite model. For some reason, I can't find a tutorial. Based on this question, it should be possible: Is it possible to apply GradCam to a TF Lite model. However, there isn't a clear explanation of how to perform the layers check for the inputs and the gradient data. I currently have this Grad-CAM example:

I am using the grad-cam from this author in Google Colab: Colab Notebook

Additionally, for image classification I have the following code:

import argparse
import time

from PIL import Image
from pycoral.adapters import classify
from pycoral.adapters import common
from pycoral.utils.dataset import read_label_file
from pycoral.utils.edgetpu import make_interpreter
import cv2 as cv
import numpy as np


def main():
  parser = argparse.ArgumentParser(
      formatter_class=argparse.ArgumentDefaultsHelpFormatter)
  parser.add_argument('-m', '--model', required=True,
                      help='File path of .tflite file.')
  parser.add_argument('-i', '--input', required=True,
                      help='Image to be classified.')
  parser.add_argument('-l', '--labels',
                      help='File path of labels file.')
  parser.add_argument('-k', '--top_k', type=int, default=2,
                      help='Max number of classification results')
  parser.add_argument('-t', '--threshold', type=float, default=0.0,
                      help='Classification score threshold')
  parser.add_argument('-c', '--count', type=int, default=5,
                      help='Number of times to run inference')
  args = parser.parse_args()

  labels = read_label_file(args.labels) if args.labels else {}

  interpreter = make_interpreter(*args.model.split('@'))
  interpreter.allocate_tensors()
  print(interpreter)

  size = common.input_size(interpreter)
  image = cv.imread(args.input)
  image = cv.normalize(image, image, 0, 255, cv.NORM_MINMAX)
  common.set_input(interpreter, image)

  print('----INFERENCE TIME----')
  print('Note: The first inference on Edge TPU is slow because it includes',
        'loading the model into Edge TPU memory.')
  for _ in range(args.count):
    start = time.perf_counter()
    interpreter.invoke()
    inference_time = time.perf_counter() - start
    classes = classify.get_classes(interpreter, args.top_k, args.threshold)
    print('%.1fms' % (inference_time * 1000))

  print('-------RESULTS--------')
  for c in classes:
    print('%s: %.5f' % (labels.get(c.id, c.id), c.score))


if __name__ == '__main__':
  main()

I am honestly puzzled about how to access the layers of a TFLite model to check the gradient values, given that a TFLite model uses tensors. I'd like to know how I can use the pycoral/tflite libraries to run Grad-CAM, instead of Keras/TensorFlow models.



from Using Grad-Cam for the edge tpu Coral device

using dotenv with react & webpack

I'm finding information that you can use dotenv with react using

import React from "react"
console.log(process.env.REACT_APP_API_KEY)

however, when I create my .env file in the root of my directory I get an undefined message in the console.

I should note that I am NOT using create-react-app.

Here is my .env file

REACT_APP_API_KEY=secretKey

Here is my webpack config file.

Is there any way I can use dotenv without using Node.js and creating a small server?

const currentTask = process.env.npm_lifecycle_event;
const path = require("path");
const { CleanWebpackPlugin } = require("clean-webpack-plugin");
const MiniCssExtractPlugin = require("mini-css-extract-plugin");
const { postcss } = require("postcss-mixins");
const HtmlWebpackPlugin = require("html-webpack-plugin");
const fse = require("fs-extra");

const postCSSPlugins = [
  require("postcss-import"),
  require("postcss-mixins"),
  require("postcss-simple-vars"),
  require("postcss-nested"),
  require("postcss-hexrgba"),
  require("autoprefixer")
];

class RunAfterCompile {
  apply(compiler) {
    compiler.hooks.done.tap("Copy images", function () {
      fse.copySync("./app/assets/images", "./docs/assets/images");
    });
  }
}

let cssConfig = {
  test: /\.css$/i,
  use: [
    "css-loader?url=false",
    { loader: "postcss-loader", options: { plugins: postCSSPlugins } }
  ]
};

let pages = fse
  .readdirSync("./app")
  .filter(function (file) {
    return file.endsWith(".html");
  })
  .map(function (page) {
    return new HtmlWebpackPlugin({
      filename: page,
      template: `./app/${page}`
    });
  });

let config = {
  entry: "./app/assets/scripts/App.js",
  plugins: pages,
  module: {
    rules: [
      cssConfig,
      {
        test: /\.js$/,
        exclude: /(node_modules)/,
        use: {
          loader: "babel-loader",
          options: {
            presets: ["@babel/preset-react", "@babel/preset-env"],
            plugins: ["@babel/plugin-transform-runtime"]
          }
        }
      }
    ]
  }
};

if (currentTask == "dev") {
  cssConfig.use.unshift("style-loader");
  config.output = {
    filename: "bundled.js",
    path: path.resolve(__dirname, "app")
  };
  config.devServer = {
    before: function (app, server) {
      server._watch("./app/**/*.html");
    },
    contentBase: path.join(__dirname, "app"),
    hot: true,
    port: 3000,
    host: "0.0.0.0",
    historyApiFallback: { index: "/" }
  };
  config.mode = "development";
}

if (currentTask == "build") {
  cssConfig.use.unshift(MiniCssExtractPlugin.loader);
  postCSSPlugins.push(require("cssnano"));
  config.output = {
    filename: "[name].[chunkhash].js",
    chunkFilename: "[name].[chunkhash].js",
    path: path.resolve(__dirname, "docs")
  };
  config.mode = "production";
  config.optimization = {
    splitChunks: { chunks: "all" }
  };
  config.plugins.push(
    new CleanWebpackPlugin(),
    new MiniCssExtractPlugin({ filename: "styles.[chunkhash].css" }),
    new RunAfterCompile()
  );
}

module.exports = config;

I've been at this for a couple of hours now and I can't seem to find what I need online. Hoping someone has run into this issue before.

thanks



from using dotenv with react & webpack

Error reading Sqlite database: Database google_app_measurement_local.db not found

Suddenly I'm getting this in the event log when launching the app: Database Inspector: Error reading Sqlite database: Database 'LiveSqliteDatabaseId(path=/data/data/app-dir/databases/google_app_measurement_local.db, name=google_app_measurement_local.db, connectionId=1)' not found

Upon checking the databases path, I saw there is a google_app_measurement_local.db in the directory. Any idea what is causing this to show up?

BTW, I'm using Android Studio 4.2, and here are the versions of the Firebase libraries for this app:

implementation "com.google.firebase:firebase-core:19.0.0"
implementation 'com.google.firebase:firebase-messaging:22.0.0'
implementation 'com.google.firebase:firebase-analytics:19.0.0'
implementation 'com.google.firebase:firebase-crashlytics:18.0.0'
implementation "com.google.firebase:firebase-database:20.0.0"
implementation "com.google.firebase:firebase-config:21.0.0"


from Error reading Sqlite database: Database google_app_measurement_local.db not found

How to avoid duplicate drag in mxgraph

Hi, I want to avoid duplicate drags on the mxgraph canvas.

Let's say I have dragged Pipe onto the canvas; the second time it should not be allowed to be dragged onto the canvas again.

Question: how do I avoid duplicate drags on the canvas?

Here is my working drag code, with duplicates allowed:

Drag and Drop

var graph = {};

function initCanvas() {

  //This function is called onload of body itself and it will make the mxgraph canvas
  graph = new mxGraph(document.getElementById('graph-wrapper'));
  graph.htmlLabels = true;
  graph.cellsEditable = false;

  // render as HTML node always. You probably won't want that in real world though
  graph.convertValueToString = function(cell) {
    return cell.value;
  }

  const createDropHandler = function (cells, allowSplit) {
    return function (graph, evt, target, x, y) {
      const select = graph.importCells(cells, x, y, target);
      graph.setSelectionCells(select);
    };
  };

  const createDragPreview = function (width, height) {
    var elt = document.createElement('div');
    elt.style.border = '1px dashed black';
    elt.style.width = width + 'px';
    elt.style.height = height + 'px';
    return elt;
  };

  const createDragSource = function (elt, dropHandler, preview) {
    return mxUtils.makeDraggable(elt, graph, dropHandler, preview, 0, 0, graph.autoscroll, true, true);
  };

  const createItem = (id) => {

    const elt = document.getElementById(id);
    const width = elt.clientWidth;
    const height = elt.clientHeight;

    const cell = new mxCell('', new mxGeometry(0, 0, width, height), 'fillColor=none;strokeColor=none');
    cell.vertex = true;
    graph.model.setValue(cell, elt);

    const cells = [cell];

    const bounds = new mxRectangle(0, 0, width, height);
    createDragSource(elt, createDropHandler(cells, true, false, bounds), createDragPreview(width, height), cells, bounds);
  };


  createItem("shape_1");
  createItem("shape_2");
  createItem("shape_3");
}
#graph-wrapper {
  background: #333;
  width: 100%;
  height: 528px;
 }
<html>

<head>
    <title>Toolbar example for mxGraph</title>

    <script type="text/javascript">
        mxBasePath = 'https://jgraph.github.io/mxgraph/javascript/src';
    </script>
    <script src="https://jgraph.github.io/mxgraph/javascript/src/js/mxClient.js"></script>
    <script src="./app.js"></script>
</head>

<body onload="initCanvas()">
    <h4>Drag same box 2 times on the canvas. see duplicate is allowed</h4>

    <div>
        <div id="shape_1"
            style="width: 100px; height: 100px; border-radius: 50%; background: red; display: inline-flex; text-align: center; color: #fff; align-items: center; justify-content: center;">
            Pipe
        </div>

        <div draggable="true" id="shape_2"
            style="width: 100px; height: 100px; border-radius: 5%; background: orange; display: inline-flex; text-align: center; color: #fff; align-items: center; justify-content: center;">
            Team
        </div>
        
        <div draggable="true" id="shape_3"
            style="width: 100px; height: 64px; background: #009688; display: inline-flex; text-align: center; color: #fff; align-items: center; justify-content: center; border-radius: 207px; flex-direction: column;">
            <div> <svg xmlns="http://www.w3.org/2000/svg" height="24px" viewBox="0 0 24 24" width="24px" fill="#000000">
                    <path d="M0 0h24v24H0V0z" fill="none" />
                    <path
                        d="M11 7h2v2h-2zm0 4h2v6h-2zm1-9C6.48 2 2 6.48 2 12s4.48 10 10 10 10-4.48 10-10S17.52 2 12 2zm0 18c-4.41 0-8-3.59-8-8s3.59-8 8-8 8 3.59 8 8-3.59 8-8 8z" />
                </svg></div>
            <div>Info</div>
        </div>
    </div>

    <div id="graph-wrapper">

    </div>
</body>

</html>


from How to avoid duplicate drag in mxgraph

postgres init code with multiple sequelize

I am trying to convert JS code to TS code (and understand why things have been done the way they are).

This is my JS code:

'use strict';

const fs = require('fs');
const path = require('path');
const Sequelize = require('sequelize');
const basename = path.basename(__filename);
const env = process.env.NODE_ENV || 'development';
const config = require(__dirname + '/../config/config.js')[env];
const db = {};

let sequelize;
if (config.use_env_variable) {
  sequelize = new Sequelize(process.env[config.use_env_variable], config);
} else {
  sequelize = new Sequelize(config.database, config.username, config.password, config);
}


Object.keys(db).forEach(modelName => {
  if (db[modelName].associate) {
    db[modelName].associate(db);
  }
});

db.sequelize = sequelize;
db.Sequelize = Sequelize;

module.exports = db;

Here, there are multiple sequelize assignments:

db.sequelize = sequelize;
db.Sequelize = Sequelize;

Can someone please help me understand their meaning and purpose, why both of them exist, and where we would use each of them?

I am able to make sense of sequelize but not of Sequelize. We have already used new Sequelize(, so why would we add it to the db object?



from postgres init code with multiple sequelize

Exoplayer Android. How to record the streamed RTSP video

I have successfully streamed RTSP live video from a Hikvision IP camera to my Android app. Now I want to record the streamed video in the mobile app itself. How would I do it? Can I have some guidance?

Thank you



from Exoplayer Android. How to record the streamed RTSP video

Find elements in a specific area of svg using only getBoundingClientRect

I'm creating an SVG file using python svgwrite and there's a shape in the center of my drawing that I created with a path. I want to remove elements that are not inside my wrapper shape.

First of all, can I remove them in svgwrite? If not, how can I find all the elements in the frontend using JavaScript, so I can remove any of them that are not inside my shape?

svgwrite

# dots that are generated in the svg look like this
image.add(image.circle((x, y), mag, id='dot', stroke="none", fill=color))

# this is my heart shape that should dots go inside of it
image.defs.add(image.path(d="M0 200 v-200 h200 a100,100 90 0,1 0,200 a100,100 90 0,1 -200,0z", id="heart_shape", style="rotate: 225deg;scale:1.9;stroke: #fff;", opacity="1"))
image.add(image.use(href="#heart_shape", fill="none", insert=(half_x, str(height-80)+"mm"), id="heart_wrapper"))

I would prefer to delete them in the frontend using JavaScript. I got the bounding box of my shape like below:

var heart = document.querySelector("#heart_wrapper")
var {xHeart, yHeart} = heart.getBBox()
Note: the thing I do not know exactly is how to determine whether a dot is inside my shape. I know how to select all of the dots and just remove them.

Here is the generated svg shape:

<use xmlns="http://www.w3.org/2000/svg" fill="none" id="heart_wrapper" x="377.9527559055118" xlink:href="#heart_shape" xmlns:xlink="http://www.w3.org/1999/xlink" y="195mm">
<path d="M0 200 v-200 h200 a100,100 90 0,1 0,200 a100,100 90 0,1 -200,0z" id="heart_shape" opacity="1" style="rotate: 225deg;scale:1.9;stroke: #fff;"></path>
</use>
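Since the dots are generated in Python anyway, one option (my suggestion, not something from the post) is to filter them out before they are written to the SVG, by approximating the wrapper shape as a polygon and running a standard ray-casting point-in-polygon test:

def point_in_polygon(x, y, polygon):
    # Ray-casting test; polygon is a list of (px, py) vertices in drawing units.
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does a horizontal ray from (x, y) cross the edge (x1, y1)-(x2, y2)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# heart_polygon would be a polyline approximation of the transformed heart path
# (a hypothetical helper, not shown here). Only add dots that fall inside it:
# if point_in_polygon(x, y, heart_polygon):
#     image.add(image.circle((x, y), mag, id='dot', stroke="none", fill=color))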


from Find elements in a specific area of svg using only getBoundingClientRect

Accessing duplicate feed tags using feedparser

I'm trying to parse this feed: https://feeds.podcastmirror.com/dudesanddadspodcast

The channel section has two entries for podcast:person

<podcast:person role="host" img="https://dudesanddadspodcast.com/files/2019/03/andy.jpg" href="https://www.podchaser.com/creators/andy-lehman-107aRuVQLA">Andy Lehman</podcast:person>
<podcast:person role="host" img="https://dudesanddadspodcast.com/files/2019/03/joel.jpg" href="https://www.podchaser.com/creators/joel-demott-107aRuVQLH" >Joel DeMott</podcast:person>

When parsed, feedparser only brings in one name

> import feedparser
> d = feedparser.parse('https://feeds.podcastmirror.com/dudesanddadspodcast')
> d.feed['podcast_person']
> {'role': 'host', 'img': 'https://dudesanddadspodcast.com/files/2019/03/joel.jpg', 'href': 'https://www.podchaser.com/creators/joel-demott-107aRuVQLH'}

What would I change so it would instead show a list for podcast_person so I could loop through each one?
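I'm not certain feedparser keeps repeated namespaced elements at all. As a fallback (my assumption, not from the post), the raw XML can be walked with the standard library and every podcast:person element collected, regardless of which namespace URI the feed declares:

import requests
import xml.etree.ElementTree as ET

raw = requests.get('https://feeds.podcastmirror.com/dudesanddadspodcast').content
root = ET.fromstring(raw)

people = []
for el in root.iter():
    # Namespaced tags look like '{<namespace-uri>}person'; match on the local name.
    if el.tag.endswith('}person'):
        people.append({'name': el.text, **el.attrib})

for person in people:
    print(person['name'], person.get('role'), person.get('href'))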



from Accessing duplicate feed tags using feedparser

How to bypass the alert window while downloading a zipfile?

If I open the link: https://www.x.com/ca.zip

This link shows the window and I need to press the OK button and it downloads the file.

The alert is not from the browser, it is from the page itself.

But when I tried the script:

from io import BytesIO
from zipfile import ZipFile
import requests


def get_zip(file_url):
    url = requests.get(file_url)
    zipfile = ZipFile(BytesIO(url.content))
    zipfile.extractall("")

file_link = 'https://www.x.com/ca.zip'

get_zip(file_link)

This throws the error:

zipfile.BadZipFile: File is not a zip file

And when I tried:

import requests

url = r'https://www.x.com/ca.zip'
output = r'downloadedfile.zip'

r = requests.get(url)
with open(output, 'wb') as f:
    f.write(r.content)

This downloads the content of the page showing the OK button. Any idea how to solve this, so the link downloads the zip file?
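A small diagnostic sketch (my addition, assuming the real download may require whatever request the OK button actually triggers): check what the server returned before handing it to ZipFile; a genuine zip starts with the PK\x03\x04 magic bytes, while an HTML interstitial page will not.

import requests

r = requests.get('https://www.x.com/ca.zip', allow_redirects=True)

print(r.status_code, r.headers.get('Content-Type'))
print(r.content[:4])              # b'PK\x03\x04' for a real zip archive

if r.content[:4] != b'PK\x03\x04':
    # The server sent back the HTML page with the OK button instead of the archive;
    # the browser's devtools network tab shows which request the button really fires.
    print(r.text[:500])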



from How to bypass the alert window while downloading a zipfile?

How to associate point on a curve with points in an array of objects?

I have a bunch of names from the web (first name, last name, of people in different countries). Some of the countries have statistics on how many people have each last name, as shown in some places like here.

Well, that Japanese surname list only lists the top 100. I have other lists like for Vietnamese listing the top 20, and other lists the top 50 or 1000 even in some places. But I have real name lists that are up to the 1000+ count. So I might have 2000 Japanese surnames, with only 100 that have listed the actual count of people with that surname.

What I would like to do is build a "faker" sort of library that generates realistic names based on these statistics. I know how to pick a random element from a weighted array in JavaScript, so once the "weights" (number of people with that name) are included for each name, it is just a matter of plugging them into that algorithm.

My question is, how can I "complete the curve" for the names that don't have a weight on them? That is, say we have a sort of exponential-like curve from the 20 or 100 names that have weights on them. I would then like to randomly pick names from the remaining unweighted list and give them a value that places them somewhat realistically in the remaining tail of the curve. How can that be done?

For example, here is a list of Vietnamese names with weights:

Nguyen,38
Tran,11
Le,9.5
Pham,7.1
Huynh,5.1
Phan,4.5
Vu,3.9
Đang,2.1
Bui,2
Do,1.4
Ho,1.3
Ngo,1.3
Duong,1
Ly,0.5

And here is a list without weights:

An
Ân
Bạch
Bành
Bao
Biên
Biện
Cam
Cảnh
Cảnh
Cao
Cái
Cát
Chân
Châu
Chiêm
Chu
Chung
Chử
Cổ
Cù
Cung
Cung
Củng
Cừu
Dịch
Diệp
Doãn
Dũ
Dung
Dư
Dữu
Đái
Đàm
Đào
Đậu
Điền
Đinh
Đoàn
Đồ
Đồng
Đổng
Đường
Giả
Giải
Gia
Giản
Giang
Giáp
Hà
Hạ
Hậ
Hác
Hàn
Hầu
Hình
Hoa
Hoắc
Hoạn
Hồng
Hứa
Hướng
Hy
Kha
Khâu
Khổng
Khuất
Kiều
Kim
Kỳ
Kỷ
La
Lạc
Lai
Lam
Lăng
Lãnh
Lâm
Lận
Lệ
Liên
Liêu
Liễu
Long
Lôi
Lục
Lư
Lữ
Lương
Lưu
Mã
Mạc
Mạch
Mai
Mạnh
Mao
Mẫn
Miêu
Minh
Mông
Ngân
Nghê
Nghiêm
Ngư
Ngưu
Nhạc
Nhan
Nhâm
Nhiếp
Nhiều
Nhung
Ninh
Nông
Ôn
Ổn
Ông
Phí
Phó
Phong
Phòng
Phù
Phùng
Phương
Quách
Quan
Quản
Quang
Quảng
Quế
Quyền
Sài
Sầm
Sử
Tạ
Tào
Tăng
Tân
Tần
Tất
Tề
Thạch
Thai
Thái
Thang
Thành
Thảo
Thân
Thi
Thích
Thiện
Thiệu
Thôi
Thủy
Thư
Thường
Tiền
Tiết
Tiêu
Tiêu
Tô
Tôn
Tôn
Tông
Tống
Trác
Trạch
Trại
Trang
Trầm
Trâu
Trì
Triệu
Trịnh
Trương
Từ
Tư
Tưởng
Úc
Ứng
Vạn
Văn
Vân
Vi
Vĩnh
Vũ
Vũ
Vương
Vưu
Xà
Xầm
Xế
Yên

I would like to randomize the list without weights (easy to do), and then assign each one a weight so it fills out the tail of the curve to some degree, so it feels somewhat realistic. How can this be done? Basically it seems we need to get the "curvature" of the initial weighted curve, and then somehow extend that with new items. It doesn't need to be perfect, but whatever can be done to approximate would be cool. I am not a statistics/math person so I don't really know where to begin on this one.

I don't have an exact outcome in mind; I just want something that will generate the tail of the curve to some degree. For example, the start of the list might look like this:

An,0.5
Ân,0.45
Bạch,0.42
Bành,0.40
Bao,0.39
...

To visually demonstrate what I'm going after, the black boxes below are the provided data. The dotted boxes would stretch on for a long while, but here I show only the start. The dotted boxes are what we would fill in on the curve so it fits the shape of the start of the curve.

▐
▐
▐▐
▐▐
▐▐
▐▐▐
▐▐▐ 
▐▐▐▐ 
▐▐▐▐▐▐ 
▐▐▐▐▐▐▐▐▐▐
▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐
▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐░░░░
▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐░░░░░░░░░░░░░░░░░░░░░
▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐▐░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░

So basically, the left side of the curve are the few highest values. As it goes to the right, it gets smaller according to "some" pattern. We just need to roughly continue the pattern to the right, so it basically extends the curve.
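One simple way to do this (my suggestion, not from the post): name-frequency data is roughly Zipf-like, so fit a straight line to log(weight) versus log(rank) for the weighted names and extrapolate that line to the ranks beyond the known list; each unweighted name then gets the predicted weight for its rank.

import numpy as np

weighted = [("Nguyen", 38), ("Tran", 11), ("Le", 9.5), ("Pham", 7.1), ("Huynh", 5.1),
            ("Phan", 4.5), ("Vu", 3.9), ("Đang", 2.1), ("Bui", 2), ("Do", 1.4),
            ("Ho", 1.3), ("Ngo", 1.3), ("Duong", 1), ("Ly", 0.5)]
unweighted = ["An", "Ân", "Bạch", "Bành", "Bao"]   # ... the rest of the unweighted list

# Fit log(weight) = a * log(rank) + b on the known head of the distribution.
ranks = np.arange(1, len(weighted) + 1)
weights = np.array([w for _, w in weighted], dtype=float)
a, b = np.polyfit(np.log(ranks), np.log(weights), 1)

# Extrapolate the same power law to the ranks of the unweighted names.
tail_ranks = np.arange(len(weighted) + 1, len(weighted) + len(unweighted) + 1)
tail_weights = np.exp(a * np.log(tail_ranks) + b)

completed = weighted + list(zip(unweighted, tail_weights.round(3)))
print(completed[-5:])

Shuffling the unweighted names before assigning ranks keeps the generated tail from implying a real ordering.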



from How to associate point on a curve with points in an array of objects?

How to get Accordion item to change size with flex box?

I have two charts on my page. Together, they take up the entire screen height through Flex (a CSS flexbox). The first chart is collapsible via an Accordion.

How can we make the first chart fill up 30% of the screen height, and the second chart fill up the remaining ~70%?

I was able to do this successfully when these two charts are the only things on the page:


// This works well.
<Flex direction="column" height="100vh">
  <Accordion allowToggle>
    <AccordionItem>
      <h2>
        <AccordionButton
          h={0}
          borderRadius="md"
          borderWidth="0px"
          _focus=
        >
          <Box
            textAlign="left"
            h={3}
            _focus=
          ></Box>
          <AccordionIcon />
        </AccordionButton>
      </h2>
      <AccordionPanel p="0">
        <Box height="30vh">
          <ThreeDataPoint />
        </Box>
      </AccordionPanel>
    </AccordionItem>
  </Accordion>
  <Box flex="1">
    <ThreeDataPoint />
  </Box>
</Flex>

However, if I combine a row and column flex box together, it doesn't work. The second chart overflows the screen's height.

Here's the CODESANDBOX




from How to get Accordion item to change size with flex box?

Thursday 29 July 2021

How can I convert a functional model into a sequential model?

Take a look at this question.

I am trying to convert this functional model into a sequential model.

Here is the full source code in Repl.it.

The following is the main section of the source code:

# <editor-fold desc="def create_model()">
def create_model(n_hidden_1, n_hidden_2, num_classes, num_features):
    model = Sequential()
    model.add(tf.keras.layers.InputLayer(input_shape=(num_features,)))
    model.add(tf.keras.layers.Dense(n_hidden_1, activation='sigmoid'))
    model.add(tf.keras.layers.Dense(n_hidden_2, activation='sigmoid'))
    model.add(tf.keras.layers.Dense(num_classes, activation='softmax'))
    return model
# </editor-fold>

if __name__ == "__main__":
    len_int = len(sys.argv)
    arg_str = None

    if len_int > 1:
        arg_str = sys.argv[1]
    else:
        arg_str = os.path.join(INPUT_PATH, INPUT_DATA_FILE)
    # END of if len_int > 1:

    # load training data from the disk
    train_x, _, train_z, validate_x, _, validate_z = load_data_k(
        os.path.join(INPUT_PATH, INPUT_DATA_FILE),
        class_index=CLASS_INDEX,
        feature_start_index=FEATURE_START_INDEX,
        top_n_lines=NO_OF_INPUT_LINES,
        validation_part=VALIDATION_PART
    )

    # create Stochastic Gradient Descent optimizer for the NN model
    opt_function = keras.optimizers.SGD(
        learning_rate=LEARNING_RATE
    )
    # create a sequential NN model
    model = create_model(
        LAYER_1_NEURON_COUNT,
        LAYER_2_NEURON_COUNT,
        CLASSES_COUNT,
        FEATURES_COUNT
    )
    #
    model.compile(loss=['categorical_crossentropy'] * 5,
                  optimizer=opt_function,
                  metrics=[['accuracy']] * 5)
    # END of if model == None:

    # run training and validation
    history = model.fit(
        train_x, tf.split(train_z, 5, axis=1),
        epochs=EPOCHS,
        batch_size=BATCH_SIZE,
        shuffle=True,
        verbose=2
    )

    print(history.history.keys())

    # save the entire NN in HDF5 format
    model.save(os.path.join(OUTPUT_PATH, MODEL_FILE))

However, this source code is generating the following error:

C:\ProgramData\Miniconda3\envs\by_nn\python.exe C:/Users/pc/source/repos/by_nn/SCRIPTS/model_k_sequential_model.py
GPU not found!
Epoch 1/1000
Traceback (most recent call last):
  File "C:/Users/pc/source/repos/by_nn/SCRIPTS/model_k_sequential_model.py", line 180, in <module>
    history = model.fit(
  File "C:\ProgramData\Miniconda3\envs\by_nn\lib\site-packages\tensorflow\python\keras\engine\training.py", line 108, in _method_wrapper
    return method(self, *args, **kwargs)
  File "C:\ProgramData\Miniconda3\envs\by_nn\lib\site-packages\tensorflow\python\keras\engine\training.py", line 1098, in fit
    tmp_logs = train_function(iterator)
  File "C:\ProgramData\Miniconda3\envs\by_nn\lib\site-packages\tensorflow\python\eager\def_function.py", line 780, in __call__
    result = self._call(*args, **kwds)
  File "C:\ProgramData\Miniconda3\envs\by_nn\lib\site-packages\tensorflow\python\eager\def_function.py", line 823, in _call
    self._initialize(args, kwds, add_initializers_to=initializers)
  File "C:\ProgramData\Miniconda3\envs\by_nn\lib\site-packages\tensorflow\python\eager\def_function.py", line 696, in _initialize
    self._stateful_fn._get_concrete_function_internal_garbage_collected(  # pylint: disable=protected-access
  File "C:\ProgramData\Miniconda3\envs\by_nn\lib\site-packages\tensorflow\python\eager\function.py", line 2855, in _get_concrete_function_internal_garbage_collected
    graph_function, _, _ = self._maybe_define_function(args, kwargs)
  File "C:\ProgramData\Miniconda3\envs\by_nn\lib\site-packages\tensorflow\python\eager\function.py", line 3213, in _maybe_define_function
    graph_function = self._create_graph_function(args, kwargs)
  File "C:\ProgramData\Miniconda3\envs\by_nn\lib\site-packages\tensorflow\python\eager\function.py", line 3065, in _create_graph_function
    func_graph_module.func_graph_from_py_func(
  File "C:\ProgramData\Miniconda3\envs\by_nn\lib\site-packages\tensorflow\python\framework\func_graph.py", line 986, in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
  File "C:\ProgramData\Miniconda3\envs\by_nn\lib\site-packages\tensorflow\python\eager\def_function.py", line 600, in wrapped_fn
    return weak_wrapped_fn().__wrapped__(*args, **kwds)
  File "C:\ProgramData\Miniconda3\envs\by_nn\lib\site-packages\tensorflow\python\framework\func_graph.py", line 973, in wrapper
    raise e.ag_error_metadata.to_exception(e)
ValueError: in user code:

    C:\ProgramData\Miniconda3\envs\by_nn\lib\site-packages\tensorflow\python\keras\engine\training.py:806 train_function  *
        return step_function(self, iterator)
    C:\ProgramData\Miniconda3\envs\by_nn\lib\site-packages\tensorflow\python\keras\engine\training.py:796 step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    C:\ProgramData\Miniconda3\envs\by_nn\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:1211 run
        return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
    C:\ProgramData\Miniconda3\envs\by_nn\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:2585 call_for_each_replica
        return self._call_for_each_replica(fn, args, kwargs)
    C:\ProgramData\Miniconda3\envs\by_nn\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:2945 _call_for_each_replica
        return fn(*args, **kwargs)
    C:\ProgramData\Miniconda3\envs\by_nn\lib\site-packages\tensorflow\python\keras\engine\training.py:789 run_step  **
        outputs = model.train_step(data)
    C:\ProgramData\Miniconda3\envs\by_nn\lib\site-packages\tensorflow\python\keras\engine\training.py:748 train_step
        loss = self.compiled_loss(
    C:\ProgramData\Miniconda3\envs\by_nn\lib\site-packages\tensorflow\python\keras\engine\compile_utils.py:204 __call__
        loss_value = loss_obj(y_t, y_p, sample_weight=sw)
    C:\ProgramData\Miniconda3\envs\by_nn\lib\site-packages\tensorflow\python\keras\losses.py:149 __call__
        losses = ag_call(y_true, y_pred)
    C:\ProgramData\Miniconda3\envs\by_nn\lib\site-packages\tensorflow\python\keras\losses.py:253 call  **
        return ag_fn(y_true, y_pred, **self._fn_kwargs)
    C:\ProgramData\Miniconda3\envs\by_nn\lib\site-packages\tensorflow\python\util\dispatch.py:201 wrapper
        return target(*args, **kwargs)
    C:\ProgramData\Miniconda3\envs\by_nn\lib\site-packages\tensorflow\python\keras\losses.py:1535 categorical_crossentropy
        return K.categorical_crossentropy(y_true, y_pred, from_logits=from_logits)
    C:\ProgramData\Miniconda3\envs\by_nn\lib\site-packages\tensorflow\python\util\dispatch.py:201 wrapper
        return target(*args, **kwargs)
    C:\ProgramData\Miniconda3\envs\by_nn\lib\site-packages\tensorflow\python\keras\backend.py:4687 categorical_crossentropy
        target.shape.assert_is_compatible_with(output.shape)
    C:\ProgramData\Miniconda3\envs\by_nn\lib\site-packages\tensorflow\python\framework\tensor_shape.py:1134 assert_is_compatible_with
        raise ValueError("Shapes %s and %s are incompatible" % (self, other))

    ValueError: Shapes (10, 3) and (10, 15) are incompatible


Process finished with exit code 1

How can I solve this issue?
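For context on the error: model.fit is given five (batch, 3) targets via tf.split(train_z, 5, axis=1), while the Sequential model produces a single (batch, num_classes) output, apparently 15-wide here, so each 3-wide target ends up compared against the full 15-wide prediction. A Sequential model can only have one output tensor, so five separate softmax heads need the functional API. The following is only a sketch of what such a multi-head model could look like; the layer sizes are illustrative placeholders, not the original constants.

import tensorflow as tf
from tensorflow import keras

# Sketch only: five independent 3-way softmax heads on a shared trunk.
def create_multi_head_model(n_hidden_1, n_hidden_2, num_features,
                            n_heads=5, classes_per_head=3):
    inputs = keras.Input(shape=(num_features,))
    x = keras.layers.Dense(n_hidden_1, activation='sigmoid')(inputs)
    x = keras.layers.Dense(n_hidden_2, activation='sigmoid')(x)
    outputs = [keras.layers.Dense(classes_per_head, activation='softmax',
                                  name=f'head_{i}')(x) for i in range(n_heads)]
    return keras.Model(inputs=inputs, outputs=outputs)

model = create_multi_head_model(64, 32, num_features=20)
model.compile(optimizer='sgd',
              loss=['categorical_crossentropy'] * 5,
              metrics=[['accuracy']] * 5)
# model.fit(train_x, tf.split(train_z, 5, axis=1), ...) then pairs five
# (batch, 3) targets with five (batch, 3) outputs.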


Edit: I took Swaroop Bhandary's source code and modified it as follows (check the Repl.it code-base):

# custom loss to take into the dependency between the 3 bits
def loss(y_true, y_pred):
    l1 = tf.nn.softmax_cross_entropy_with_logits(y_true[:, :3], y_pred[:, :3])
    l2 = tf.nn.softmax_cross_entropy_with_logits(y_true[:, 3:6], y_pred[:, 3:6])
    l3 = tf.nn.softmax_cross_entropy_with_logits(y_true[:, 6:9], y_pred[:, 6:9])
    l4 = tf.nn.softmax_cross_entropy_with_logits(y_true[:, 9:12], y_pred[:, 9:12])
    l5 = tf.nn.softmax_cross_entropy_with_logits(y_true[:, 12:], y_pred[:, 12:])
    return l1 + l2 + l3 + l4 + l5


if __name__ == "__main__":
    len_int = len(sys.argv)
    arg_str = None

    if len_int > 1:
        arg_str = sys.argv[1]
    else:
        arg_str = os.path.join(INPUT_PATH, INPUT_DATA_FILE)
    # END of if len_int > 1:

    # load training data from the disk
    train_x, train_y, train_z, validate_x,validate_y, validate_z = load_data_k(
        os.path.join(INPUT_PATH, INPUT_DATA_FILE),
        class_index=CLASS_INDEX,
        feature_start_index=FEATURE_START_INDEX,
        top_n_lines=NO_OF_INPUT_LINES,
        validation_part=VALIDATION_PART
    )

    #print(train_y)
    print("z = " + str(train_z))

    # create Stochastic Gradient Descent optimizer for the NN model
    opt_function = keras.optimizers.Adam(
        learning_rate=LEARNING_RATE
    )
    # create a sequential NN model
    model = create_model(
        LAYER_1_NEURON_COUNT,
        LAYER_2_NEURON_COUNT,
        OUTPUTS_COUNT,
        FEATURES_COUNT
    )
    #
    model.compile(optimizer=opt_function, loss=loss, metrics=['accuracy'])
    model.fit(train_x, train_z, epochs=EPOCHS,batch_size=BATCH_SIZE)

The network runs, but doesn't train.

Also, I see the following warning:

WARNING:tensorflow:AutoGraph could not transform <function loss at 0x000001F571B4F820> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: module 'gast' has no attribute 'Index'
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert


from How can convert a functional model into a sequential model?

How to make 5.1 dolby digital audio work in chromecast from exoplayer

I hope someone can help me with this problem. When I use the Shaka Player demo (https://v2-4-7-dot-shaka-player-demo.appspot.com/demo/#build=uncompiled) and set Configuration -> Preferred audio channel count to 6 channels, Dolby 5.1 plays on the Chromecast. Below is an example of my m3u8 file.

When I cast from ExoPlayer it only plays 2.0 AAC. How can I get 5.1 sound on the Chromecast when using ExoPlayer?

#EXTM3U
## Generated with https://github.com/google/shaka-packager version v2.3.0-5bf8ad5ed5-release

#EXT-X-MEDIA:TYPE=AUDIO,URI="audio-eng-2/main.m3u8",GROUP-ID="audio_aac",LANGUAGE="en",NAME="ENGLISH",AUTOSELECT=YES,CHANNELS="2"
#EXT-X-MEDIA:TYPE=AUDIO,URI="audio-spa-2/main.m3u8",GROUP-ID="audio_aac",LANGUAGE="es",NAME="SPANISH",DEFAULT=YES,AUTOSELECT=YES,CHANNELS="2"
#EXT-X-MEDIA:TYPE=AUDIO,URI="a-eng-ac3/main.m3u8",GROUP-ID="audio_ac3",LANGUAGE="en",NAME="ENGLISH-DD",AUTOSELECT=YES,CHANNELS="6"
#EXT-X-MEDIA:TYPE=AUDIO,URI="a-spa-ac3/main.m3u8",GROUP-ID="audio_ac3",LANGUAGE="es",NAME="SPANISH-DD",DEFAULT=YES,AUTOSELECT=YES,CHANNELS="6"

#EXT-X-STREAM-INF:BANDWIDTH=818330,AVERAGE-BANDWIDTH=739176,CODECS="avc1.64002a,mp4a.40.2",RESOLUTION=640x268,AUDIO="audio_aac"
h264_360p/main.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5971603,AVERAGE-BANDWIDTH=5449558,CODECS="avc1.64002a,mp4a.40.2",RESOLUTION=1920x800,AUDIO="audio_aac"
h264_1080p/main.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1226912,AVERAGE-BANDWIDTH=1106481,CODECS="avc1.64002a,mp4a.40.2",RESOLUTION=842x352,AUDIO="audio_aac"
h264_480p/main.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=3245473,AVERAGE-BANDWIDTH=2816564,CODECS="avc1.64002a,mp4a.40.2",RESOLUTION=1279x534,AUDIO="audio_aac"
h264_720p/main.m3u8

#EXT-X-STREAM-INF:BANDWIDTH=1063700,AVERAGE-BANDWIDTH=992710,CODECS="avc1.64002a,ac-3",RESOLUTION=640x268,AUDIO="audio_ac3"
h264_360p/main.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=6216973,AVERAGE-BANDWIDTH=5703092,CODECS="avc1.64002a,ac-3",RESOLUTION=1920x800,AUDIO="audio_ac3"
h264_1080p/main.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1472282,AVERAGE-BANDWIDTH=1360015,CODECS="avc1.64002a,ac-3",RESOLUTION=842x352,AUDIO="audio_ac3"
h264_480p/main.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=3490843,AVERAGE-BANDWIDTH=3070098,CODECS="avc1.64002a,ac-3",RESOLUTION=1279x534,AUDIO="audio_ac3"
h264_720p/main.m3u8

#EXT-X-I-FRAME-STREAM-INF:BANDWIDTH=96463,AVERAGE-BANDWIDTH=23629,CODECS="avc1.64002a",RESOLUTION=640x268,URI="h264_360p/iframe.m3u8"
#EXT-X-I-FRAME-STREAM-INF:BANDWIDTH=576190,AVERAGE-BANDWIDTH=130909,CODECS="avc1.64002a",RESOLUTION=1920x800,URI="h264_1080p/iframe.m3u8"
#EXT-X-I-FRAME-STREAM-INF:BANDWIDTH=150798,AVERAGE-BANDWIDTH=35289,CODECS="avc1.64002a",RESOLUTION=842x352,URI="h264_480p/iframe.m3u8"
#EXT-X-I-FRAME-STREAM-INF:BANDWIDTH=344443,AVERAGE-BANDWIDTH=72033,CODECS="avc1.64002a",RESOLUTION=1279x534,URI="h264_720p/iframe.m3u8"


from How to make 5.1 dolby digital audio work in chromecast from exoplayer

Inactive internal testing track is tagged as latest version by in-app updates. How to stop this?

I implemented in-app updates in Android, and to do so I added an internal testing track to make sure the update process worked as intended. All worked.

I set this internal release version number to be really high to distinguish it from production. For example, production version is 1.X.X, internal testing is 104.

I no longer need this internal testing track so I made it inactive and removed the testers (i.e. me).

However, the in-app update information still shows availableVersionCode=104. If I accept the update in-app, nothing actually downloads, whereas it did when the track was active.

How can I either remove this version entirely from internal testing, or stop in-app updates fetching this version?

I understand a last resort is to up production version to > 104 but I really do not want that.

Edit: It seems after a few days, the app no longer pulled in this testing track (but it did not stop immediately). I would still like to know how to delete internal releases.



from Inactive internal testing track is tagged as latest version by in-app updates. How to stop this?

Let DialogFragment in navigation not disappear

I have FragmentA, FragmentB and a DialogFragment (BottomDialogFragment). I abbreviate them as A, B and D.

D will be shown after the button in A is clicked. It means A -> D

B will be shown after the button in D is clicked. It means D -> B

I configure them in navigation.xml:

<fragment
        android:id="@+id/A"
        android:name="com.example.A">

    <action
        android:id="@+id/A_D"
        app:destination="@id/D" />
</fragment>



<dialog
        android:id="@+id/D"
        android:name="com.example.D">

    <action
        android:id="@+id/D_B"
        app:destination="@id/B" />
</dialog>


<fragment
        android:id="@+id/B"
        android:name="com.example.B">
</fragment>

Now when I click the button in A, the fragment will jump to D.

Then I click the button in D, the fragment will jump to B.

But when I pop the navigation stack in B, it goes back to A, and D is not shown.

What should I do? I want D to still be shown on top of A.



from Let DialogFragment in navigation not disappear

Calling queue variable processed outside of __init__ getting Empty values in python

I have written a Python script that receives live ticks and puts them into a queue for further processing. The queue is created in the class's __init__ method, and the received ticks are put into it from another method of the same class. The problem is that when I consume the queue from a separate function, I get a queue.Empty error, as if the consumer only sees the queue as it was defined in __init__ rather than the items put into it from the other method. Edit: if you have any suggestions to improve my question for better understanding, you are welcome.

My code:

main.py:

from stream import StreamingForexPrices as SF
from threading import Thread, Event
import time
from queue import Queue, Empty

def fetch_data(data_q):
    while True:
        time.sleep(10)
        # data_q.put("checking")
        data = data_q.get(False)
        print(data)

def start():
    events = Queue()

    fetch_thread = Thread(target=fetch_data, args=(events,))
    fetch_thread.daemon = True
    fetch_thread.start()

    prices = SF(events)
    wst = Thread(target=prices.conn)
    wst.daemon = True
    wst.start()
    while not prices.ws.sock.connected:
        time.sleep(1)
        print("checking1111111")
    while prices.ws.sock is not None:
        print("checking2222222")
        time.sleep(10)
if __name__ == "__main__":
    start()

stream.py:

from __future__ import print_function
from datetime import datetime
import json, websocket, time
from event import TickEvent

class StreamingForexPrices(object):

    def __init__(
        self, events_queue
    ):
        self.events_queue = events_queue
        # self.conn()

    def conn(self):
        self.socket = f'wss://stream.binance.com:9443/ws/btcusdt@ticker/ethbtc@ticker/bnbbtc@ticker/wavesbtc@ticker/stratbtc@ticker/ethup@ticker/yfiup@ticker/xrpup@ticker'
        websocket.enableTrace(False)
        self.ws = websocket.WebSocketApp(
            self.socket, on_message=self.on_message, on_close=self.on_close)
        self.ws.run_forever()
   
    def on_close(self, ws, message):
        print("bang")

    def on_message(self, ws, message):
        data = json.loads(message)
        timestamp = datetime.utcfromtimestamp(data['E']/1000).strftime('%Y-%m-%d %H:%M:%S')
        instrument = data['s']
        open = data['o']
        high = data['h']
        low = data['l']
        close = data['c']
        volume = data['v']
        trade = data['n']
        tev = TickEvent(instrument, timestamp, open, high, low, close, volume, trade)
        self.events_queue.put(tev)

There is also a similar question related to this issue in this link but i am not able to figure out how to resolve this issue with a queue variable.

Event.py:

class Event(object):
    pass


class TickEvent(Event):
    def __init__(self, instrument, time, open, high, low, close, volume, trade):
        self.type = 'TICK'
        self.instrument = instrument
        self.time = time
        self.open = open
        self.high = high
        self.low = low
        self.close = close
        self.high = high
        self.volume = volume
        self.trade = trade
        # print(self.type, self.instrument, self.open, self.close, self.high)

    def __str__(self):
        return "Type: %s, Instrument: %s, Time: %s, open: %s, high: %s, low: %s, close: %s, volume: %s, trade: %s" % (
            str(self.type), str(self.instrument),
            str(self.time), str(self.open), str(self.high),
            str(self.low), str(self.close), str(self.volume),
            str(self.trade)
        )

    def __repr__(self):
        return str(self)
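A hedged observation: the queue.Empty error most likely comes from data_q.get(False) in fetch_data, since a non-blocking get raises Empty whenever the consumer polls before on_message has put anything; the Queue object itself is shared correctly, because it is passed in and stored on self in __init__. Below is a minimal sketch of a consumer that waits for ticks instead of failing; the timeout value is just an example.

from queue import Empty

# Sketch only: block for the next tick instead of polling with get(False).
# `data_q` is the same Queue handed to StreamingForexPrices, so items put()
# by on_message become visible here.
def fetch_data(data_q):
    while True:
        try:
            tick = data_q.get(timeout=10)  # wait up to 10 s for the next tick
        except Empty:
            print("no tick received in the last 10 seconds")
            continue
        print(tick)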


from Calling queue variable processed outside of __init__ getting Empty values in python

django channels asyncio trouble running task in order

I'm struggling to get some code to work using asyncio. I'm very new to it; so new, in fact, that I don't know how to properly figure out what I'm doing wrong.

I am using Django Channels to run an AsyncJsonWebsocketConsumer which I connect to via websockets from the client application. I used websockets because I need bidirectional communication. I'm creating a printing process where I start a series of long-running actions, but I need the ability to pause, stop, etc. I had this all working when I was using asyncio.sleep(x) to mock my long-running task (the print_layer method). When I tried adding my RPC to a queue in place of the asyncio.sleep, it stopped working as expected.

class Command(Enum):
    START = 'start_print'
    PAUSE = 'pause_print'
    STOP = 'stop_print'
    CANCEL = 'cancel_print'
    RESET = 'reset_print'

class State(Enum):
    NOT_STARTED = 0
    STARTING = 1
    RUNNING = 2
    PAUSED = 3
    STOPPED = 4
    COMPLETE = 5
    ERROR = 500

class PrintConsumer(AsyncJsonWebsocketConsumer):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.status = State.NOT_STARTED
        self.publisher = Publisher()
        self.print_instruction = None
        self.current_task = None
        self.loop = asyncio.get_event_loop()
        self.loop.set_debug(enabled=True)

    @property
    def running_task(self):
        return self.current_task is not None and not self.current_task.done() and not self.current_task.cancelled()

    async def connect(self):
        await self.accept()

    async def disconnect(self, close_code):
        self.status = State.STOPPED

    async def receive(self, text_data):
        response = json.loads(text_data)
        event_cmd = response.get('event', None)
        
        if Command(event_cmd) == Command.START:
            if self.status == State.NOT_STARTED:
                sting_uuid = response.get('sting_uuid', None)
                try:
                    await asyncio.wait_for(self.initialize_print(sting_uuid), timeout=10)  # WARNING: this is blocking
                except asyncio.TimeoutError:
                    self.status = State.ERROR
            self.status = State.RUNNING
            if not self.running_task:
                # it would only have a running task in this situation already if someone starts/stops it quickly
                self.current_task = asyncio.create_task(self.resume_print())
        elif Command(event_cmd) == Command.PAUSE:
            self.status = State.PAUSED
        elif Command(event_cmd) == Command.STOP:
            if self.running_task:
                self.current_task.cancel()
        elif Command(event_cmd) == Command.RESET:
            self.status = State.NOT_STARTED
            self.print_instruction = None
            if self.running_task:
                self.current_task.cancel()
            self.current_task = None
            await self.send(json.dumps({ 'message': []}))
        
    async def initialize_print(self, uuid):
        stingfile = await get_file(uuid)
        # This is just an iterator that returns the next task to do
        # hence I use "next" in the resume_print method
        self.print_instruction = StingInstruction(stingfile)


    async def resume_print(self):
        try:
            while self.status == State.RUNNING:
                await self.send(json.dumps({ 'message': self.print_instruction.serialized_action_status}))
                await asyncio.sleep(.2) # It works with this here only
                try:
                    action = next(self.print_instruction)  # step through iterator
                except StopIteration:
                    self.status = State.COMPLETE
                    break;

                # can't seem to get this part to work
                await self.print_layer(action)
        except asyncio.CancelledError:
            # TODO: Add logic here that will send a command out to stop pumps and all motors.
            self.status = State.STOPPED
    
    async def print_layer(self, instruction):
        print_command = instruction['instruction']
        # this publishes using RPC, so it holds until I get a response.
        try:
            await asyncio.wait_for(self.publisher.publish('pumps', json.dumps(print_command)), timeout=10)
        except asyncio.TimeoutError:
            self.status = State.ERROR
        # when I used just this in the function, the code worked as expected
        # await asyncio.sleep(1)


I don't know where to begin when showing what I've tried... My "best" attempt, as I see it, was to turn the print_layer method into a thread so that it did not block execution, using asyncio.to_thread(print_layer), but in many of the things I tried it would not even execute.

The self.print_instruction.serialized_action_status returns the status of each step. My goal is to have it sending this before each long running task. This might look like...

# sending status update for each step to client
# running print_layer for first action
# sending status update for each step to client
# running print_layer for second action
...
# sending final update

Instead, every single task is created at once, and all of the updates are sent at the end once I add the long-running task, among other issues. I can get the long-running tasks to run in order (seemingly), but the send won't actually go out between layer prints. I'd really appreciate some help; thank you in advance.

Here is some simplified relevant code (doesn't handle connection loss, etc) for my publisher...

class Publisher():
    def on_response(self, ch, method, props, body):
        """when job response set job to inactive"""

    async def publish(self, routing_key, msg):
        new_corr_id = str(uuid4())
        self.active_jobs[new_corr_id] = False
        self.channel.basic_publish(...)
        while not self.active_jobs[new_corr_id]:
            self._connection.process_data_events()
            sleep(.1)

I found a partially working hack: if I add await asyncio.sleep(.2) after my send command (i.e. like this)

await self.send(json.dumps({ 'message': self.print_instruction.serialized_action_status}))
await asyncio.sleep(.2)

then it appears to work how I want it to (minus the ability to interrupt), and I'm still able to pause/start my process. Obviously I'd rather do this without a hack. Why does this code all of a sudden work, with the status updates sent out as expected, after the .2 asyncio sleep and not without it? I also cannot interrupt with a STOP command, which I don't understand. I would have expected the Django channel to read the stop command and then cancel the running task, forcing the asyncio.CancelledError in the resume_print method.
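A possible explanation, offered tentatively: Publisher.publish blocks inside the coroutine (process_data_events and sleep are synchronous), so while a layer is being published the event loop never gets control back. Queued self.send frames are only flushed, and incoming STOP messages only delivered to receive, once the loop runs again, which is exactly what the extra await asyncio.sleep(.2) allows. One way around this, sketched below under the assumption that the publish can be expressed as a plain blocking function, is to hand it to a worker thread with asyncio.to_thread (Python 3.9+; loop.run_in_executor is the older equivalent) so the loop stays responsive.

import asyncio
import time

# Sketch only: the blocking call runs on a worker thread while the event loop
# keeps delivering websocket frames (send, receive, STOP, ...).
def blocking_publish(topic, payload):
    # stands in for the real RPC publish that polls process_data_events()
    time.sleep(1)
    return f"published {payload!r} to {topic}"

async def print_layer(publish_fn, instruction):
    try:
        result = await asyncio.wait_for(
            asyncio.to_thread(publish_fn, "pumps", instruction), timeout=10
        )
        print(result)
    except asyncio.TimeoutError:
        print("publish timed out")

asyncio.run(print_layer(blocking_publish, {"instruction": "layer-1"}))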



from django channels asyncio trouble running task in order

How can I change the whiteBalance gains value in android?

I would like to get the value of the current gains and change the value of the RGB gains.

In iOS, Apple provides setWhiteBalanceModeLockedWithDeviceWhiteBalanceGains:completionHandler.

- (void)setWhiteBalanceGains:(AVCaptureWhiteBalanceGains)gains
{
  NSError *error = nil;
  
  if ( [self.captureDevice lockForConfiguration:&error] ) {
    AVCaptureWhiteBalanceGains normalizedGains = [self normalizedGains:gains];
    [self.captureDevice setWhiteBalanceModeLockedWithDeviceWhiteBalanceGains:normalizedGains completionHandler:nil];
    [self.captureDevice unlockForConfiguration];
  }
  else {
    NSLog( @"Could not lock device for configuration: %@", error );
  }
}

- (AVCaptureWhiteBalanceGains)normalizedGains:(AVCaptureWhiteBalanceGains) g
{
  AVCaptureWhiteBalanceGains gains = g;
  gains.redGain = MAX(gains.redGain, 1.0f);
  gains.greenGain = MAX(gains.greenGain, 3.0f);
  gains.blueGain = MAX(gains.blueGain, 18.0f);
  
  return gains;
}

How can we achieve this in Android using CameraX?

COLOR_CORRECTION_GAINS

COLOR_CORRECTION_MODE

I have checked the documentation regarding these controls. But how can we change the color correction and reset the CameraX preview with the new control values?



from How can I change the whiteBalance gains value in android?

Calculate Z Rotation of a face in Tensorflow.js

Note: This question has NOTHING to do with Three.js, it's only Tensorflow.js and Trigonometry.

I am trying to rotate a 3D object in Three.js by rotating my face. I have used this code by akhirai560 for rotation about the X and Y axes:

function normal(vec) {
  let norm = 0;
  for (const v of vec) {
    norm += v * v;
  }
  return Math.sqrt(norm);
}

function getHeadAnglesCos(keypoints) {
  // Vertical (Y-Axis) Rotation
  const faceVerticalCentralPoint = [
    0,
    (keypoints[10][1] + keypoints[152][1]) * 0.5,
    (keypoints[10][2] + keypoints[152][2]) * 0.5,
  ];
  const verticalAdjacent = keypoints[10][2] - faceVerticalCentralPoint[2];
  const verticalOpposite = keypoints[10][1] - faceVerticalCentralPoint[1];
  const verticalHypotenuse = normal([verticalAdjacent, verticalOpposite]);
  const verticalCos = verticalAdjacent / verticalHypotenuse;

  // Horizontal (X-Axis) Rotation
  const faceHorizontalCentralPoint = [
    (keypoints[226][0] + keypoints[446][0]) * 0.5,
    0,
    (keypoints[226][2] + keypoints[446][2]) * 0.5,
  ];
  const horizontalAdjacent = keypoints[226][2] - faceHorizontalCentralPoint[2];
  const horizontalOpposite = keypoints[226][0] - faceHorizontalCentralPoint[0];
  const horizontalHypotenuse = normal([horizontalAdjacent, horizontalOpposite]);
  const horizontalCos = horizontalAdjacent / horizontalHypotenuse;

  return [horizontalCos, verticalCos];
}

It calculates the rotation by finding the cos of these points (original image source):

Vertical and Horizontal Landmarks

I also want to calculate the cos of the Z-axis rotation. Thanks!



from Calculate Z Rotation of a face in Tensorflow.js

Trouble with minimal hvp on pytorch model

While autograd's hvp tool seems to work very well for functions, once a model becomes involved, Hessian-vector products seem to go to 0. Here is some code.

First, I define the world's simplest model:

class SimpleMLP(nn.Module):
  def __init__(self, in_dim, out_dim):
      super().__init__()
      self.layers = nn.Sequential(
        nn.Linear(in_dim, out_dim),
      )
      
  def forward(self, x):
    '''Forward pass'''
    return self.layers(x)

Then, a loss function:

def objective(x):
  return torch.sum(0.25 * torch.sum(x)**4)

We instantiate it:

Arows = 2
Acols = 2

mlp = SimpleMLP(Arows, Acols)

Finally, I'm going to define a "forward" function (distinct from the model's forward function) that will serve as the full model+loss that we want to analyze:

def forward(*params_list):
  for param_val, model_param in zip(params_list, mlp.parameters()):
    model_param.data = param_val
 
  x = torch.ones((Arows,))
  return objective(mlp(x))

This passes a ones vector into the single-layer "mlp" and feeds the result into our objective.

Now, I attempt to compute:

v = torch.ones((6,))
v_tensors = []
idx = 0
#this code "reshapes" the v vector as needed
for i, param in enumerate(mlp.parameters()):
  numel = param.numel()
  v_tensors.append(torch.reshape(torch.tensor(v[idx:idx+numel]), param.shape))
  idx += numel

And finally:

param_tensors = tuple(mlp.parameters())
reshaped_v = tuple(v_tensors)
soln =  torch.autograd.functional.hvp(forward, param_tensors, v=reshaped_v)

But, alas, the Hessian-Vector Product in soln is all 0's. What is happening?
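One thing that stands out, offered as a guess rather than a certainty: the assignment model_param.data = param_val inside forward writes the values in-place through .data, which bypasses autograd, so the params_list tensors that hvp differentiates with respect to never actually appear in the computation graph, and the product collapses to zero. As a sanity check, hvp behaves as expected when the differentiated inputs are used directly; the toy example below (its function and shapes are invented for illustration) is generally non-zero.

import torch

# Sanity-check sketch: the inputs participate in the graph directly,
# so the Hessian-vector product is generally non-zero.
def pure_forward(w):
    x = torch.ones(2)
    return torch.sum(0.25 * torch.sum(x @ w) ** 4)

w0 = torch.randn(2, 2)
v = torch.ones(2, 2)
_, hvp_result = torch.autograd.functional.hvp(pure_forward, w0, v=v)
print(hvp_result)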



from Trouble with minimal hvp on pytorch model

Is there a way for me to see how much volume an application is outputting?

I'm trying to make a computer application which will look at the volume that Spotify and Discord are outputting and balance them accordingly, so that I can hear my friends while they're talking and my music is louder when they're not.

This is for a Windows 10 computer. I've used pycaw to get the master volume as well as modify it; however, I could not find an option to get the volume currently being output.

from __future__ import print_function
from pycaw.pycaw import AudioUtilities, ISimpleAudioVolume, IAudioEndpointVolume, IAudioEndpointVolumeCallback


def main():
    sessions = AudioUtilities.GetAllSessions()
    for session in sessions:
        volume = session._ctl.QueryInterface(ISimpleAudioVolume)
        if session.Process and session.Process.name() == "Discord.exe":
            print("volume.GetMasterVolume(): %s" % volume.GetMasterVolume())


if __name__ == "__main__":
    main()

By doing this, I can get the maximum volume for Discord (for example 1.0). However, I want to get the level of audio Discord is currently outputting (for example 0.3). What would I need to replace

volume.GetMasterVolume()

to achieve this? Thanks.
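One possible direction, stated as an assumption rather than a confirmed answer: the Windows Core Audio API has a per-session peak meter, IAudioMeterInformation, whose GetPeakValue() reports the level currently being output (0.0 to 1.0) rather than the session's volume setting. Assuming the installed pycaw version exposes that interface, a sketch might look like this:

from pycaw.pycaw import AudioUtilities, IAudioMeterInformation

# Sketch only, assuming this pycaw build exposes IAudioMeterInformation.
def current_discord_level():
    for session in AudioUtilities.GetAllSessions():
        if session.Process and session.Process.name() == "Discord.exe":
            meter = session._ctl.QueryInterface(IAudioMeterInformation)
            return meter.GetPeakValue()  # instantaneous peak, not the setting
    return None

print(current_discord_level())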



from Is there a way for me to see how much volume an application is outputting?

Joblib and other parallel tasks within Airflow

I've used Joblib and Airflow in the past and haven't run into this issue. I'm trying to run a job through Airflow that runs a parallel computation using Joblib. When the Airflow job starts up I see the following warning

UserWarning: Loky-backed parallel loops cannot be called in a multiprocessing, setting n_jobs=1

Tracing the warning back to source I see the following function triggering in the joblib package in the LokyBackend class (similar logic is also in the MultiprocessingBackend class)

def effective_n_jobs(self, n_jobs):
    """Determine the number of jobs which are going to run in parallel"""
    if n_jobs == 0:
        raise ValueError('n_jobs == 0 in Parallel has no meaning')
    elif mp is None or n_jobs is None:
        # multiprocessing is not available or disabled, fallback
        # to sequential mode
        return 1
    elif mp.current_process().daemon:
        # Daemonic processes cannot have children
        if n_jobs != 1:
            warnings.warn(
                'Loky-backed parallel loops cannot be called in a'
                ' multiprocessing, setting n_jobs=1',
                stacklevel=3)
        return 1

The issue is that I've run a similar function with Joblib and Airflow before and didn't trigger this condition that sets n_jobs equal to 1. I'm wondering if this is some type of versioning issue (I'm using Airflow 2.X and Joblib 1.X) or if there are settings in Airflow that can fix this. I looked at older versions of Joblib and even downgraded to Joblib 0.4.0, but that didn't solve anything. I'm more hesitant to downgrade Airflow because of differences in the API, database connections, etc.


Edit:

Here is the code I've been running in Airflow:

def test_parallel():
    out=joblib.Parallel(n_jobs=-1, backend="loky")(joblib.delayed(lambda a: a+1)(i) for i in range(20))

with DAG("test", default_args=DEFAULT_ARGS, schedule_interval="0 8 * * *",) as test:
    run_test = PythonOperator(
        task_id="test",
        python_callable=test_parallel,
    )

    run_test

And the output in the airflow logs:

[2021-07-27 10:41:29,890] {logging_mixin.py:104} WARNING - /data01/code/virtualenv/alpha/lib/python3.8/site-packages/joblib/parallel.py:733 UserWarning: Loky-backed parallel loops cannot be called in a multiprocessing, setting n_jobs=1

I launch airflow scheduler and airflow webserver via supervisor, but even if I launch both Airflow processes from the command line the issue still persists. It doesn't happen, however, when I just run the task via the Airflow task API, e.g. airflow tasks test run_test.
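For context, and hedged accordingly: the warning fires because the Airflow worker runs the callable inside a daemonic process (mp.current_process().daemon is True there), and daemonic processes may not spawn children, so the loky and multiprocessing backends fall back to n_jobs=1. One commonly suggested sidestep, if the workload tolerates threads, is joblib's threading backend, which the daemon check does not affect; this is a sketch, not a claim about what is appropriate for the real task.

import joblib

# Sketch only: thread-based parallelism is allowed inside a daemonic worker,
# at the cost of the GIL (fine for I/O-bound or GIL-releasing numeric work).
def test_parallel():
    return joblib.Parallel(n_jobs=-1, backend="threading")(
        joblib.delayed(lambda a: a + 1)(i) for i in range(20)
    )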



from Joblib and other parallel tasks within Airflow