Tuesday, 24 January 2023

Can a non-privileged Linux native executable in Android communicate with a regular application using Binder?

In order to test and control my regular Android application, I wrote a command-line Linux test program and used adb shell to execute it.

I can indirectly send a broadcast to, or start an activity in, my Android application by executing commands such as am via exec(), but I can't establish a direct Binder connection to my application the way getService()/startActivityForResult()/bindService() would.

My Linux executable is also not a privileged program, so I should not be able to use ServiceManager to publish my services directly in the system.

Is there any way for me to establish a Binder connection with a regular application?



from Can a non-privileged Linux native executable in Android communicate with a regular application using Binder?

Object not getting updated when values are 0 in bulk operation

I'm parsing a CSV file and updating a MongoDB database based on its values. I just noticed that if a subelement's value is zero or null, it does not get updated. How can I solve this?

MongoDB values pre-execution:

{
...
  "AC": {
    "AC1": 3100,
    "AC2": 3100,
    "AC3": 5000,
    "AC4": 3100,
    "AC5": 3100,
    "AC6": 5000
  }
...
}

Now the Excel file has been updated to the following values, so I try to update them in the MongoDB database.

    "AC1": 3100,
    "AC2": 3100,
    "AC3": 5000,
    "AC4": 0,
    "AC5": 0,
    "AC6": 0

But, after the bulk operation, I get nModified: 0 and no changes.

If the values are different from 0 (e.g. 1), the update works and the values are applied successfully.

The code is the following:

    // ... (Adding other subobjects to set)
    // Subobject AC

    if (element.acs) {
        let i = 1;
        for (let ac in element.acs) {
            console.log("AC.AC" + i, element.acs[ac]);
            if (element.acs[ac]) set["AC.AC" + i] = element.acs[ac];
            i++;
        }
    }

    // Results of console.log look good:
    // AC.AC1 3100
    // AC.AC2 3100
    // AC.AC3 5000
    // AC.AC4 0
    // AC.AC5 0
    // AC.AC6 0

    bulk
        .find({ id: element.id })
        .upsert()
        .update({ $set: set });

    // After this, no update is done. Values in the database for that object:
    // AC.AC1 3100
    // AC.AC2 3100
    // AC.AC3 5000
    // AC.AC4 3100
    // AC.AC5 3100
    // AC.AC6 5000
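
The zeros never reaching the database is consistent with the truthy check in the loop: in JavaScript, 0 is falsy, so `if (element.acs[ac])` silently drops every zero value and those fields never make it into `$set`. A minimal sketch of the difference, using plain objects with made-up values (no MongoDB required), comparing the truthy check against an explicit null/undefined check:

```javascript
const acs = { a: 3100, b: 0, c: 0 };

// Truthy check, as in the question: 0 is falsy, so zero values are skipped.
const setTruthy = {};
let i = 1;
for (const ac in acs) {
    if (acs[ac]) setTruthy["AC.AC" + i] = acs[ac];
    i++;
}

// Explicit check: only null and undefined are skipped, so zeros survive.
const setChecked = {};
i = 1;
for (const ac in acs) {
    if (acs[ac] !== undefined && acs[ac] !== null) {
        setChecked["AC.AC" + i] = acs[ac];
    }
    i++;
}

console.log(Object.keys(setTruthy));  // [ 'AC.AC1' ] (zeros dropped)
console.log(Object.keys(setChecked)); // [ 'AC.AC1', 'AC.AC2', 'AC.AC3' ] (zeros kept)
```

With such a check in place, the zero values would be included in the `$set` document and the bulk update would report them as modified.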


from Object not getting updated when values are 0 in bulk operation

parse xlsx file having merged cells using python or pyspark

I want to parse an xlsx file. Some of the cells in the file are merged and act as headers for the values underneath them, but I don't know which approach I should take to parse the file.

  1. Shall I convert the file from xlsx to JSON format and then perform the pivoting or transformation of the dataset? OR
  2. Shall I proceed with the xlsx format directly and read specific cell values? I believe this approach will not make the code scalable or dynamic.

I tried to parse the file and convert it to JSON, but it did not load all the records. Unfortunately, it does not throw any exception.


from json import dumps
from xlrd import open_workbook

# load excel file
wb = open_workbook('/dbfs/FileStore/tables/filename.xlsx')

# get sheet by using sheet name
sheet = wb.sheet_by_name('Input Format')

# get total rows
total_rows = sheet.nrows

# get total columns
total_columns = sheet.ncols

# convert each data row of the sheet into a dictionary and append it to a list
lst = []
for i in range(1, total_rows):  # row 0 holds the headers
    row = {}
    for j in range(total_columns):
        # header cell from row 0 (empty for all but the first cell of a merged range)
        column_name = sheet.cell(rowx=0, colx=j)
        row_data = sheet.cell_value(rowx=i, colx=j)
        row[column_name.value] = row_data

    if len(row):
        lst.append(row)


# convert into json
json_data = dumps(lst)
print(json_data)

After executing the above code, I received the following output:

  {
    "Analysis": "M000000000000002001900000000000001562761",
    "KPI": "FELIX PARTY.MIX",
    "": 2.9969042460942
  },
  {
    "Analysis": "M000000000000002001900000000000001562761",
    "KPI": "FRISKIES ESTERILIZADOS",
    "": 2.0046260994622
  },

Once the data is in good shape, Spark on Databricks will be used for the transformation.
I have tried multiple approaches but failed :( Hence I am seeking help from the community.
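
The empty `""` keys in the JSON above come from the merged header cells: a reader like xlrd only sees a value in the first cell of a merged range and empty strings in the rest. One common workaround, sketched here with made-up header names rather than the actual file, is to forward-fill the header row before building the row dictionaries:

```python
def ffill_headers(headers):
    """Replace empty header cells with the last non-empty label to their left."""
    filled, last = [], ""
    for h in headers:
        if h:            # a non-empty cell starts a new merged range
            last = h
        filled.append(last)
    return filled

# Header row as the question's code would see it (names are illustrative):
raw_headers = ["Analysis", "KPI", "", "", "Value"]
print(ffill_headers(raw_headers))
# -> ['Analysis', 'KPI', 'KPI', 'KPI', 'Value']
```

In the code above, this would mean reading row 0 once, forward-filling it, and indexing into the filled list instead of calling `sheet.cell(rowx=0, colx=j)` per cell, so every column gets the label of the merge it sits under.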

For more clarity on the question I have added sample input/output screenshots.

Input dataset: (screenshot)

Expected Output1: (screenshot)

You can download the actual dataset and expected output from the following link: Dataset



from parse xlsx file having merged cells using python or pyspark