Thursday, 31 October 2019

How to profile React Native source code using Xcode/Instruments/Time Profiler

We're using React Native 0.59.10 and React-Redux 5.0.7, and are experiencing a CPU-bound performance issue, in which our Redux actions are taking ~0.25s to complete.

We've profiled using the Time Profiler configuration in Instruments, but none of our JS code is symbolicated.

Remotely debugging in Chrome seems to just debug the "remote inspector" page, which is entirely unhelpful.

Is there a way to build/attach a source map, or symbolicate the memory addresses seen below, to the JS function names/calls?

Instruments Callstack



from How to profile React Native source code using Xcode/Instruments/Time Profiler

Click event not firing on touchscreen when finger moves a bit

The click event works fine when using a mouse on a computer: even when I press the mouse button down on the button, move the cursor, and then release inside the button area, the click event fires. But the same thing with a touchscreen does not work. I know the reason is that on a touchscreen that kind of dragging is treated as scrolling. The click event fires when I don't move my finger too much on the button, so only down and up without moving. My client's problem is that they move their finger too much, and it is too hard to get the click event. Is it possible to set a bigger threshold for how much a finger can move while still being considered a click and not a scroll?

I found this article where touch events are handled manually and translated into click events: http://phonegap-tips.com/articles/fast-touch-event-handling-eliminate-click-delay.html. I would not like to go down this road.

Do you have any suggestions for how I can solve this?

Here is more detail about touch events: https://developer.mozilla.org/en-US/docs/Web/API/Touch_events. Under "Handling clicks" it describes how clicks work on touchscreens. Still, I haven't managed to make it work. A few months ago I put evt.preventDefault(); in my touchmove event handler and it fixed the problem, but currently it seems not to.
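One way to implement a bigger threshold manually is to measure the distance between touchstart and touchend and trigger the click yourself when the movement stays under a limit. This is only a sketch: the threshold value, the element id, and the handler wiring are hypothetical, not from the question.

```javascript
// Custom tap detector with a configurable movement threshold (hypothetical
// value). A gesture counts as a "click" if the finger moved at most this far.
const TAP_THRESHOLD_PX = 30;

function isTap(start, end, threshold = TAP_THRESHOLD_PX) {
  const dx = end.x - start.x;
  const dy = end.y - start.y;
  return Math.hypot(dx, dy) <= threshold;
}

// Wiring (browser only; 'myButton' is a hypothetical element id):
if (typeof document !== 'undefined') {
  const button = document.getElementById('myButton');
  let start = null;
  button.addEventListener('touchstart', (e) => {
    const t = e.changedTouches[0];
    start = { x: t.clientX, y: t.clientY };
  });
  button.addEventListener('touchend', (e) => {
    const t = e.changedTouches[0];
    if (start && isTap(start, { x: t.clientX, y: t.clientY })) {
      button.click(); // fire the click handler manually
    }
    start = null;
  });
}
```

Calling preventDefault() in touchmove (as tried before) can be combined with this, so the scroll never starts while the finger stays within the threshold.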



from Click event not firing on touchscreen when finger moves a bit

Pylint failing to load plugin on a mercurial precommit hook

I am trying to create a Mercurial pre-commit hook that runs pylint before each commit. My project uses a virtual environment.

I have the hook set up to call pylint on the changed files but I get the error:

Traceback (most recent call last):
  File "/home/barmstrong/.virtualenvs/amp/bin/pylint", line 10, in <module>
    sys.exit(run_pylint())
  File "/home/barmstrong/.virtualenvs/amp/lib/python3.6/site-packages/pylint/__init__.py", line 20, in run_pylint
    Run(sys.argv[1:])
  File "/home/barmstrong/.virtualenvs/amp/lib/python3.6/site-packages/pylint/lint.py", line 1583, in __init__
    linter.load_plugin_modules(plugins)
  File "/home/barmstrong/.virtualenvs/amp/lib/python3.6/site-packages/pylint/lint.py", line 636, in load_plugin_modules
    module = modutils.load_module_from_name(modname)
  File "/home/barmstrong/.virtualenvs/amp/lib/python3.6/site-packages/astroid/modutils.py", line 202, in load_module_from_name
    return load_module_from_modpath(dotted_name.split("."), path, use_sys)
  File "/home/barmstrong/.virtualenvs/amp/lib/python3.6/site-packages/astroid/modutils.py", line 244, in load_module_from_modpath
    mp_file, mp_filename, mp_desc = imp.find_module(part, path)
  File "/usr/lib/python3.6/imp.py", line 297, in find_module
    raise ImportError(_ERR_MSG.format(name), name=name)
ImportError: No module named 'common'

I believe this is due to a custom plugin referenced in the .pylintrc file, which pylint tries to load from my project directory at:

'/common/blah/file.py'

And in the .pylintrc it is referenced by:

common.blah.file

I tried to add this to the PYTHONPATH by running:

sys.path.append('path/common')

But the error persists. How do I solve this so it can load my plugin? (I have also tried variations of adding the common module to the PYTHONPATH with no success).

EDIT: If I remove the common.blah.file plugin from my .pylintrc it works, so I need to figure out how I can import it. I have tried adding 'common' to the PYTHONPATH, but it does not seem to work.
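For illustration, here is a minimal, self-contained demonstration (using throwaway files in a temp directory) of the import rule pylint's plugin loader follows: for common.blah.file to resolve, sys.path must contain the directory that contains common, not path/common itself.

```python
import os
import sys
import tempfile

# Build a throwaway project layout: <project>/common/blah/file.py
project = tempfile.mkdtemp()
os.makedirs(os.path.join(project, "common", "blah"))
open(os.path.join(project, "common", "__init__.py"), "w").close()
open(os.path.join(project, "common", "blah", "__init__.py"), "w").close()
open(os.path.join(project, "common", "blah", "file.py"), "w").close()

# Appending the PARENT of `common` makes `common.blah.file` importable,
# which is what pylint needs in order to load the plugin named in .pylintrc.
sys.path.append(project)
import common.blah.file
```

If that is the issue, pylint's init-hook option in .pylintrc (e.g. `init-hook='import sys; sys.path.append("/path/to/project")'`) can apply the same fix without touching the hook's environment.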



from Pylint failing to load plugin on a mercurial precommit hook

Optimization in bitmap creation

I'm writing an application that renders a sequence of pictures, received in real time over a TCP connection, into an ImageView element. The stream is composed of single frames encoded in PGM format and sent at 9Hz. I thought that a very low frame rate like this should be easy to handle using a background Service that sends fully decoded bitmaps to my MainActivity.

Here's my VideoService (I'm posting just the run() method, since I think it's the only one of interest):

    public void run() {
        InetAddress serverAddr = null;

        try {
            serverAddr = InetAddress.getByName(VIDEO_SERVER_ADDR);
        } catch (UnknownHostException e) {
            Log.e(getClass().getName(), e.getMessage());
            e.printStackTrace();
            return;
        }

        Socket socket = null;
        BufferedReader reader = null;

        do {
            try {
                socket = new Socket(serverAddr, VIDEO_SERVER_PORT);

                reader = new BufferedReader(new InputStreamReader(socket.getInputStream()));

                boolean frameStart = false;

                LinkedList<String> frameList = new LinkedList<>();

                while (keepRunning) {
                    final String message = reader.readLine();

                    if (!frameStart && message.startsWith("F"))
                        frameStart = true;
                    else if (frameStart && message.startsWith("EF")) {
                        frameStart = false;

                        final Bitmap bitmap = Bitmap.createBitmap(IR_FRAME_WIDTH, IR_FRAME_HEIGHT, Bitmap.Config.ARGB_8888);
                        final Canvas canvas = new Canvas(bitmap);

                        final String[] data = frameList.toArray(new String[frameList.size()]);

                        canvas.drawBitmap(bitmap, 0, 0, null);

                        //Log.d(this.getClass().getName(), "IR FRAME COLLECTED");

                        if ((data.length - 6) == IR_FRAME_HEIGHT) {
                            float grayScaleRatio = Float.parseFloat(data[2].trim()) / 255.0f;

                            for (int y = 0; y < IR_FRAME_HEIGHT; y++) {
                                final String line = data[y + 3];
                                final String[] points = line.split("\\s+");

                                if (points.length == IR_FRAME_WIDTH) {
                                    for (int x = 0; x < IR_FRAME_WIDTH; x++) {
                                        final float grayLevel = Float.parseFloat(points[x]) / grayScaleRatio;

                                        Paint paint = new Paint();

                                        paint.setStyle(Paint.Style.FILL);

                                        final int level = (int)grayLevel;

                                        paint.setColor(Color.rgb(level, level, level));

                                        canvas.drawPoint(x, y, paint);
                                    }
                                } else
                                    Log.d(this.getClass().getName(), "Malformed line");
                            }

                            final Intent messageIntent = new Intent();

                            messageIntent.setAction(VIDEO_BROADCAST_KEY);

                            ByteArrayOutputStream stream = new ByteArrayOutputStream();

                            bitmap.compress(Bitmap.CompressFormat.PNG, 100, stream);
                            bitmap.recycle();
                            messageIntent.putExtra(VIDEO_MESSAGE_KEY, stream.toByteArray());
                            stream.close();
                            sendBroadcast(messageIntent);
                        } else
                            Log.d(this.getClass().getName(), "Malformed data");

                        frameList.clear();
                    } else if (frameStart)
                        frameList.add(message);
                }

                Thread.sleep(VIDEO_SERVER_RESPAWN);

            } catch (Throwable e) {
                Log.e(getClass().getName(), e.getMessage());
                e.printStackTrace();
            }
        } while (keepRunning);

        if (socket != null) {
            try {
                socket.close();
            } catch (Throwable e) {
                Log.e(getClass().getName(), e.getMessage());
                e.printStackTrace();
            }
        }
    }

Each message is one line of the following frame text:

F
P2
160 120
1226
193 141 158 152 193 186 171 177 186 160 195 182 ... (160 times)
                         .
                         . (120 lines)
                         .
278 248 253 261 257 284 310 304 304 272 227 208 ... (160 times)


EF

In MainActivity I handle this through this code:

class VideoReceiver extends BroadcastReceiver {
    final public Queue<Bitmap> imagesQueue = new LinkedList<>();

    @Override
    public void onReceive(Context context, Intent intent) {

        try {
            //Log.d(getClass().getName(), "onReceive() called");

            final byte[] data = intent.getByteArrayExtra(VideoService.VIDEO_MESSAGE_KEY);

            final Bitmap bitmap = BitmapFactory.decodeByteArray(data,0,data.length);

            imagesQueue.add(bitmap);

            runOnUiThread(updateVideoTask);
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }
}

The updateVideoTask runnable is defined like this:

    updateVideoTask = new Runnable() {
        public void run() {
            if (videoReceiver == null) return;

            if (!videoReceiver.imagesQueue.isEmpty())
            {
                final Bitmap image = videoReceiver.imagesQueue.poll();

                if (image == null) return;

                videoView.setImageBitmap(image);

                Log.d(this.getClass().getName(), "Images to spool: " + videoReceiver.imagesQueue.size());
            }
        }
    };

Unluckily, when I run the application I notice a very low frame rate and a very big delay, and I cannot figure out what's going on. The only hints I got from logcat are these lines:

2019-05-20 16:37:08.817 29566-29580/it.tux.gcs I/art: Background sticky concurrent mark sweep GC freed 88152(3MB) AllocSpace objects, 3(52KB) LOS objects, 22% free, 7MB/10MB, paused 3.937ms total 111.782ms
2019-05-20 16:37:08.832 29566-29587/it.tux.gcs D/skia: Encode PNG Singlethread :      13003 us, width=160, height=120

Even with the sum of all these delays (~140 ms), the app should sustain a frame rate of more than 5Hz, while I am getting 0.25Hz or even worse.

After some investigation I found that moving:

Paint paint = new Paint();
paint.setStyle(Paint.Style.FILL);

out of the nested loops prevent GC from being invoked so frequently and I found another major source of delay in this line:

final String[] points = line.split("\\s+");

It burns about 2ms per call, so I decided to go for something less smart but faster:

final String[] points = line.split(" ");

Anyway, it's still not enough. The code between:

canvas.drawBitmap(bitmap, 0, 0, null);

and

sendBroadcast(messageIntent);

still consumes more than 200ms. How can I do better than this?
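One direction (a sketch, not the full frame pipeline): instead of one Canvas.drawPoint() and one Paint per pixel, pack each parsed row into an int[] of ARGB values and push the whole frame with a single Bitmap.setPixels() call. The pure-Java part of that idea looks roughly like this:

```java
class GrayPixels {
    // Pack one row of parsed gray levels (0-255) into ARGB_8888 pixel ints.
    // On Android, the int[] for a whole frame would be written once via
    // bitmap.setPixels(pixels, 0, width, 0, 0, width, height) instead of
    // calling canvas.drawPoint(...) once per pixel.
    static int[] toArgbRow(float[] grayLevels) {
        int[] pixels = new int[grayLevels.length];
        for (int x = 0; x < grayLevels.length; x++) {
            int level = Math.min(255, Math.max(0, (int) grayLevels[x]));
            pixels[x] = 0xFF000000 | (level << 16) | (level << 8) | level;
        }
        return pixels;
    }
}
```

Also, the skia log line shows ~13ms per frame spent PNG-encoding; passing the raw int[] (or the bitmap itself) to the activity instead of compressing every frame would remove that cost entirely.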



from Optimization in bitmap creation

How to properly use smbj to connect and list files on a samba share in Android java?

When I connect with smbj, the log shows:

...
I/c.h.s.c.Connection: Successfully authenticated user on 192.168.1.222, session is 4399187361905
I/c.h.s.s.Session: Logging off session 4399187361905 from host 192.168.1.222
I/c.h.s.t.PacketReader: Thread[Packet Reader for 192.168.1.222,5,main] stopped.
I/c.h.s.c.Connection: Closed connection to 192.168.1.222
I/c.h.s.s.Session: Connecting to \\192.168.1.222\pop on session 4399187361905

Immediately, without any delay. So the connection is closed immediately after it is opened, and it will then crash if I try to list files...

Caused by: com.hierynomus.smbj.common.SMBRuntimeException: com.hierynomus.protocol.transport.TransportException: Cannot write SMB2_TREE_CONNECT with message id << 4 >> as transport is disconnected

Which seems obvious since there's no open connection.

A question, related to smbj, indicated there was a problem with the way that person used the try statement... I believe this is a similar case.

Within an AsyncTask, I have:

try (Connection connection = client.connect(serverName)) {
    AuthenticationContext ac = new AuthenticationContext(username, password.toCharArray(), domain);
    Session session = connection.authenticate(ac);
    this.session = session;
    return session;
} catch (IOException e) {
    e.printStackTrace();
}

I'm certain there's a problem with the try-catch. Can someone please give a complete piece of code, including the AsyncTask, as the smbj GitHub repo should have? I hope this will solve most issues for all users.
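The suspicion about the try statement is probably right. Here is a plain-Java illustration of the pitfall (no smbj involved): try-with-resources closes the resource as soon as the block exits, so a Connection or Session returned out of the block is already closed by the time it is used.

```java
// Minimal stand-in for an AutoCloseable resource like smbj's Connection.
class Resource implements AutoCloseable {
    boolean closed = false;
    @Override
    public void close() { closed = true; }
}

class Demo {
    static Resource open() {
        try (Resource r = new Resource()) {
            return r; // close() runs as the try block exits, BEFORE the caller sees r
        }
    }
}
```

The usual fix is to keep connect, authenticate, connectShare, and the file listing all inside one try-with-resources scope, rather than storing the session in a field and returning it for later use.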



from How to properly use smbj to connect and list files on a samba share in Android java?

Get Angular component class from DOM element

I believe the answer to this is "no", but is there a method/service in Angular where I can pass in a component's root DOM node (e.g. <foo-component>) and receive the component instance (e.g. FooComponent)?

I couldn't find an associated SO post on this.

Example:

<foo-component id="foo"></foo-component>

const fooElement: HTMLElement = document.getElementById('foo');
const fooInstance: FooComponent = getInstanceFromElement(fooElement);

Is there a method in Angular like getInstanceFromElement?
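As far as I know there is no public API for this in pre-Ivy Angular (newer, Ivy-based versions expose a dev-mode-only ng.getComponent(element) helper in the browser console). A common workaround is to maintain the element-to-instance mapping yourself, registering each instance in ngOnInit via ElementRef and removing it in ngOnDestroy. A minimal sketch of that registry, with the DOM node and component stubbed out:

```typescript
// Manual element -> component-instance registry (a workaround, not an Angular API).
const instanceRegistry = new WeakMap<object, object>();

function registerInstance(element: object, instance: object): void {
  instanceRegistry.set(element, instance);
}

function getInstanceFromElement<T>(element: object): T | undefined {
  return instanceRegistry.get(element) as T | undefined;
}

// Stand-ins: `fooElement` would be this.elementRef.nativeElement in ngOnInit.
class FooComponent { name = 'foo'; }
const fooElement = {};
const fooInstance = new FooComponent();

registerInstance(fooElement, fooInstance);
const found = getInstanceFromElement<FooComponent>(fooElement);
```

Using a WeakMap means entries disappear once the element is garbage-collected, so forgotten unregistrations do not leak component instances.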



from Get Angular component class from DOM element

Add new field to a MongoDB document parsing from String to Int using updateMany

In my MongoDB collection, all documents contain a mileage field which currently is a string. Using PHP, I'd like to add a second field which contains the same content, but as an integer value. Questions like How to change the type of a field? contain custom MongoDB code which I don't want to run using PHP, and questions like mongodb php Strings to float values retrieve all documents and loop over them.

Is there any way to use \MongoDB\Operation\UpdateMany for this, as it would push all the work down to the database level? I've already tried this with static values (like adding the same string to all documents), but I'm struggling to derive the new value from the data in the collection itself.
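If the server is MongoDB 4.2 or newer, updateMany accepts an aggregation pipeline as the update document, which can read the existing $mileage string and write the converted value in one server-side pass. A sketch of that pipeline (the field names mileage/mileageInt are from the question; note $toInt errors on non-numeric strings, so $convert with an onError fallback may be safer):

```javascript
// Pipeline-style update document (requires MongoDB >= 4.2). The same array is
// passed as the second argument of updateMany; with the PHP library roughly:
//   $collection->updateMany([], [['$set' => ['mileageInt' => ['$toInt' => '$mileage']]]]);
const pipeline = [
  { $set: { mileageInt: { $toInt: "$mileage" } } },
];
```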



from Add new field to a MongoDB document parsing from String to Int using updateMany

How to pass a string variable from one function to another?

I am trying to pass the test results from my test1 and test2 functions to the save_results function, so I can write the results to an Excel file. How should I change my code to achieve that?

import os
import xlsxwriter
from datetime import datetime
import time


def save_results():
    os.chdir(r'C:\Users\user\Documents\Results')
    workbook = xlsxwriter.Workbook(datetime_output_results+'.xlsx')
    worksheet = workbook.add_worksheet()
    bold = workbook.add_format({'bold': True})
    worksheet.set_column('A:A', 20)
    worksheet.write('A1', 'here i would like to write the test1 result')
    #worksheet.write('B1','here i would like to write the test2 result')
    workbook.close()


def test1():
    output = str(ser.read(1000).decode())
    output = str(output)
    if "0x1" in output :
        print('Pass')
        return 'Pass'
    else:
        print('Fail')
        return 'Fail'


def test2():
    output2 = str(ser.read(1000).decode())
    print(output2)
    test2_output = str(output2)
    if "0x1" in test2_output:
        print('Pass')
        return 'Pass'
    else:
        print('Fail')
        return 'Fail'
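The usual fix is to capture each function's return value and pass both into save_results as parameters. A stripped-down sketch (the serial reads are replaced by stubs, and the xlsxwriter calls are reduced to comments, so this only shows the data flow):

```python
def save_results(test1_result, test2_result):
    # In the real function these values go to the worksheet:
    #   worksheet.write('A1', test1_result)
    #   worksheet.write('B1', test2_result)
    return {"A1": test1_result, "B1": test2_result}

def test1():
    output = "... 0x1 ..."          # stub for ser.read(1000).decode()
    return "Pass" if "0x1" in output else "Fail"

def test2():
    output2 = "... 0x0 ..."         # stub for ser.read(1000).decode()
    return "Pass" if "0x1" in output2 else "Fail"

# Call the tests first, then hand their results to save_results:
results = save_results(test1(), test2())
```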


from How to pass a string variable from one function to another?

Android WebView: Download when content-type is application/pdf and render web page when not

I have a specific scenario where I make a POST request with a unique ticket in the body to get a resulting page back.

The resulting page is either Content-Type: application/pdf or text/html. The ticket is only valid once, so the page can only be loaded once.

The problem is that Android WebView does not support rendering of PDFs (as the equivalent on iOS does).

I've tried the following:

  1. Check the HTTP response headers with a primary request, and then download the file with a second request if it's a PDF and open it in a PDF app (works). But for HTML pages the second request fails, since the ticket is no longer valid.

  2. Download both the PDF and the HTML page, and then open them in a PDF app/WebView locally. This works, but relative links in the web pages are broken. Is there a nice way to download those as well?

  3. Is it possible to intercept the request in the WebView to read the response headers, and trigger a download if it's a PDF but otherwise just continue rendering? I can't find a good answer for this.
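Since the ticket is single-use, one approach is to issue the POST yourself (e.g. with HttpURLConnection), buffer the response body, and then branch on the Content-Type header: hand the bytes to a PDF viewer if it's a PDF, otherwise feed the HTML into the WebView via loadDataWithBaseURL(). The header check itself can be sketched like this (the header may carry parameters such as a charset, so only the media type is compared):

```java
class ContentTypeCheck {
    // Returns true when the response should be treated as a PDF download.
    // Content-Type may look like "application/pdf" or
    // "application/pdf; charset=binary", so compare only the media type.
    static boolean isPdf(String contentType) {
        if (contentType == null) return false;
        String type = contentType.split(";")[0].trim().toLowerCase();
        return type.equals("application/pdf");
    }
}
```

Making the request once and buffering the body avoids the second request entirely, which is what invalidates the ticket in approach 1.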



from Android WebView: Download when content-type is application/pdf and render web page when not

OSError: [Errno 8] Exec format error: 'chromedriver' using chromedriver on ubuntu server

I'm trying to use chromedriver with Ubuntu (an AWS instance). I've gotten chromedriver to work with no problem in a local instance, but I'm having many, many issues doing so in a remote instance.

I'm using the following code:

options = Options()
options.add_argument('--no-sandbox')
options.add_argument('--headless')
options.add_argument('--disable-dev-shm-usage')
options.add_argument("--remote-debugging-port=9222")

driver = webdriver.Chrome(executable_path='/usr/bin/chromedriver', chrome_options=options)

However, I keep getting this error:

Traceback (most recent call last):
  File "test.py", line 39, in <module>
    driver = webdriver.Chrome()
  File "/home/ubuntu/.local/lib/python3.6/site-packages/selenium/webdriver/chrome/webdriver.py", line 73, in __init__
    self.service.start()
  File "/home/ubuntu/.local/lib/python3.6/site-packages/selenium/webdriver/common/service.py", line 76, in start
    stdin=PIPE)
  File "/usr/lib/python3.6/subprocess.py", line 729, in __init__
    restore_signals, start_new_session)
  File "/usr/lib/python3.6/subprocess.py", line 1364, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
OSError: [Errno 8] Exec format error: 'chromedriver'

I believe I'm using the most updated version of Selenium, Chrome, and chromedriver.

Chrome version is: Version 78.0.3904.70 (Official Build) (64-bit)

Selenium:

ubuntu@ip-172-31-31-200:/usr/bin$ pip3 show selenium
Name: selenium
Version: 3.141.0
Summary: Python bindings for Selenium
Home-page: https://github.com/SeleniumHQ/selenium/
Author: UNKNOWN
Author-email: UNKNOWN
License: Apache 2.0
Location: /home/ubuntu/.local/lib/python3.6/site-packages
Requires: urllib3

And, finally, for chromedriver, I'm almost certain I downloaded the most recent version here: https://chromedriver.storage.googleapis.com/index.html?path=78.0.3904.70/. It's the mac_64 version (I'm using Ubuntu on a Mac). I then placed chromedriver in /usr/bin, as I read that's common practice.

I have no idea why this isn't working. A few options I can think of:

1) Some sort of access issue? I'm a beginner with the command line and Ubuntu - should I be running this as the "root" user?

2) A mismatch between chromedriver and Chrome versions? Is there a way to tell for certain which chromedriver version I have?

3) I see that chromedriver and Selenium are in different locations. Selenium is in /home/ubuntu/.local/lib/python3.6/site-packages and I've moved chromedriver to /usr/bin. Could this be causing problems?

Any help appreciated as I'm stumped.
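One thing worth checking first: OSError: [Errno 8] Exec format error usually means the binary's format does not match the operating system, and a mac_64 chromedriver build (Mach-O) cannot run on a Linux AWS instance, which needs the linux64 (ELF) build. A quick way to verify, using the path from the question:

```shell
# Show what kind of binary chromedriver actually is. On the Ubuntu instance
# it should report "ELF 64-bit ..."; "Mach-O ..." means it's the macOS build.
if command -v file >/dev/null; then
  file /usr/bin/chromedriver
fi
# Host architecture, to pick the matching chromedriver download (e.g. x86_64).
uname -m
```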



from OSError: [Errno 8] Exec format error: 'chromedriver' using chromedriver on ubuntu server

android keeping Cast Volume notification in pause state

I noticed that when setting a playback state to PlaybackStateCompat.STATE_PLAYING, I get the cast volume control as the main one. So when the user clicks on the phone volume control buttons, the cast volume changes. However, when the playback state is PlaybackStateCompat.STATE_PAUSED, when the user clicks on the volume button, the main volume notification is the default media one and the Cast volume is still in the list, but it is not the main one. The following code is how things are initialised:

mMediaSession = new MediaSessionCompat(getApplicationContext(), tag);
final PlaybackStateCompat.Builder builder = new PlaybackStateCompat.Builder();
builder.setState(PlaybackStateCompat.STATE_PAUSED,
        PlaybackStateCompat.PLAYBACK_POSITION_UNKNOWN, 1.0f);
mMediaSession.setPlaybackState(builder.build());
mMediaSession.setActive(true);
mMediaSession.setPlaybackToRemote(volumeProvider);

I would like the Cast volume control to be the main one in the PAUSED state. How can I achieve that?

Thanks!



from android keeping Cast Volume notification in pause state

AOSP / Android 7: How is EGL utilized in detail?

I am trying to understand the Android (7) Graphics System from the system integrators point of view. My main focus is the minimum functionality that needs to be provided by libegl.

I understand that surfaceflinger is the main actor in this domain. Surfaceflinger initializes EGL, creates the actual EGL surface, and acts as a consumer for buffers (frames) created by the app. The app, in turn, executes the main part of the required GLES calls. Obviously, this leads to restrictions, as surfaceflinger and apps live in separate processes, which is not the typical use case for GLES/EGL.

Things I do not understand:

  • Do apps on Android 7 always render into EGL_KHR_image buffers which are sent to surfaceflinger? This would mean there's always an extra copy step (even when no composition is needed), as far as I understand... Or is there also some kind of optimized fullscreen mode, where apps render directly into the final EGL surface?

  • Which inter-process sharing mechanisms are used here? My guess is that EGL_KHR_image, used with EGL_NATIVE_BUFFER_ANDROID, defines the exact binary format, so that an image object may be created in each process, where the memory is shared via ashmem. Is this already the complete/correct picture or do I miss something here?

I'd guess these are the main points I am lacking confident knowledge about at the moment. For sure, I have some follow-up questions about this (like, how do gralloc/composition fit into this?), but, in keeping with this platform, I'd like to keep this question as compact as possible. Still, besides the main documentation page, I am missing documentation clearly targeted at system integrators, so further links would be really appreciated.

My current focus are typical use cases which would cover the vast majority of apps compatible with Android 7. If there are corner cases like long deprecated compatibility shims, I'd like to ignore them for now.



from AOSP / Android 7: How is EGL utilized in detail?

pygraphviz: finding the max rank node using successors

I'm trying to find the max rank node and the depth. Here is my code.

import pygraphviz as pgv


class Test:
    def __init__(self):
        self.G = pgv.AGraph(directed=True)

        self.G.add_node('a')
        self.G.add_node('b')
        self.G.add_node('c')
        self.G.add_node('d')
        self.G.add_node('e')
        self.G.add_node('f')

        self.G.add_edge('a', 'b')
        self.G.add_edge('b', 'c')
        self.G.add_edge('b', 'd')
        self.G.add_edge('d', 'e')
        self.G.add_edge('e', 'f')
        print(self.G.string())
        self.find_max_rank_node()

    def find_max_rank_node(self):
        nodes = self.G.nodes()
        depth = 0
        for n in nodes:
            layer1 = self.G.successors(n)
            if layer1:
                depth = depth + 1
                for layer_one in layer1:
                    layer2 = self.G.successors(layer_one)
                    print(n, layer2)


if __name__ == '__main__': Test()

The output should be f and 4. I started to code it but realized that I'm not going to know how deep the branch is... and I'm not sure how to write the loop.
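The traversal can be written without hard-coding the number of layers by carrying the depth along in a stack. Here is a sketch using a plain adjacency dict so the logic is clear; with pygraphviz, graph.get(node, []) below would become self.G.successors(node):

```python
def deepest_node(graph, root):
    """Return (node, depth) for the deepest node reachable from root."""
    best = (root, 0)
    stack = [(root, 0)]                  # depth-first traversal
    while stack:
        node, depth = stack.pop()
        if depth > best[1]:
            best = (node, depth)
        for succ in graph.get(node, []):
            stack.append((succ, depth + 1))
    return best

# Same edges as in the question: a->b, b->c, b->d, d->e, e->f
edges = {'a': ['b'], 'b': ['c', 'd'], 'd': ['e'], 'e': ['f']}
```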



from pygraphviz: finding the max rank node using successors

How to Integrate android app with linkedin signin?

https://engineering.linkedin.com/blog/2018/12/developer-program-updates

From the above link I learned that, under "Authentication, SDKs, and Plugins": "SDKs: Our JavaScript and Mobile Software Development Kits (SDKs) will stop working. Developers will need to migrate to using OAuth 2.0 directly from their apps."

I want to add LinkedIn login to my Android app. The only way seems to be OAuth 2.0 login, but that requires a backend change: I need to add an endpoint which receives the OAuth response, requests the user info again, and stores it in some permanent storage. Then I have to call the backend from the Android app to fetch that info.

Or is there any alternative way to do this without a backend change?



from How to Integrate android app with linkedin signin?

Project has not been linked in Google play developer console, error came while I called the Google Developer API

I have integrated Google in-app subscriptions in one of my apps. The integration was successful, but I made one mistake: I created subscription plans without first linking the project to the Google Play Developer Console. I found answers to the same issue on Stack Overflow, where everyone said to first link your project and then start creating subscription plans. I created a few more plans after the project was linked, but the issue is still not fixed.

Below, I have attached error snippets:

[error screenshot]

[error screenshot]

Here are the exact steps that I performed:

  1. At first, we made two plans (an in-app product and a subscription), and then we linked the project through the Google Play Console.
  2. When we hit the Google Play Developer API, we got error 403, "Project Id not linked to google play".
  3. After searching through Stack Overflow, we learned that the project must be linked first, and only then should the subscription plans be added. So we made two more plans in the project after linking it.

However, the error still persists.

Can you please help us understand why this error still occurs despite the project being linked and the subscription plans being created after linking it?

Below are the links that we referred to for the solution. The API URLs that we use: Purchases.subscriptions: get and Inappproducts: get

[error screenshot]

Also, here I have attached Stack Overflow and GitHub links with answers to the same issue that I have already tried:

Error: 'The project id used to call the Google Play Developer API has not been linked in the Google Play Developer Console.'

"message": "The project id used to call the Google Play Developer API has not been linked in the Google Play Developer Console."

https://github.com/googleapis/google-api-php-client/issues/1529

Please help me to resolve this issue.

Thank you.



from Project has not been linked in Google play developer console, error came while I called the Google Developer API

IllegalArgumentException: Invalid column DISTINCT bucket_display_name

I'm retrieving a list of distinct folders containing video files, with the number of videos in each folder. This works fine on devices running Android P and below, but when I run it on devices running Android Q the app crashes. How can I make it work on devices running Android Q?

java.lang.IllegalArgumentException: Invalid column DISTINCT bucket_display_name

Logcat:

java.lang.IllegalArgumentException: Invalid column DISTINCT bucket_display_name
        at android.database.DatabaseUtils.readExceptionFromParcel(DatabaseUtils.java:170)
        at android.database.DatabaseUtils.readExceptionFromParcel(DatabaseUtils.java:140)
        at android.content.ContentProviderProxy.query(ContentProviderNative.java:423)
        at android.content.ContentResolver.query(ContentResolver.java:944)
        at android.content.ContentResolver.query(ContentResolver.java:880)
        at android.content.ContentResolver.query(ContentResolver.java:836)
        at com.aisar.mediaplayer.fragments.VideoFolderFragment$MediaQuery.getAllVideo(VideoFolderFragment.java:364)
        at com.aisar.mediaplayer.fragments.VideoFolderFragment$VideosLoader.loadVideos(VideoFolderFragment.java:434)
        at com.aisar.mediaplayer.fragments.VideoFolderFragment$VideosLoader.access$1100(VideoFolderFragment.java:413)
        at com.aisar.mediaplayer.fragments.VideoFolderFragment$5.run(VideoFolderFragment.java:189)
        at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:289)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
        at java.lang.Thread.run(Thread.java:919)

My Code:

public class MediaQuery {
        private Context context;
        private int count = 0;
        private Cursor cursor;
        List<ModelVideoFolder> videoItems;

        public MediaQuery(Context context) {
            this.context = context;
        }

        public List<ModelVideoFolder> getAllVideo(String query) {
            String selection = null;
            String[] projection = {
                    "DISTINCT " + MediaStore.Video.Media.BUCKET_DISPLAY_NAME,
                    MediaStore.Video.Media.BUCKET_ID
            };
            cursor = context.getContentResolver().query(
                    MediaStore.Video.Media.EXTERNAL_CONTENT_URI,
                    projection,
                    selection,
                    null,
                    query);
            videoItems = new ArrayList<>();
            ModelVideoFolder videoItem;
            while (cursor.moveToNext()) {
                videoItem = new ModelVideoFolder(
                        "" + cursor.getString(1),
                        "" + cursor.getString(0),
                        "",
                        "",
                        "" + getVideosCount(cursor.getString(1))
                );
                videoItems.add(videoItem);
            }
            return videoItems;
        }

        public int getVideosCount(String BUCKET_ID) {
            int count = 0;
            String selection = null;
            String[] projection = {
                    MediaStore.Video.Media.BUCKET_ID,
            };
            Cursor cursor = getActivity().getContentResolver().query(
                    MediaStore.Video.Media.EXTERNAL_CONTENT_URI,
                    projection,
                    selection,
                    null,
                    null);
            while (cursor.moveToNext()) {
                if (BUCKET_ID.equals(cursor.getString(0))) {
                    //add only those videos that are in selected/chosen folder
                    count++;
                }
            }
            return count;
        }
    }
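On Android Q, the MediaStore content provider no longer accepts SQL fragments such as DISTINCT inside the projection, which is why the same query that worked on P now throws IllegalArgumentException. The usual workaround is to query the plain columns and de-duplicate the buckets in code, which also counts the videos per bucket in one pass instead of re-querying for each folder. The grouping step, sketched in plain Java:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

class BucketDedup {
    // rows: one String[]{bucketId, bucketDisplayName} per cursor row from a
    // plain (non-DISTINCT) MediaStore.Video query. Returns videos-per-bucket,
    // preserving first-seen bucket order like the cursor would.
    static Map<String, Integer> countByBucket(List<String[]> rows) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (String[] row : rows) {
            counts.merge(row[0], 1, Integer::sum);
        }
        return counts;
    }
}
```

In getAllVideo(), this would replace both the DISTINCT projection and the per-bucket getVideosCount() queries: iterate the cursor once into the row list, then build the ModelVideoFolder items from the resulting map.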


from IllegalArgumentException: Invalid column DISTINCT bucket_display_name

iOS 13 - How to login in in-app purchase Sandbox account?

In order to test my in-app purchases with iOS 13, I have updated one of my test devices to iOS 13.1 beta.

On iOS 12 and before there was a special Sandbox login in Settings/iTunes & App Store/Sandbox Account:

However, after the update to iOS 13 this section is missing. I tried to follow the instructions in the answer to a similar problem on iOS 12, but nothing worked. I completely reset the device and logged out both at Settings/iTunes & App Store and at Settings/Apple ID, so currently the device is not connected to any account (real or sandbox), at least as far as I can tell.

I have re-installed my app on the device using Xcode 11 beta and tried to perform an in-app purchase. The store shows a login prompt indicating that this is a Sandbox purchase. However, the prompt only asks for the password, not for a username or Apple ID, so I have no idea which account is being used here.

When using the password of a newly created test user account the password is not known. I can use the password of my real Apple ID account which was used during the device setup and was than disconnected.

So, how to connect to a specific Sandbox account in iOS 13?



from iOS 13 - How to login in in-app purchase Sandbox account?

Plot for every 10 minutes in datetime

The df I am using has multiple rows for each datetime. I want to plot a scatterplot of all coordinates that share the same datetime, for every 10 minutes.

It works if I manually input the times into t_list = [datetime(2017, 12, 23, 6, 0, 0), datetime(2017, 12, 23, 6, 10, 0), datetime(2017, 12, 23, 6, 20, 0)], but I want to replace this with something that uses the dates from df so I can use it for multiple datasets.

import pandas as pd
import matplotlib.pyplot as plt
from datetime import datetime, timedelta
import numpy as np

df_data = pd.read_csv(r'C:\data.csv')  # raw string so the backslash is not treated as an escape
df_data['datetime'] = pd.to_datetime(df_data['TimeStamp'])
df = df_data[(df_data['datetime'] >= datetime(2017, 12, 23, 6, 0, 0)) &
         (df_data['datetime'] < datetime(2017, 12, 23, 7, 0, 0))]

##want a time array for all of the datetimes in the df
t_list = [datetime(2017, 12, 23, 6, 0, 0), datetime(2017, 12, 23, 6, 10, 0),
          datetime(2017, 12, 23, 6, 20, 0)]

for t in t_list:
    t_end = t + timedelta(minutes = 10)
    t_text = t.strftime("%d-%b-%Y (%H:%M)")

    #boolean indexing with multiple conditions, you should wrap each single condition in brackets
    df_t = df[(df['datetime']>=t) & (df['datetime']<t_end)]

    #get data into variable
    ws = df_t['Sp_mean']
    lat = df_t['x']
    lon = df_t['y']
    col = 0.75

    #calc min/max for setting scale on images
    min_ws = df['Sp_mean'].min()
    max_ws = df['Sp_mean'].max()

    plt.figure(figsize=(15,10))
    plt.scatter(lon, lat, c=ws,s=300, vmin=min_ws, vmax=max_ws)  
    plt.title('event' + t_text,fontweight = 'bold',fontsize=18)
    plt.show()

I have tried a few ways of attempting to make a copy of datetime as an iterable list which haven't given me the results I am after, the most recent below:

date_arrray = np.arange(np.datetime64(df['datetime']))
df['timedelta'] = pd.to_timedelta(df['datetime'])
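One way to build t_list from the data itself instead of hard-coding it (a sketch with hypothetical timestamps standing in for df['datetime']) is to floor the earliest time down to a 10-minute boundary and step forward with a timedelta:

```python
from datetime import datetime, timedelta

def ten_minute_bins(times):
    """Return every 10-minute boundary from the floored earliest
    time up to the latest time (inclusive)."""
    start = min(times)
    # floor start down to the nearest 10 minutes
    start = start.replace(minute=start.minute - start.minute % 10,
                          second=0, microsecond=0)
    end = max(times)
    t_list, t = [], start
    while t <= end:
        t_list.append(t)
        t += timedelta(minutes=10)
    return t_list

# Hypothetical timestamps standing in for df['datetime']
times = [datetime(2017, 12, 23, 6, 3), datetime(2017, 12, 23, 6, 27)]
print(ten_minute_bins(times))  # 06:00, 06:10, 06:20
```

With pandas the same list can be produced in one line: pd.date_range(df['datetime'].min().floor('10min'), df['datetime'].max(), freq='10min').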


from Plot for every 10 minutes in datetime

How to handle i18n (with fallback) on MongoDB when the translation is in another Collection?

Given these excerpt collections:

Translation Collection

[
    {
      "_id": "id01_name",
      "text": "Item's Name"
    },
    {
      "_id": "id01_desc",
      "text": "Item's lore description"
    },
    {
      "_id": "sk_id",
      "text": "Item's skill description"
    },

  ]

Item Collection

[
    {
      "_id": "id01",
      "name": "id01_name",
      "lore_description": "id01_desc",
      "skill": {
        "description": "sk_id01",
      }
    }
  ]

Question:

Using only mongodb driver (NO Mongo ODM, like mongoose, iridium, etc), what is the best approach for (i18n) Internationalization (with fallback) on MongoDB when translation is in another Collection?


MongoDB's aggregate Approach

Currently, I'm using aggregate with $lookup for this kind of query.

db.artifact.aggregate([
  { 
    $lookup: { // get the name's data from the english translation
      from: "text_en", 
      localField: "name",
      foreignField: "_id",
      as: "name"
    },

  },
  {
    $unwind: "$name" //unwind because the lookup made name become an array with _id and text
  },
  {
    $addFields: {
      name: "$name.text" //rewrite name (currently an obj) into the translation string
    }
  },

(...etc)

The problem is, I need these 3 stages for every single key I need to translate. On a big document this looks like too much, and every extra $lookup seems to increase the response time bit by bit.

This also has no fallback: if a key is not available in the requested language (e.g. a $lookup for id01_name in the Spanish collection finds nothing), the fallback would be to fetch it from the English collection instead.

Here is a working example of the above: https://mongoplayground.net/p/umuPQYriFRe


Manual aggregation Approach

I also thought in doing in phases, that is,

  • item.find() to get item data;
  • translation_english.find({ $in: [ (...listofkeys) ]}) to get all necessary english keys
  • translation_{otherlang}.find({ $in: [ (...listofkeys) ]}) to get all necessary other language keys
  • manually transform the two translation arrays into two objects (with cursor.forEach()), merge with Object.assign({},eng,otherlang)
  • handpicking each key and assigning to the item object

This method covers the fallback, but it's bloated/very verbose.
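The merge step of this manual approach is plain dictionary work; a minimal Python sketch of the fallback logic (driver calls omitted; en_docs and es_docs stand in for the results of the two find() queries):

```python
def merge_translations(keys, en_docs, foreign_docs):
    """Build key -> text, preferring the foreign language and
    falling back to English when the key is missing there."""
    en = {d["_id"]: d["text"] for d in en_docs}
    foreign = {d["_id"]: d["text"] for d in foreign_docs}
    # English is assumed complete, so every key resolves.
    return {k: foreign.get(k, en[k]) for k in keys}

# Hypothetical documents mirroring the collections above.
en_docs = [{"_id": "id01_name", "text": "Item's Name"},
           {"_id": "id01_desc", "text": "Item's lore description"}]
es_docs = [{"_id": "id01_name", "text": "Nombre del objeto"}]  # id01_desc missing

print(merge_translations(["id01_name", "id01_desc"], en_docs, es_docs))
```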


In a test of both approaches, the lookup took 310ms and the manual one 500ms to complete, for a single lookup. But the manual one has the fallback (it queries 3 collections: item, lang_en, lang_{foreign}); if a language key does not exist in the foreign collection, it picks the English one (it's assumed English never misses a key), while the $lookup version fails to return a document when the lookup on the foreign language fails.



from How to handle i18n (with fallback) on MongoDB when the translation is in another Collection?

DidReceiveNotificationRequest is not getting called

I have a Xamarin.Forms application, and the iOS notification service extension is not getting called when I receive a notification from the server.

I have done the following things so far:

  1. I have added mutable-content = 1 to the APNs payload.

  2. This is how I manipulate the APNs payload in the service:

    public class NotificationService : UNNotificationServiceExtension
    {
        Action<UNNotificationContent> ContentHandler { get; set; }
        UNMutableNotificationContent BestAttemptContent { get; set; }

        protected NotificationService(IntPtr handle) : base(handle)
        {

        }

        public override void DidReceiveNotificationRequest(UNNotificationRequest request, Action<UNNotificationContent> contentHandler)
        {
            ContentHandler = contentHandler;
            BestAttemptContent = (UNMutableNotificationContent)request.Content.MutableCopy();

            var newAlertContent = new UNMutableNotificationContent
            {
                Body = "Body from Service",
                Title = "Title from Service",
                Sound = BestAttemptContent.Sound,
                Badge = 2
            };
            ContentHandler(newAlertContent);
        }

        public override void TimeWillExpire()
        {
        }
    }
  3. I have also set up the notification service extension bundle ID. (My app bundle ID is com.companyname.appname.test and the extension bundle ID is com.companyname.appname.test.xxxxServiceExtension.)

  4. In the AppDelegate class, in the FinishedLaunching method, I also have the permission code added:

  UNUserNotificationCenter.Current.RequestAuthorization(UNAuthorizationOptions.Alert, (approved, err) => {
            });

Is there anything else that I need to do?



from DidReceiveNotificationRequest is not getting called

How to prevent the DataTable from pulling data when the user paginates backward?

I'm using DataTables with server-side processing.

Sample scenario is that:

When the user paginates to page 1, page 2 and then page 3 of the DataTable, it pulls the data from the server side each time. Is there a way that, when the user paginates back, the data that has already been pulled is reused rather than requested from the server again? I want the previous data stored. Currently I'm reading about the stateSave property of DataTables. TIA
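The caching idea itself is language-agnostic: keep responses keyed by the request's (start, length) and only hit the server on a miss. A minimal Python sketch (fetch_page stands in for the server-side Ajax call; DataTables' own examples describe a similar idea under the name "pipelining"):

```python
class PageCache:
    def __init__(self, fetch_page):
        self.fetch_page = fetch_page  # callable(start, length) -> rows
        self.cache = {}

    def get(self, start, length):
        key = (start, length)
        if key not in self.cache:      # miss: ask the server once
            self.cache[key] = self.fetch_page(start, length)
        return self.cache[key]         # hit: reuse the stored rows

calls = []
def fetch_page(start, length):
    calls.append(start)                # count real server requests
    return list(range(start, start + length))

pages = PageCache(fetch_page)
pages.get(0, 10)
pages.get(10, 10)
pages.get(0, 10)                       # paginating back: served from cache
print(len(calls))                      # the server was only hit twice
```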



from How to prevent the DataTable from pulling data when the user paginates backward?

Detect line split from gmail in clipboard (angular)

I'm trying to allow users to paste from Gmail into a field and detect line breaks. It does not need to be a textarea; I just want to detect the line breaks.

The problem is that when a user pastes something like the below, the clipboard does not seem to contain the break (even if the user hit Enter in Gmail):

Item 1 
Item 2 
Item 3

To detect it, the user seems to have to hit Enter twice, like below:

Item 1

Item 2

Item 3

Is there a way to detect line breaks from Gmail?

The code below seems to work for Notepad, Inbox and other sources I copy from.

Stackblitz Demo

Component:

<input type="text" placeholder="paste items here" (paste)="onPaste(i, $event, $event)">

<div *ngIf="itemArray.length > 0">
Item list:
</div>

<div *ngFor="let item of itemArray">
  {{ item }}
</div>

TS:

itemArray = [];

  onPaste(i, event: ClipboardEvent, value) {
    let clipboardData = event.clipboardData;
    let pastedText = clipboardData.getData('text/plain').split("\n");
    pastedText.forEach(item => {
      item = item.toString()
      item = item.replace(/(\r\n|\n|\r|\s\r|\r)/gm, "")
      this.itemArray.push(item) 
    })
  }
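Independent of the Gmail issue, the split is more robust if all three line-ending styles are handled and empty entries are dropped. The logic, sketched here in Python (the same regex works with JavaScript's String.prototype.split):

```python
import re

def split_lines(pasted):
    # \r\n (Windows), \n (Unix) and bare \r (old Mac) all count as breaks;
    # trim surrounding whitespace and drop empty entries.
    parts = re.split(r"\r\n|\n|\r", pasted)
    return [p.strip() for p in parts if p.strip()]

print(split_lines("Item 1 \r\nItem 2 \nItem 3"))  # ['Item 1', 'Item 2', 'Item 3']
```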


from Detect line split from gmail in clipboard (angular)

Wednesday, 30 October 2019

Passing data from angular app to angular library

I need to be able to pass some data (configuration) from an Angular app to an Angular library. The requirement is that the configuration must be accessible in the decorator of the library's NgModule, so I can import some modules conditionally. For example:

@NgModule({
    imports: [
        !envService.data.production ? StoreDevtoolsModule.instrument() : [],
    ]
})
export class LibrarysModule { }

I can't find any documentation on this, and there is no clear answer in any of the GitHub topics I found. Is there any reliable approach for Angular 8?

There are a couple of similar answers on Stack Overflow, but none of them suits my use case (see: passing environment variables to angular2 library). They all explain how to use forRoot() to pass the configuration to a service or component of the library. However, I need the configuration in the library module's decorator.



from Passing data from angular app to angular library

Flickity Carousel disable custom navigation when reaching last slide

I'm currently using the Flickity Carousel to create a carousel with different film content panels.

The carousel uses a custom navigation to control it, rather than the standard one that comes with the carousel. However, I'm struggling to disable the next navigation button when the carousel reaches its last slide. Here is an example of what I'm trying to achieve, and I have based my code on it.

You will see from my example that the Previous button works correctly and is disabled when you first land on the carousel. However the Next button is never disabled when reaching the end.

Here is a JSFiddle

My code:

$(document).ready(function () {
  $('.carousel-container').each(function (i, container) {
        var options = {
            cellAlign:'left',
            groupCells:'3',
            pageDots: false,
            prevNextButtons: false
        };

        $('.carousel__slides').flickity(options);
        var $container = $(container);
        var $slider = $container.find('.carousel__slides');
        var flkty = $slider.data('flickity');
        var selectedIndex = flkty.selectedIndex;
        var slideCount = flkty.slides.length;
        var $prev = $container.find('.prev');
        var $next = $container.find('.next');

        // previous
        $prev.on('click', function () {
            $slider.flickity('previous');
        });

        // next
        $next.on('click', function () {
            $slider.flickity('next');
        });

        $slider.on( 'select.flickity', function() {

            // enable/disable previous/next buttons
            if ( !flkty.cells[ flkty.selectedIndex - 1 ] ) {
              $prev.attr( 'disabled', 'disabled' );
              $next.removeAttr('disabled'); // <-- remove disabled from the next
            } else if ( !flkty.cells[ flkty.selectedIndex +1 ] ) {
              $next.attr( 'disabled', 'disabled' );
              $prev.removeAttr('disabled'); //<-- remove disabled from the prev
            } else {
              $prev.removeAttr('disabled');
              $next.removeAttr('disabled');
            }
        });
    });
});
.carousel-container {
  position:relative;
  }

.carousel__slide {
  width: 20%;
  max-width:286px;
  opacity: 0.5;
    
}

.carousel__slide.is-selected {
  opacity: 1;
}


.carousel__nav {
  display:block;
}

.carousel__nav button {
  width:65px;
  height:50px;
  background:red;
  border-radius:0 100% 100% 0;
  position: absolute;
  top: 80px;
  cursor:pointer;
  border:none;
  outline:0;
  transition-duration: 0.3s;
  transition-property: all;
}

.carousel__nav button:hover,
.carousel__nav button:active,
.carousel__nav button:focus {
  background:green;
  outline:0;
}

.carousel__nav button:disabled {
  background:black;
   opacity: 0.5;
}

.carousel__nav button i {
  content:'';
  display:block;
  margin:0 auto;
  width: 0;
  height: 0;
  border-style: solid;
  border-width: 10.5px 0 10.5px 14px;
  border-color: transparent transparent transparent white;
}

.carousel__nav .prev {
  left:0;
}

.carousel__nav .prev i {
  transform:rotate(180deg);
}

.carousel__nav .next {
  right:0;
  border-radius:100% 0 0 100%;
}


.film-section {
  margin-top:50px;
}

.film-item {
  padding:0 15px;
}

.film-item p {
  font-size:1.4rem;
  line-height:2.6rem;
  margin-bottom:0;
}

.film-item__image {
    position:relative;
}

.film-item__play {
  width:65px;
  height:65px;
  border-radius:100% 0 0 0;
  position:absolute;
  right:0;
  bottom:0;
  background:rgba(0,0,0,0.4);
  border:none;
  transition-duration: 0.3s;
  transition-property: all;
}

.film-item__play:hover,
.film-item__play:active,
.film-item__play:focus {
    background:red;
    outline:0;
}

.film-item__play:after {
    content:'';
    display:block;
    margin:0 auto;
    width: 0;
    height: 0;
    border-style: solid;
    border-width: 10.5px 0 10.5px 14px;
    border-color: transparent transparent transparent white;
    position:absolute;
    top:31px;
    left:33px;
}

.heading-content {
    display:none;
    opacity: 0;
    visibility: hidden;
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<script src="https://unpkg.com/flickity@2/dist/flickity.pkgd.min.js"></script>
<link href="https://unpkg.com/flickity@2/dist/flickity.min.css" rel="stylesheet"/>

<div class="carousel-container">
  <div class="carousel__slides">
      <div class="carousel__slide">
        <div class="offset-slide"></div>
      </div>
      <div class="carousel__slide">
        <div class="film-item">
          <div class="film-item__image">
            <img class="w-100" src="http://placekitten.com/510/380" alt="">
          </div>
          <h3>Heading</h3>
          <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nunc volutpat augue at quam ultrices euismod vel ac sapien. Ut finibus posuere augue, eget condimentum nunc porttitor in. Praesent in ornare mi, at rhoncus felis. In iaculis viverra sem sit amet lacinia.</p>
        </div>
      </div>
      <div class="carousel__slide">
          <div class="film-item">
            <div class="film-item__image">
              <img class="w-100" src="http://placekitten.com/510/380" alt="">
            </div>
            <h3>Heading</h3>
            <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nunc volutpat augue at quam ultrices euismod vel ac sapien. Ut finibus posuere augue, eget condimentum nunc porttitor in. Praesent in ornare mi, at rhoncus felis. In iaculis viverra sem sit amet lacinia.</p>
          </div>
      </div>
      <div class="carousel__slide">
          <div class="film-item">
            <div class="film-item__image">
              <img class="w-100" src="http://placekitten.com/510/380" alt="">
            </div>
            <h3>Heading</h3>
            <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nunc volutpat augue at quam ultrices euismod vel ac sapien. Ut finibus posuere augue, eget condimentum nunc porttitor in. Praesent in ornare mi, at rhoncus felis. In iaculis viverra sem sit amet lacinia.</p>
          </div>
      </div>
      <div class="carousel__slide">
          <div class="film-item">
            <div class="film-item__image">
              <img class="w-100" src="http://placekitten.com/510/380" alt="">
            </div>
            <h3>Heading</h3>
            <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nunc volutpat augue at quam ultrices euismod vel ac sapien. Ut finibus posuere augue, eget condimentum nunc porttitor in. Praesent in ornare mi, at rhoncus felis. In iaculis viverra sem sit amet lacinia.</p>
          </div>
      </div>
      <div class="carousel__slide">
          <div class="film-item">
            <div class="film-item__image">
              <img class="w-100" src="http://placekitten.com/510/380" alt="">
            </div>
            <h3>Heading</h3>
            <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nunc volutpat augue at quam ultrices euismod vel ac sapien. Ut finibus posuere augue, eget condimentum nunc porttitor in. Praesent in ornare mi, at rhoncus felis. In iaculis viverra sem sit amet lacinia.</p>
          </div>
      </div>
      <div class="carousel__slide">
          <div class="film-item">
            <div class="film-item__image">
              <img class="w-100" src="http://placekitten.com/510/380" alt="">
            </div>
            <h3>Heading</h3>
            <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nunc volutpat augue at quam ultrices euismod vel ac sapien. Ut finibus posuere augue, eget condimentum nunc porttitor in. Praesent in ornare mi, at rhoncus felis. In iaculis viverra sem sit amet lacinia.</p>
          </div>
      </div>
      <div class="carousel__slide">
          <div class="film-item">
            <div class="film-item__image">
              <img class="w-100" src="http://placekitten.com/510/380" alt="">
              <h3 class="js-video-heading heading-content">Universitat Oberta de Catalunya</h3>
            </div>
            <h3>Heading</h3>
            <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nunc volutpat augue at quam ultrices euismod vel ac sapien. Ut finibus posuere augue, eget condimentum nunc porttitor in. Praesent in ornare mi, at rhoncus felis. In iaculis viverra sem sit amet lacinia.</p>
          </div>
      </div>
      <div class="carousel__slide">
          <div class="film-item">
            <div class="film-item__image">
              <img class="w-100" src="http://placekitten.com/510/380" alt="">
            </div>
            <h3>Heading</h3>
            <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nunc volutpat augue at quam ultrices euismod vel ac sapien. Ut finibus posuere augue, eget condimentum nunc porttitor in. Praesent in ornare mi, at rhoncus felis. In iaculis viverra sem sit amet lacinia.</p>
          </div>
      </div>
      <div class="carousel__slide">
          <div class="film-item">
            <div class="film-item__image">
              <img class="w-100" src="http://placekitten.com/510/380" alt="">
            </div>
            <h3>Heading</h3>
            <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nunc volutpat augue at quam ultrices euismod vel ac sapien. Ut finibus posuere augue, eget condimentum nunc porttitor in. Praesent in ornare mi, at rhoncus felis. In iaculis viverra sem sit amet lacinia.</p>
          </div>
      </div>
      <div class="carousel__slide">
        <div class="offset-slide"></div>
      </div>
  </div>
  <div class="carousel__nav">
    <button class="prev" disabled><i></i></button>
    <button class="next"><i></i></button>
  </div>
</div>


from Flickity Carousel disable custom navigation when reaching last slide

How to remove these D3.js console log errors

I have the error below. I have tried some workarounds (including the suggested solutions to similar problems), but with no success.

Error: <rect> attribute height: Expected length, "NaN"

Could someone take a look at my code and help me change it so that these errors disappear? Thanks in advance.

This is how the error looks. Below is my JS code:

I am using the current latest version of D3.js.

I am not that strong in D3.js yet (still learning).

var svg = d3.select("svg"),
    margin = {top: 20, right: 20, bottom: 30, left: 40},
    width = +svg.attr("width") - margin.left - margin.right,
    height = +svg.attr("height") - margin.top - margin.bottom,
    g = svg.append("g").attr("transform", "translate(" + margin.left + "," + margin.top + ")");

// The scale spacing the groups:
var x0 = d3.scaleBand()
    .rangeRound([0, width])
    .paddingInner(0.1);

// The scale for spacing each group's bar:
var x1 = d3.scaleBand()
    .padding(0.05);

var y = d3.scaleLinear()
    .rangeRound([height, 0]);

var z = d3.scaleOrdinal()
    .range(["#008000", "#8a89a6", "#7b6888", "#008080", "#ff0000", "#d0743c", "#ff8c00"]);


   // trying to add tooltips 
var div = d3.select("body").append("div")   
    .attr("class", "tooltip")               
    .style("opacity", 0);

d3.csv("data.csv", function(d, i, columns) {
  for (var i = 1, n = columns.length; i < n; ++i) d[columns[i]] = +d[columns[i]];
  return d;
}, function(error, data) {
  if (error) throw error;

  var keys = data.columns.slice(1);

  x0.domain(data.map(function(d) { return d.years; }));
  x1.domain(keys).rangeRound([0, x0.bandwidth()]);
  y.domain([0, d3.max(data, function(d) { return d3.max(keys, function(key) { return d[key]; }); })]).nice();

 g.append("g")
    .selectAll("g")
    .data(data)
    .enter().append("g")
    .attr("class","bar")
    .attr("transform", function(d) { return "translate(" + x0(d.years) + ",0)"; })
    .selectAll("rect")
    .data(function(d) { return keys.map(function(key) { return {key: key, value: d[key]}; }); })
    .enter().append("rect")
      .attr("x", function(d) { return x1(d.key); })
      .attr("y", function(d) { return y(d.value); })
      .attr("width", x1.bandwidth())
      .attr("height", function(d) { return height - y(d.value); })
      .attr("fill", function(d) { return z(d.key); });

  g.append("g")
      .attr("class", "axis")
      .attr("transform", "translate(0," + height + ")")
      .call(d3.axisBottom(x0));

  g.append("g")
      .attr("class", "y axis")
      .call(d3.axisLeft(y).ticks(null, "s"))
    .append("text")
      .attr("x", 2)
      .attr("y", y(y.ticks().pop()) + 0.5)
      .attr("dy", "0.32em")
      .attr("fill", "#000")
      .attr("font-weight", "bold")
      .attr("text-anchor", "start")
      .text("ZAR");

  var legend = g.append("g")
      .attr("font-family", "sans-serif")
      .attr("font-size", 10)
      .attr("text-anchor", "end")
    .selectAll("g")
    .data(keys.slice().reverse())
    .enter().append("g")
      .attr("transform", function(d, i) { return "translate(0," + i * 20 + ")"; });

  legend.append("rect")
      .attr("x", width - 17)
      .attr("width", 15)
      .attr("height", 15)
      .attr("fill", z)
      .attr("stroke", z)
      .attr("stroke-width",2)
      .on("click",function(d) { update(d) });

  legend.append("text")
      .attr("x", width - 24)
      .attr("y", 4.5)
      .attr("dy", "0.32em")
      .text(function(d) { return d;});


  var filtered = [];

  ////
  //// Update and transition on click:
  ////

  function update(d) {  

    //
    // Update the array to filter the chart by:
    //

    // add the clicked key if not included:
    if (filtered.indexOf(d) == -1) {
     filtered.push(d); 
      // if all bars are un-checked, reset:
      if(filtered.length == keys.length) filtered = [];
    }
    // otherwise remove it:
    else {
      filtered.splice(filtered.indexOf(d), 1);
    }

    //
    // Update the scales for each group(/years)'s items:
    //
    var newKeys = [];
    keys.forEach(function(d) {
      if (filtered.indexOf(d) == -1 ) {
        newKeys.push(d);
      }
    })
    x1.domain(newKeys).rangeRound([0, x0.bandwidth()]);
    y.domain([0, d3.max(data, function(d) { return d3.max(keys, function(key) { if (filtered.indexOf(key) == -1) return d[key]; }); })]).nice();

    // update the y axis:
            svg.select(".y")
            .transition()
            .call(d3.axisLeft(y).ticks(null, "s"))
            .duration(500);


    //
    // Filter out the bands that need to be hidden:
    //
    var bars = svg.selectAll(".bar").selectAll("rect")
      .data(function(d) { return keys.map(function(key) { return {key: key, value: d[key]}; }); })

   bars.filter(function(d) {
         return filtered.indexOf(d.key) > -1;
      })
      .transition()
      .attr("x", function(d) {
        return (+d3.select(this).attr("x")) + (+d3.select(this).attr("width"))/2;  
      })
      .attr("height",0)
      .attr("width",0)     
      .attr("y", function(d) { return height; })
      .duration(500);

    //


    // Adjust the remaining bars:
    //
    bars.filter(function(d) {
        return filtered.indexOf(d.key) == -1;
      })
      .transition()
      .attr("x", function(d) { return x1(d.key); })
      .attr("y", function(d) { return y(d.value); })
      .attr("height", function(d) { return height - y(d.value); })
      .attr("width", x1.bandwidth())
      .attr("fill", function(d) { return z(d.key); })
      .duration(500);


    // update legend:
    legend.selectAll("rect")
      .transition()
      .attr("fill",function(d) {
        if (filtered.length) {
          if (filtered.indexOf(d) == -1) {
            return z(d); 
          }
           else {
            return "white"; 
          }
        }
        else {
         return z(d); 
        }
      })
      .duration(100);

      legend.selectAll("bar")
      .text(function(d, i) { return label[i]; });


  }

});




from How to remove these D3.js console log errors

How to find and replace word in text from MySQL database?

I have a sentence:

"How to find and replace word in text from mysql database?"

And a MySQL table words, with 3 columns: id, word and replaceWord. I have more than 4000 words in the database.

Table:

id     word     replaceWord
 1     text     sentence
 2     word     letter
 3     mysql    MySQL
 4     ..       ...
 5     ..       ...
 6     ..       ...

Result:

"How to find and replace letter in sentence from MySQL database?"

I know how to do this without the database, but I need the database.

<?php
$text = "How to find and replace word in text from mysql database?";
$replaceWord = array("text" => "sentence", "word" => "letter", "mysql" => "MySQL");
echo strtr($text, $replaceWord); // was strtr($tekst, ...) — undefined variable
?>
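The same strtr idea carries over once the map is built from the table. A sketch in Python with sqlite3 standing in for MySQL (table and column names as above), doing the replacement in one pass with a regex alternation so longer words win over their prefixes:

```python
import re
import sqlite3

# In-memory stand-in for the MySQL `words` table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE words (id INTEGER, word TEXT, replaceWord TEXT)")
conn.executemany("INSERT INTO words VALUES (?, ?, ?)",
                 [(1, "text", "sentence"), (2, "word", "letter"), (3, "mysql", "MySQL")])

# Build the replacement map from the table, longest words first.
mapping = dict(conn.execute("SELECT word, replaceWord FROM words"))
pattern = re.compile(
    r"\b(" + "|".join(sorted(map(re.escape, mapping), key=len, reverse=True)) + r")\b",
    re.IGNORECASE,
)

def replace_words(text):
    # One pass over the text; each matched whole word is swapped via the map.
    return pattern.sub(lambda m: mapping[m.group(0).lower()], text)

print(replace_words("How to find and replace word in text from mysql database?"))
```

With 4000+ words, building the compiled pattern once and reusing it matters more than the per-sentence cost.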


from How to find and replace word in text from MySQL database?

GradientTape convergence much slower than Keras.Model.fit

I am currently trying to get a hold of the TF 2.0 API, but as I compared GradientTape to a regular keras.Model.fit I noticed:

  1. It ran slower (probably due to eager execution).

  2. It converged much slower (and I am not sure why).

+--------+--------------+------------------+
|  Epoch | GradientTape | keras.Model.fit  |
+--------+--------------+------------------+
|    1   |     0.905    |      0.8793      |
+--------+--------------+------------------+
|    2   |     0.352    |      0.2226      |
+--------+--------------+------------------+
|    3   |     0.285    |      0.1192      |
+--------+--------------+------------------+
|    4   |     0.282    |      0.1029      |
+--------+--------------+------------------+
|    5   |     0.275    |      0.0940      |
+--------+--------------+------------------+

Here is the training loop I used with the GradientTape:


optimizer = keras.optimizers.Adam()
glove_model = GloveModel(vocab_size=len(labels))
train_loss = keras.metrics.Mean(name='train_loss')

@tf.function
def train_step(examples, labels):
    with tf.GradientTape() as tape:
        predictions = glove_model(examples)
        loss = glove_model.glove_loss(labels, predictions)

    gradients = tape.gradient(loss, glove_model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, glove_model.trainable_variables))

    train_loss(loss)



total_step = 0
for epoch in range(epochs_number):

    pbar = tqdm(train_ds.enumerate(), total=int(len(index_data) / batch_size) + 1)

    for ix, (examples, labels) in pbar:

        train_step(examples, labels)


    print(f"Epoch {epoch + 1}, Loss {train_loss.result()}")

    # Reset the metrics for the next epoch
    train_loss.reset_states()

And here is the Keras.Model.fit training:

glove_model.compile(optimizer, glove_model.glove_loss)
glove_model.fit(train_ds, epochs=epochs_number)

And here is the model:

class GloveModel(keras.Model):

    def __init__(self, vocab_size, dim=100, a=3/4, x_max=100):
        super(GloveModel, self).__init__()

        self.vocab_size = vocab_size
        self.dim = dim
        self.a = a
        self.x_max = x_max

        self.target_embedding = layers.Embedding(
            input_dim=self.vocab_size, output_dim=self.dim, input_length=1, name="target_embedding"
        )
        self.target_bias = layers.Embedding(
            input_dim=self.vocab_size, output_dim=1, input_length=1, name="target_bias"
        )

        self.context_embedding = layers.Embedding(
            input_dim=self.vocab_size, output_dim=self.dim, input_length=1, name="context_embedding"
        )
        self.context_bias = layers.Embedding(
            input_dim=self.vocab_size, output_dim=1, input_length=1, name="context_bias"
        )

        self.dot_product = layers.Dot(axes=-1, name="dot")

        self.prediction = layers.Add(name="add")
        self.step = 0

    def call(self, inputs):

        target_ix = inputs[:, 0]
        context_ix = inputs[:, 1]

        target_embedding = self.target_embedding(target_ix)
        target_bias = self.target_bias(target_ix)

        context_embedding = self.context_embedding(context_ix)
        context_bias = self.context_bias(context_ix)

        dot_product = self.dot_product([target_embedding, context_embedding])
        prediction = self.prediction([dot_product, target_bias, context_bias])

        return prediction

    def glove_loss(self, y_true, y_pred):

        weight = tf.math.minimum(
            tf.math.pow(y_true/self.x_max, self.a), 1.0
        )
        loss_value = tf.math.reduce_mean(weight * tf.math.pow(y_pred - tf.math.log(y_true), 2.0))

        return loss_value



I tried multiple configurations and optimizers but nothing seems to change the convergence rate.



from GradientTape convergence much slower than Keras.Model.fit

How to highlight search text from string of html content without breaking

I am looking for a solution that searches for a term in an HTML string and highlights it. I can do this by removing the HTML from the string first, but then I can no longer see the original content with the highlight. I have the following function that can search and highlight a string without HTML markup:

private static updateFilterHTMLValue(value: string, filterText: string): string
{
    if (value == null) {
        return value;
    }

    let filterIndex: number = value.toLowerCase().indexOf(filterText);
    if (filterIndex < 0) {
        return null;
    } 
    return value.substr(0, filterIndex) 
        + "<span class='search-highlight'>" 
        + value.substr(filterIndex, filterText.length) 
        + "</span>" 
        +   value.substr(filterIndex + filterText.length, value.length - (filterIndex + filterText.length));
}

So to manage the search on a string with HTML, I created a new function that searches the string with HTML. (I remove the HTML part before searching, for proper string matching.)

private static test(value: string, filterText: string): string {
    if (value == null) {
        return value;
    }
    // Check for raw data without html
    let valueWithoutHtml = TextFilterUtils.removeTextHtmlTags(value);
    let filterIndex: number = valueWithoutHtml.toLowerCase().indexOf(filterText);
    if (filterIndex < 0) {
        return null;
    } else {
        // TODO:
        // just need to figure out how we can highlight properly;
        // the real issue is identifying the proper index for wrapping <span class='search-highlight'> </span>
        return "";
    }
}

How can we do this wrapping on a string of HTML? Any help or guidance would be really appreciated.
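The wrapping problem is really an index-mapping problem: while stripping tags, record where each plain-text character came from in the original HTML, then translate the match indices back. A hedged sketch of that idea in Python (the TypeScript version is a direct translation); it assumes reasonably well-formed markup and a match that does not straddle an element boundary:

```python
def highlight_html(html, term, cls="search-highlight"):
    # Walk the HTML once; outside of tags, remember each character's
    # position, so text_to_html maps plain-text index -> html index.
    text_to_html = []
    in_tag = False
    for i, ch in enumerate(html):
        if ch == "<":
            in_tag = True
        elif ch == ">":
            in_tag = False
        elif not in_tag:
            text_to_html.append(i)

    plain = "".join(html[i] for i in text_to_html)
    start = plain.lower().find(term.lower())
    if start < 0:
        return None

    # Map the match back into the original markup (end is exclusive).
    h_start = text_to_html[start]
    h_end = text_to_html[start + len(term) - 1] + 1
    return (html[:h_start]
            + f"<span class='{cls}'>" + html[h_start:h_end] + "</span>"
            + html[h_end:])
```

Caveat: if the match crosses a tag boundary (e.g. `<b>wor</b>ld` searched for "world"), the span would wrap the tag too and produce malformed markup; handling that case needs the match split into one span per text node.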



from How to highlight search text from string of html content without breaking

Custom layout for a radiobutton

I made a custom layout that I want to use for a radio button. The code for the Android class is here:

public class MyRadioButton extends LinearLayout implements View.OnClickListener {
    private ImageView iv;
    private TextView tv;
    private RadioButton rb;

    private View view;

    public MyRadioButton(Context context) {
        super(context);
        view = View.inflate(context, R.layout.my_radio_button, this);
        setOrientation(HORIZONTAL);

        rb = (RadioButton) view.findViewById(R.id.radioButton1);
        tv = (TextView) view.findViewById(R.id.textView1);
        iv = (ImageView) view.findViewById(R.id.imageView1);

        view.setOnClickListener(this);
        rb.setOnCheckedChangeListener(null);
    }

    public void setImageBitmap(Bitmap bitmap) {
        iv.setImageBitmap(bitmap);
    }

    public View getView() {
        return view;
    }

    @Override
    public void onClick(View v) {

        boolean nextState = !rb.isChecked();

        LinearLayout lGroup = (LinearLayout)view.getParent();
        if(lGroup != null){
            int child = lGroup.getChildCount();
            for(int i=0; i<child; i++){
                //uncheck all
                ((RadioButton)lGroup.getChildAt(i).findViewById(R.id.radioButton1)).setChecked(false);
            }
        }

        rb.setChecked(nextState);
    }

    public void setImage(Bitmap b){
        iv.setImageBitmap(b);
    }

    public void setText(String text){
        tv.setText(text);
    }

    public void setChecked(boolean isChecked){
        rb.setChecked(isChecked);
    }
}

And the code for layout is here

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:gravity="center_vertical"
    android:orientation="horizontal" >

    <RadioButton
        android:id="@+id/radioButton1"
        android:layout_width="wrap_content"
        android:layout_height="match_parent"
        android:gravity="top"
        android:text="" />

    <LinearLayout
        android:orientation="vertical"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content">

        <TextView
            android:id="@+id/textView1"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="Medium Text"
            android:textAppearance="?android:attr/textAppearanceMedium" />

        <ImageView
        android:id="@+id/imageView1"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:src="@drawable/wow_visa_prepaid" />

    </LinearLayout>
</LinearLayout>

At this moment I can't figure out how to change the inheritance from LinearLayout to RadioButton while keeping the same layout.



from Custom layout for a radiobutton

Dynamic import of Javascript module from stream

Goal: To support dynamic loading of Javascript modules contingent on some security or defined user role requirement such that even if the name of the module is identified in dev tools, it cannot be successfully imported via the console.

A JavaScript module can be easily uploaded to a cloud storage service like Firebase (#AskFirebase) and the code can be conditionally retrieved using a Firebase Cloud Function firebase.functions().httpsCallable("ghost"); based on the presence of a custom claim or similar test.

export const ghost = functions.https.onCall(async (data, context) => {
  if (context.auth.token.restrictedAccess !== true) {
    throw new functions.https.HttpsError('failed-precondition', 'The function must be called while authenticated.');
  }

  const storage = new Storage();
  const bucketName = 'bucket-name.appspot.com';
  const srcFilename = 'RestrictedChunk.chunk.js';

  // Downloads the file
  const response = await storage
    .bucket(bucketName)
    .file(srcFilename).download();
  const code = String.fromCharCode.apply(String, response[0])

  return {source: code};

})

In the end, what I want to do...

...is take a webpack'ed React component, put it in the cloud, conditionally download it to the client after a server-side security check, and import() it into the user's client environment and render it.

Storing the Javascript in the cloud and conditionally downloading it to the client are easy. Once I have the webpack'ed code in the client, I can use Function(downloadedRestrictedComponent) to add it to the user's environment, much as one would use import('./RestrictedComponent'), but what I can't figure out is how to get the default export from the component so I can actually render the thing.

import(pathToComponent) returns the loaded module, and as far as I know there is no option to pass import() a string or a stream, just a path to the module. And Function(downloadedComponent) will add the downloaded code into the client environment but I don't know how to access the module's export(s) to render the dynamically loaded React components.

Is there any way to dynamically import a Javascript module from a downloaded stream?
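As a hedged illustration only (in Python, not JS): the underlying idea is to build a module object from downloaded source text instead of a file path, then read its exports off the module object rather than evaluating the source blind. With the Python stdlib that looks like this:

```python
import types

def module_from_source(name: str, source: str) -> types.ModuleType:
    # Create an empty module and execute the downloaded source into its
    # namespace; compile with a fake filename so tracebacks stay readable.
    mod = types.ModuleType(name)
    exec(compile(source, f"<{name}>", "exec"), mod.__dict__)
    return mod
```

The browser-side equivalent of reading the module namespace is whatever dynamic `import()` resolves to (`loaded.default`, `loaded.Restricted`, ...), which is why `import(blobUrl)` is preferable to `Function(source)`: `import()` hands back the module's exports, while `Function` gives you no handle on them.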

edit to add: Thanks for the reply. I'm not familiar with the nuances of Blobs and URL.createObjectURL. Any idea why this would result in a not-found error?

const ghost = firebase.functions().httpsCallable("ghost");

const LoadableRestricted = Loadable({
  //  loader: () => import(/* webpackChunkName: "Restricted" */ "./Restricted"),
  loader: async () => {
    const ghostContents = await ghost();
    console.log(ghostContents);
    const myBlob = new Blob([ghostContents.data.source], {
      type: "application/javascript"
    });
    console.log(myBlob);
    const myURL = URL.createObjectURL(myBlob);
    console.log(myURL);
    return import(myURL);
  },
  render(loaded, props) {
    console.log(loaded);
    let Component = loaded.Restricted;
    return <Component {...props} />;
  },
  loading: Loading,
  delay: 2000
});



from Dynamic import of Javascript module from stream

how to turn speaker on/off programmatically in android Pie and UP

Same as this question and many others from a few years ago: how to turn speaker on/off programmatically in android 4.0

It seems that Android has changed the way it handles this.

Here are the things I tried to make an outgoing call use the speakerphone programmatically. None of these solutions worked for me on Android Pie, while they seem to work well on Android Nougat and Oreo.

Solution 1.

final static int FOR_MEDIA = 1;
final static int FORCE_NONE = 0;
final static int FORCE_SPEAKER = 1;

Class audioSystemClass = Class.forName("android.media.AudioSystem");
Method setForceUse = audioSystemClass.getMethod("setForceUse", int.class, int.class);
setForceUse.invoke(null, FOR_MEDIA, FORCE_SPEAKER);

2.

AudioManager audioManager = (AudioManager) getApplicationContext().getSystemService(Context.AUDIO_SERVICE);
if (audioManager != null) {
   audioManager.setMode(AudioManager.MODE_IN_COMMUNICATION);
   audioManager.setSpeakerphoneOn(true);
}

3.

AudioManager audioManager = (AudioManager) getApplicationContext().getSystemService(Context.AUDIO_SERVICE);
if (audioManager != null) {
   audioManager.setMode(AudioManager.MODE_IN_CALL);
   audioManager.setSpeakerphoneOn(true);
}

4.

Thread thread = new Thread() {
    @Override
    public void run() {
        try {
            while(true) {
                sleep(1000);
                audioManager.setMode(AudioManager.MODE_IN_CALL);
                if (!audioManager.isSpeakerphoneOn())
                    audioManager.setSpeakerphoneOn(true);
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
};
thread.start();

the app has the following permissions granted among many others:

<uses-permission android:name="android.permission.CALL_PHONE"/>
<uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS"/>
<uses-permission android:name="android.permission.MODIFY_PHONE_STATE" />

I have also tried the following solution, for outgoing calls only, and it also didn't work.

Intent callIntent = new Intent(Intent.ACTION_CALL);
callIntent.putExtra("speaker", true);
callIntent.setData(Uri.parse("tel:" + number));
context.startActivity(callIntent);


from how to turn speaker on/off programmatically in android Pie and UP

Flask API to provide JSON files to a simple HTML+JS+CSS webapp while keeping it secure

I've made a simple webapp that is going to show some data in a table, which will be updated weekly.

This update is done in the backend with some Python code that scrapes and alters some data before putting it in a SQLite database.

After doing some reading I learned that to deliver that data to my webapp I should make an API with Flask, which can take that data and deliver it to the JS in my webapp as JSON, which the JS then uses to populate the table. However, I read that I should secure my API with a username and password. But as it's a JS frontend that will retrieve data from the API, there is really no point, as the username and password would have to be hardcoded into the JS, where users can read them. (I think)

Should I expose my API to everyone, or is this not the way to go to be able to use SQLite data as a backend for my webapp? I am fine with keeping the API GET-only.
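The data path being described, SQLite rows serialized to JSON for the frontend, is a few lines regardless of framework. A stdlib-only sketch (in Flask, a plain GET route would just return this helper's output; the table name `weekly_stats` is an illustrative assumption):

```python
import json
import sqlite3

def rows_as_json(db_path: str, query: str) -> str:
    # Open the database read-per-request; row_factory makes rows dict-like.
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row
    try:
        rows = [dict(r) for r in conn.execute(query)]
    finally:
        conn.close()
    return json.dumps(rows)
```

Because the endpoint is read-only and the data is shown to every visitor anyway, exposing it as an unauthenticated GET is common practice; rate limiting or a same-origin check is the usual mitigation, rather than credentials hardcoded in the JS.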



from Flask API to provide JSON files to a simple HTML+JS+CSS webapp while keeping it secure

How can I make an Android app compatible with BrailleBack?

I'm looking to make the Android app I work on more accessible, and was wondering if we need to do anything special to make it compatible with services like BrailleBack so it can be used with braille readers like this one: https://uk.optelec.com/products/abc-640-en-uk-alva-bc640.html

We've already used tools like the accessibility scanner to help us identify changes that can make the app compatible with screen readers. Can anyone recommend any tutorials, etc. that can help us build compatibility with braille readers and services like BrailleBack? Or is it just a very similar approach to services like TalkBack, technically speaking?

I've tried looking through Google's developer documentation and accessibility support, but no luck.

https://developer.android.com/guide/topics/ui/accessibility/additional-resources https://support.google.com/accessibility/android#topic=6007234

It would be great to have an app that is better prepared for visually impaired users. Many thanks :-)



from How can I make an Android app compatible with BrailleBack?

Python 3 + Mysql: Incorrect string value '\xF0\x9F\x85\x97\xF0\x9F...'

I have found other questions/answers about "Incorrect string value" here on stack but none of the answers are working so maybe something is different about my case.

try:
    self.cnx = mysql.connector.connect(host='localhost', user='emails', password='***',
                                               database='extractor', raise_on_warnings=True)
except mysql.connector.Error as err:
    if err.errno == errorcode.ER_ACCESS_DENIED_ERROR:
        print("Something is wrong with your user name or password")
    elif err.errno == errorcode.ER_BAD_DB_ERROR:
        print("Database does not exist")
    else:
        print(err)
self.sql = self.cnx.cursor()

biography = str(row[8])
self.sql.execute("""insert into emails (biography)
                    values (%s)""",
                 (biography,))  # note the trailing comma: parameters must be a sequence

where biography is a utf8mb4_general_ci TEXT column of:

< Living the 🅗🅘🅖🅗 🅛🅘🅕🅔 > Azofra & Clifford Travel Food Fashion

I get:

mysql.connector.errors.DataError: 1366 (22007): Incorrect string value: '\xF0\x9F\x85\x97\xF0\x9F...' for column `extractor`.`emails`.`biography` at row 1
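For what it's worth, the bytes in the error are exactly the UTF-8 encoding of the 🅗 glyph: it lies outside the Basic Multilingual Plane, so it needs four bytes, which MySQL's legacy three-byte "utf8" charset rejects. A quick check:

```python
# The failing bytes \xF0\x9F\x85\x97 are the 4-byte UTF-8 encoding of 🅗
# (U+1F157). The legacy "utf8" charset stores at most 3 bytes per character,
# so both the column AND the connection must use utf8mb4.
ch = "\N{NEGATIVE CIRCLED LATIN CAPITAL LETTER H}"  # the 🅗 in the biography
assert ch.encode("utf-8") == b"\xf0\x9f\x85\x97"

# With mysql.connector the usual fix is to request utf8mb4 on connect
# (sketch only, not run here):
# self.cnx = mysql.connector.connect(host="localhost", user="emails",
#                                    password="***", database="extractor",
#                                    charset="utf8mb4",
#                                    raise_on_warnings=True)
```

Having the column be utf8mb4 is not enough on its own; if the client connection still negotiates plain utf8, the server rejects the 4-byte sequences before they reach the column.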


from Python 3 + Mysql: Incorrect string value '\xF0\x9F\x85\x97\xF0\x9F...'

Install latest cairo lib in Ubuntu for weasyprint

I just installed an Ubuntu bionic instance. It comes with cairo 1.14.6 preinstalled. I need at least cairo 1.15.4 for weasyprint to work properly. Unfortunately, even after installing the latest cairo, python still picks up the old library. I would appreciate any clues.

# Install weasyprint dependencies
sudo apt-get install build-essential python3-dev python3-pip python3-setuptools python3-wheel python3-cffi libcairo2 libpango-1.0-0 libpangocairo-1.0-0 libgdk-pixbuf2.0-0 libffi-dev shared-mime-info

# Check cairo lib version, prints "1.15.10-2ubuntu0.1"
dpkg -l '*cairo*'

# Install weasyprint
pip3 install weasyprint

# Test cairo version catch by python, still prints "1.14.6"
python3 -c "import cairocffi; print(cairocffi.cairo_version_string())"
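One diagnostic sketch for "Python still picks up the old library": cairocffi loads whatever shared object the dynamic linker resolves first, so asking ctypes what "cairo" maps to can reveal whether a stale copy (for example in a conda environment or `/usr/local/lib`; paths here are illustrative) shadows the freshly installed system one:

```python
# Ask the same resolution machinery Python's FFI layers use which file
# the name "cairo" maps to on this system. If this points somewhere other
# than the distro's libcairo, that copy is the one reporting 1.14.6.
import ctypes.util

resolved = ctypes.util.find_library("cairo")
print("dynamic linker resolves cairo to:", resolved)
```

If the resolved name points into a virtualenv or a directory on `LD_LIBRARY_PATH`, removing or reordering that path is usually what lets the new 1.15.x library win.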


from Install latest cairo lib in Ubuntu for weasyprint

IntersectionObserver rootMargin's positive and negative values are not working

The code for setting the rootMargin is shown below.

let observerOptions = {
    root: null,
    rootMargin: "100px",
    threshold: []
};

When I set it to 100px, the root element's bounding box isn't growing 100px; when I set it to -100px, the root element's bounding box isn't shrinking 100px.

Here is an example on jsFiddle. The example is taken directly from MDN's documentation of IntersectionObserver, and I only changed the value of rootMargin.



from IntersectionObserver rootMargin's positive and negative values are not working

Creating AVComposition with a Blurry Background in iOS

I'm not really sure how to go about asking this question, so I would appreciate any helpful feedback on improving it. I would have preferred not to post a question, but I've had this issue for several weeks now and have not been able to solve it. I will probably post a bounty when I am eligible to do so. I am trying to write a function that accepts a video URL (a local video) as input; this function tries to create a video with a blurry background, with the original video scaled down at its center. My issue is that my code works fine, except when I use videos recorded directly from the iPhone camera.

An example of what I am trying to achieve is the following (Taken from my code):

Screenshot of a working example

The input video here is an mp4. I have been able to make the code work as well with mov files that I've downloaded online. But when I use mov files recorded from the iOS camera, I end up with the following:

(How can I post pictures that take less space in the question?)

Screenshot of non working example

Now, the reason I am not sure how to ask this question is that there is a fair amount of code in the process and I haven't been able to fully narrow it down, but I believe the problem is in the function I will paste below. I will also post a link to a GitHub repository, where a barebones version of my project has been posted for anyone curious or willing to help. I must confess that the code I am using was originally written by a StackOverflow user named TheTiger in the following question: AVFoundation - Adding blur background to video. I've refactored segments of this and, with their permission, was allowed to post the question here.

My GitHub repo is linked here: GITHUB REPO. My demo is set up with 3 different videos: an mp4 downloaded from the web (working), an mov downloaded from the web (working), and an mov I've recorded on my phone (not working).

The code I imagine is causing the issue is here:

fileprivate func addAllVideosAtCenterOfBlur(asset: AVURLAsset, blurVideo: AVURLAsset, scale: CGFloat, completion: @escaping BlurredBackgroundManagerCompletion) {

    let mixComposition = AVMutableComposition()

    var instructionLayers : Array<AVMutableVideoCompositionLayerInstruction> = []

    let blurVideoTrack = mixComposition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: kCMPersistentTrackID_Invalid)

    if let videoTrack = blurVideo.tracks(withMediaType: AVMediaType.video).first {
        let timeRange = CMTimeRange(start: .zero, duration: blurVideo.duration)
        try? blurVideoTrack?.insertTimeRange(timeRange, of: videoTrack, at: .zero)
    }

    let timeRange = CMTimeRange(start: .zero, duration: asset.duration)

    let track = mixComposition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: kCMPersistentTrackID_Invalid)

    if let videoTrack = asset.tracks(withMediaType: AVMediaType.video).first {

        try? track?.insertTimeRange(timeRange, of: videoTrack, at: .zero)
        let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: track!)

        let properties = scaleAndPositionInAspectFitMode(forTrack: videoTrack, inArea: size, scale: scale)

        let videoOrientation = videoTrack.getVideoOrientation()
        let assetSize = videoTrack.assetSize()

        let preferredTransform = getPreferredTransform(videoOrientation: videoOrientation, assetSize: assetSize, defaultTransform: asset.preferredTransform, properties: properties)

        layerInstruction.setTransform(preferredTransform, at: .zero)

        instructionLayers.append(layerInstruction)
    }

    /// Adding audio
    if let audioTrack = asset.tracks(withMediaType: AVMediaType.audio).first {
        let aTrack = mixComposition.addMutableTrack(withMediaType: AVMediaType.audio, preferredTrackID: kCMPersistentTrackID_Invalid)
        try? aTrack?.insertTimeRange(timeRange, of: audioTrack, at: .zero)
    }


    /// Blur layer instruction
    let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: blurVideoTrack!)
    instructionLayers.append(layerInstruction)

    let mainInstruction = AVMutableVideoCompositionInstruction()
    mainInstruction.timeRange = timeRange
    mainInstruction.layerInstructions = instructionLayers

    let mainCompositionInst = AVMutableVideoComposition()
    mainCompositionInst.instructions = [mainInstruction]
    mainCompositionInst.frameDuration = CMTimeMake(value: 1, timescale: 30)
    mainCompositionInst.renderSize = size

    //let url = URL(fileURLWithPath: "/Users/enacteservices/Desktop/final_video.mov")
    let url = self.videoOutputUrl(filename: "finalBlurred")
    try? FileManager.default.removeItem(at: url)

    performExport(composition: mixComposition, instructions: mainCompositionInst, stage: 2, outputUrl: url) { (error) in
        if let error = error {
            completion(nil, error)
        } else {
            completion(url, nil)
        }
    }
}

The getPreferredTransform() function is also quite relevant:

fileprivate func getPreferredTransform(videoOrientation: UIImage.Orientation, assetSize: CGSize, defaultTransform: CGAffineTransform, properties: Properties) -> CGAffineTransform {
    switch videoOrientation {
    case .down:
        return handleDownOrientation(assetSize: assetSize, defaultTransform: defaultTransform, properties: properties)
    case .left:
        return handleLeftOrientation(assetSize: assetSize, defaultTransform: defaultTransform, properties: properties)
    case .right:
        return handleRightOrientation(properties: properties)
    case .up:
        return handleUpOrientation(assetSize: assetSize, defaultTransform: defaultTransform, properties: properties)
    default:
        return handleOtherCases(assetSize: assetSize, defaultTransform: defaultTransform, properties: properties)
    }
}

fileprivate func handleDownOrientation(assetSize: CGSize, defaultTransform: CGAffineTransform, properties: Properties) -> CGAffineTransform {
    let rotateTransform = CGAffineTransform(rotationAngle: -CGFloat(Double.pi/2.0))

    // Scale
    let scaleTransform = CGAffineTransform(scaleX: properties.scale.width, y: properties.scale.height)

    // Translate
    var ytranslation: CGFloat = assetSize.height
    var xtranslation: CGFloat = 0
    if properties.position.y == 0 {
        xtranslation = -(assetSize.width - ((size.width/size.height) * assetSize.height))/2.0
    }
    else {
        ytranslation = assetSize.height - (assetSize.height - ((size.height/size.width) * assetSize.width))/2.0
    }
    let translationTransform = CGAffineTransform(translationX: xtranslation, y: ytranslation)

    // Final transformation - concatenation
    let finalTransform = defaultTransform.concatenating(rotateTransform).concatenating(translationTransform).concatenating(scaleTransform)
    return finalTransform
}

fileprivate func handleLeftOrientation(assetSize: CGSize, defaultTransform: CGAffineTransform, properties: Properties) -> CGAffineTransform {

    let rotateTransform = CGAffineTransform(rotationAngle: -CGFloat(Double.pi))

    // Scale
    let scaleTransform = CGAffineTransform(scaleX: properties.scale.width, y: properties.scale.height)

    // Translate
    var ytranslation: CGFloat = assetSize.height
    var xtranslation: CGFloat = assetSize.width
    if properties.position.y == 0 {
        xtranslation = assetSize.width - (assetSize.width - ((size.width/size.height) * assetSize.height))/2.0
    } else {
        ytranslation = assetSize.height - (assetSize.height - ((size.height/size.width) * assetSize.width))/2.0
    }
    let translationTransform = CGAffineTransform(translationX: xtranslation, y: ytranslation)

    // Final transformation - concatenation
    let finalTransform = defaultTransform.concatenating(rotateTransform).concatenating(translationTransform).concatenating(scaleTransform)

    return finalTransform
}

fileprivate func handleRightOrientation(properties: Properties) -> CGAffineTransform  {
    let scaleTransform = CGAffineTransform(scaleX: properties.scale.width, y: properties.scale.height)

    // Translate
    let translationTransform = CGAffineTransform(translationX: properties.position.x, y: properties.position.y)

    let finalTransform  = scaleTransform.concatenating(translationTransform)
    return finalTransform
}

fileprivate func handleUpOrientation(assetSize: CGSize, defaultTransform: CGAffineTransform, properties: Properties) -> CGAffineTransform {

    return handleOtherCases(assetSize: assetSize, defaultTransform: defaultTransform, properties: properties)
}

fileprivate func handleOtherCases(assetSize: CGSize, defaultTransform: CGAffineTransform, properties: Properties) -> CGAffineTransform {
    let rotateTransform = CGAffineTransform(rotationAngle: CGFloat(Double.pi/2.0))

    let scaleTransform = CGAffineTransform(scaleX: properties.scale.width, y: properties.scale.height)

    var ytranslation: CGFloat = 0
    var xtranslation: CGFloat = assetSize.width
    if properties.position.y == 0 {
        xtranslation = assetSize.width - (assetSize.width - ((size.width/size.height) * assetSize.height))/2.0
    }
    else {
        ytranslation = -(assetSize.height - ((size.height/size.width) * assetSize.width))/2.0
    }
    let translationTransform = CGAffineTransform(translationX: xtranslation, y: ytranslation)

    let finalTransform = defaultTransform.concatenating(rotateTransform).concatenating(translationTransform).concatenating(scaleTransform)
    return finalTransform
}


from Creating AVComposition with a Blurry Background in iOS