Monday 31 December 2018

Where can I set contentOffset to the last item in UICollectionView before the view appears?

I am scrolling multiple horizontal collectionViews to the last (far right) item using:

    timelineCollectionView.scrollToMaxContentOffset(animated: false)
    postsCollectionView.scrollToMaxContentOffset(animated: false)

This works great, except I can't figure out where to call them.

Various places I've tried:

viewWillAppear - it doesn't scroll, as though the cells aren't fully loaded yet.

viewDidAppear - it does scroll perfectly. Awesome! Except now you can see it scroll, even with animated: false. The collectionView loads at the far left for a split second before jumping to the far right.

viewDidLayoutSubviews - this works perfectly - timing and everything! However, it gets called many times. If I could determine which subview was just laid out, perhaps I could scroll there, but I'm not sure how (a rough sketch of what I mean is below).
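For reference, this is the one-shot approach I have been considering for viewDidLayoutSubviews (just a sketch; the hasScrolledToEnd flag and the contentSize check are my own guard, not something I have confirmed is the right signal):

    private var hasScrolledToEnd = false

    override func viewDidLayoutSubviews() {
        super.viewDidLayoutSubviews()
        // only scroll once, and only after the collection views have a real contentSize
        guard !hasScrolledToEnd, timelineCollectionView.contentSize.width > 0 else { return }
        hasScrolledToEnd = true
        timelineCollectionView.scrollToMaxContentOffset(animated: false)
        postsCollectionView.scrollToMaxContentOffset(animated: false)
    }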

Is there a better option? How can I do this?

Also these are the functions I am using to set content offset:

extension UIScrollView {

    var minContentOffset: CGPoint {
        return CGPoint(
            x: -contentInset.left,
            y: -contentInset.top)
    }

    var maxContentOffset: CGPoint {
        return CGPoint(
            x: contentSize.width - bounds.width + contentInset.right,
            y: contentSize.height - bounds.height + contentInset.bottom)
    }

    func scrollToMinContentOffset(animated: Bool) {
        setContentOffset(minContentOffset, animated: animated)
    }

    func scrollToMaxContentOffset(animated: Bool) {
        setContentOffset(maxContentOffset, animated: animated)
    }
}



from Where can I set contentOffset to the last item in UICollectionView before the view appears?

Simple Node/Express app not recognizing Session store

I have an extremely small express app to illustrate a problem I'm having.

I'm using connect-redis as a session store in my Express app. I'm having a problem simply connecting to it, though. Printing out req.session.store results in undefined, as shown below:

const session = require('express-session')
const app = require('express')()
const RedisStore = require('connect-redis')(session);

const isDev = process.env.NODE_ENV !== 'production'

app.use(session({
  store: new RedisStore({
    host: 'localhost',
    port: 6379
  }),
  secret: 'super-secret-key', // TBD: grab from env
  resave: false,
  saveUninitialized: false,
  cookie: {
    maxAge: 1000 * 60 * 60 * 24,
    secure: !isDev, // require HTTPS in production
  }
}))

app.get('/', (req, res) => {
  console.log('**testing**')
  console.log(req.session.store)
  res.send('rendering this text')
})

app.listen(3001, () => {
  console.log('listening on 3001')
})


The output of this is:

listening on 3001
**testing**
undefined

In the store property, I've tried connecting via the url property and the client property in addition to the current host and port method.

I'm not seeing any errors either, so I'm not sure why the store is always undefined.

What am I missing with this?

Also, I did start Redis using redis-server, and I am able to connect to it through other clients.


Update 1
Also, I did look at this StackOverflow answer, which some found useful. The issue is, from version 1.5 onwards of express-session, the docs say you don't need the cookie-parser module, so I don't believe that's the issue.
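For reference, this is the check I intend to add next (a sketch based on my understanding that express-session exposes the configured store as req.sessionStore, not req.session.store):

    app.get('/', (req, res) => {
      // the store is attached to the request object, not to the session itself
      console.log(typeof req.sessionStore)               // expect 'object' if connect-redis is wired up
      req.session.views = (req.session.views || 0) + 1   // writing to the session should persist it to Redis
      res.send(`views: ${req.session.views}`)
    })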



from Simple Node/Express app not recognizing Session store

Vue.js CLI 3 - how can I create a vendor bundle for css/sass?

Using @vue/cli 3.x, I am modifying my vue.config.js slightly. I want to have separate CSS files such as app.css and vendor.css (transpiled from Sass), similar to how it's configured to treat the JavaScript. I am unsure how to set the proper config to achieve this. Am I loading my files incorrectly? Missing the mark entirely?

// vue.config.js
module.exports = {
  // [...]
  configureWebpack: {
    optimization: {
      splitChunks: {
        cacheGroups: {
          shared: {
            test: /[\\/]node_modules[\\/]/,
            name: 'vendor',
            enforce: true,
            chunks: 'all',
          }
        }
      }
    }
  }
};

My build currently results in...

dist
├── css
|   └── app.css
├── js
|   ├── app.js
|   └── vendor.js

app.css includes everything I've imported from node_modules. My style import in my main App.vue component is as follows...

<style lang="scss">
  @import '../node_modules/minireset.css/minireset.sass';
</style>

// [...]

My desired result is the following structure, where the "vendor" css/sass is extracted out...

dist
├── css
|   ├── app.css
|   └── vendor.css
├── js
|   ├── app.js
|   └── vendor.js


I've looked into MiniCssExtractPlugin, whose first sentence states the following...

This plugin extracts CSS into separate files

But I've found no examples of how to do it idiomatically in the Vue.js ecosystem. I've also tried to add the following to my vue.config.js, but nothing seems to take effect...

module.exports = {
  // [...]
  css: {
    extract: {
      filename: 'css/[name].css',
      chunkFilename: 'css/[name].css',
    },
  },
};


I've also found what should have been a home-run explanation in the Vue SSR Guide | CSS Management, but it uses webpack.optimize.CommonsChunkPlugin, which has been removed in favor of webpack.optimize.SplitChunksPlugin, throwing a build error...

Error: webpack.optimize.CommonsChunkPlugin has been removed, please use config.optimization.splitChunks instead.
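For completeness, this is the direction I am currently experimenting with (only a sketch; I have not confirmed that the extracted file actually comes out as vendor.css for this cache group):

    // vue.config.js
    module.exports = {
      css: {
        extract: true, // already the default for production builds
      },
      configureWebpack: {
        optimization: {
          splitChunks: {
            cacheGroups: {
              vendorStyles: {
                name: 'vendor',
                test: /[\\/]node_modules[\\/].+\.(css|sass|scss)$/,
                chunks: 'all',
                enforce: true,
              },
            },
          },
        },
      },
    };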



from Vue.js CLI 3 - how can I create a vendor bundle for css/sass?


Translate vuejs route paths

I had this fine idea about translating route paths, which doesn't sound so clever any more :) now that I have run into a problem. I hope you guys will see/find a solution.

This is my routes.js file, where routes are defined

export default [
    {
        path: '/:lang',
        component: {
            template: '<router-view></router-view>'
        },
        children: [
            {
                path: '',
                name: 'Home',
                component: load('Home')
            },
            {
                path: translatePath('contact'),
                name: 'Contact',
                component: load('Contact')
            },
            {
                path: translatePath('cookiePolicy'),
                name: 'CookiePolicy',
                component: load('CookiePolicy')
            },
        ]
    },
]

// and my simple function for translating paths
function translatePath(path) {
    let lang = Cookie.get('locale');

    let pathTranslations = {
        en: {
            contact: 'contact',
            cookiePolicy: 'cookie-policy',
        },
        sl: {
            contact: 'kontakt',
            cookiePolicy: 'piskotki',
        }
    };

    return pathTranslations[lang][path];
}

And this is my change language functionality in my component

setLocale(locale) {
    let selectedLanguage = locale.toLowerCase();
    this.$my.locale.setLocale(selectedLanguage); // update cookie locale
    console.log(this.$route.name);
    this.$router.replace({ name: this.$route.name, params: { lang: selectedLanguage } });
    location.reload();
},

The problem is the following. When the user executes the change-language functionality I successfully change the lang param, but the translated path behind this.$route.name stays in the old language. Is there a way to "reload" the routes, so the new route paths include the proper language? (A rough sketch of what I am aiming for is below.)
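To illustrate, here is a rough sketch (the pathKey meta field and the two-argument translatePath variant are hypothetical; they are not in my current code):

    setLocale(locale) {
        let selectedLanguage = locale.toLowerCase();
        this.$my.locale.setLocale(selectedLanguage); // update cookie locale

        // rebuild the current page's path using the *new* locale, because
        // routes.js was evaluated with the old cookie value
        let translated = translatePath(this.$route.meta.pathKey, selectedLanguage);
        window.location.href = '/' + selectedLanguage + '/' + translated;
    },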

If you need any additional information, please let me know and I will provide it. Thank you!



from Translate vuejs route paths

Is there an easy way to overwrite base directive?

I'm looking for a simple way to overwrite the behaviour of directives provided out of the box, for example ngIf.

The idea is that I can make a child directive, extend the behaviour, and afterwards declare it in place of the native one.

P.S.: I know that overwriting standard functionality is VERY BAD practice, but I'm doing it just for study/research purposes.
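To make the question concrete, this is the kind of structural directive I would write under my own selector (a minimal sketch; whether and how Angular lets me register it over the native ngIf selector is exactly what I am asking):

    import { Directive, Input, TemplateRef, ViewContainerRef } from '@angular/core';

    @Directive({ selector: '[myIf]' })
    export class MyIfDirective {
      constructor(private tpl: TemplateRef<any>, private vcr: ViewContainerRef) {}

      @Input() set myIf(condition: boolean) {
        // same contract as *ngIf: render the template only while the condition holds
        this.vcr.clear();
        if (condition) {
          this.vcr.createEmbeddedView(this.tpl);
        }
      }
    }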



from Is there an easy way to overwrite base directive?

UITableViewCell with left and right labels. How to make them display correctly?

I have a UITableViewCell with two labels which can have different content. Sometimes the left label is very big and the right label is small or empty, and sometimes the right label contains a lot of information.

Is it possible to make them display correctly (i.e. no label should be truncated and the height of the labels should be as small as possible) only by playing with the constraints and content hugging/compression resistance priorities?

I already tried adding constraints for minimum width, or changing the priorities for compression and hugging to 1000, but I always have some issues like either the text is truncated (see screenshot) or one of the labels is displayed on 10 lines and the other on only one line (see the second screenshot).

Here is some sample data that I'm playing with (demo project available here https://github.com/adi2004/iosamples/tree/master/TableView):

let data = [
    (left: "left one two three four five", right: "7"),
    (left: "left one two three four five 6 7 more here", right: "right one two three four five 6 7"),
    (left: "left one two three four five 6 7", right: "right one two three four five 6 7 something"),
    (left: "6 = ", right: "right one two three four five 6 7"),
    (left: "left one two three four five 6 7 right one two three four five 6 7 eight right one two three four five 6 7 eight", right: "")
]
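For reference, this is the kind of priority setup I have been experimenting with in code (a sketch of one combination I tried, where the right label keeps its width and the left one wraps):

    leftLabel.numberOfLines = 0
    rightLabel.numberOfLines = 0

    // let the right label keep its intrinsic width and wrap the left one instead
    rightLabel.setContentHuggingPriority(.required, for: .horizontal)
    rightLabel.setContentCompressionResistancePriority(.required, for: .horizontal)
    leftLabel.setContentHuggingPriority(.defaultLow, for: .horizontal)
    leftLabel.setContentCompressionResistancePriority(.defaultLow, for: .horizontal)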

Here are some samples of the issues I'm facing (Imgur screenshots, one for each case described above).



from UITableViewCell with left and right labels. How to make them display correctly?

How to load a sparse matrix efficiently?

Given a file with this structure:

  • Lines with a single column are keys
  • Lines with two columns are the non-zero values for the preceding key

For example:

abc
ef 0.85
kl 0.21
xyz 0.923
cldex 
plax 0.123
lion -0.831

How can I create a sparse matrix (csr_matrix) holding these values?

('abc', 'ef') 0.85
('abc', 'kl') 0.21
('abc', 'xyz') 0.923
('cldex', 'plax') 0.123
('cldex', 'lion') -0.831

I've tried:

from collections import defaultdict

x = """abc
ef  0.85
kl  0.21
xyz 0.923
cldex 
plax    0.123
lion    -0.831""".split('\n')

k1 = ''
arr = defaultdict(dict)
for line in x:
    line = line.strip().split('\t')
    if len(line) == 1:
        k1 = line[0]
    else:
        k2, v = line
        v = float(v)
        arr[k1][k2] = v

[out]

>>> arr
defaultdict(dict,
            {'abc': {'ef': 0.85, 'kl': 0.21, 'xyz': 0.923},
             'cldex': {'plax': 0.123, 'lion': -0.831}})

Having the nested dict structure isn't as convenient as the scipy sparse matrix structure.

Is there a way to read the file in the given format above easily into any of the scipy sparse matrix objects?
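For reference, this is how far I can get from the nested dict on my own (a sketch of the conversion step; I am asking whether there is something more direct built into scipy):

    from scipy.sparse import csr_matrix

    rows = sorted(arr)                                        # e.g. ['abc', 'cldex']
    cols = sorted({k2 for d in arr.values() for k2 in d})
    row_idx = {k: i for i, k in enumerate(rows)}
    col_idx = {k: i for i, k in enumerate(cols)}

    data, r, c = [], [], []
    for k1, d in arr.items():
        for k2, v in d.items():
            r.append(row_idx[k1])
            c.append(col_idx[k2])
            data.append(v)

    m = csr_matrix((data, (r, c)), shape=(len(rows), len(cols)))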



from How to load a sparse matrix efficiently?

ArrayBuffer.getElementSlowPath Does anyone face this error?

In my project I show library images and videos to the user, but on some devices I get a crash like ArrayBuffer.getElementSlowPath. Can anyone guide me on how I can replicate this issue? I got this issue from Crashlytics.


Here is my code for getting videos from PHAssets:

    func getVideo(withCompletionHandler completion: @escaping CompletionHandler) {
        let fetchOptions = PHFetchOptions()
        let requestOptions = PHVideoRequestOptions()
        fetchOptions.sortDescriptors = [NSSortDescriptor(key: "creationDate", ascending: false)]
        let fetchResult: PHFetchResult = PHAsset.fetchAssets(with: PHAssetMediaType.video, options: fetchOptions)

        fetchResult.enumerateObjects({ (assest, index, isCompleted) in
            // skip iTunes-synced assets, request the AVAsset for everything else
            if assest.sourceType != PHAssetSourceType.typeiTunesSynced {
                PHImageManager.default().requestAVAsset(forVideo: assest, options: requestOptions, resultHandler: { (asset: AVAsset?, audioMix: AVAudioMix?, info: [AnyHashable: Any]?) in
                    if let urlAsset = asset as? AVURLAsset {
                        let objAssest = GallaryAssets()
                        objAssest.objAssetsType = assetsType.videoType
                        objAssest.createdDate = assest.creationDate
                        objAssest.assetsDuration = assest.duration
                        objAssest.assetsURL = urlAsset.url
                        objAssest.localizationStr = assest.localIdentifier
                        objAssest.locationInfo = LocationInfo()
                        if let location = assest.location {
                            objAssest.locationInfo.Latitude = "\(location.coordinate.latitude)"
                            objAssest.locationInfo.Longitude = "\(location.coordinate.longitude)"
                        }

                        self.media.add(objAssest)
                    }
                    // note: this completion fires once per enumerated asset
                    completion(self.media)
                })
            }
        })
    }
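One thing I am considering ruling out (just my own guess, not a confirmed cause): requestAVAsset can call its result handler on a background thread while the enumeration is still running, so self.media may be mutated from several threads at once. A minimal sketch of serialising those writes:

    private let mediaQueue = DispatchQueue(label: "gallery.assets.append")

    // inside the result handler, instead of mutating the array directly:
    mediaQueue.sync {
        self.media.add(objAssest)
    }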



from ArrayBuffer.getElementSlowPath Does anyone face this error?

What's causing so much overhead in Google BigQuery query?

I am running the following function to profile a BigQuery query:

# q = "SELECT * FROM bqtable LIMIT 1'''

def run_query(q):
    t0 = time.time()
    client = bigquery.Client()
    t1 = time.time()
    res = client.query(q)
    t2 = time.time()
    results = res.result()
    t3 = time.time()
    records = [_ for _ in results]
    t4 = time.time()
    print (records[0])
    print ("Initialize BQClient: %.4f | ExecuteQuery: %.4f | FetchResults: %.4f | PrintRecords: %.4f | Total: %.4f | FromCache: %s" % (t1-t0, t2-t1, t3-t2, t4-t3, t4-t0, res.cache_hit))

And, I get something like the following:

Initialize BQClient: 0.0007 | ExecuteQuery: 0.2854 | FetchResults: 1.0659 | PrintRecords: 0.0958 | Total: 1.4478 | FromCache: True

I am running this on a GCP machine and it is only fetching ONE result in location US (same region, etc.), so the network transfer should (I hope?) be negligible. What's causing all the overhead here?

I tried this on the GCP console and it says the cache hit takes less than 0.1s to return, but in actuality, it's over a second. Here is an example video to illustrate: https://www.youtube.com/watch?v=dONZH1cCiJc.

Notice for the first query, for example, it says it returned in 0.253s from cache.

However, if you view the above video, the query actually STARTED at 7 seconds and 3 frames and COMPLETED at 8 seconds and 13 frames.

That is well over a second -- almost a second and a half!! That number is similar to what I get when I execute a query from the command-line in python.


So why then does it report that it only took 0.253s when in actuality, to do the query and return the one result, it takes over five times that amount?

In other words, it seems like there's about a second overhead REGARDLESS of the query time (which are not noted at all in the execution details). Are there any ways to reduce this time?
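One thing I plan to check next (a sketch; it assumes the QueryJob timestamp fields in google-cloud-bigquery behave as documented): comparing the server-side job timestamps with my wall-clock timings should separate BigQuery's own execution time from client/HTTP overhead.

    res = client.query(q)
    rows = list(res.result())

    # server-side view of the same job
    print("created:", res.created)
    print("started:", res.started)
    print("ended:  ", res.ended)
    print("server-side duration: %.3fs" % (res.ended - res.started).total_seconds())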



from What's causing so much overhead in Google BigQuery query?

Understanding aiohttp.TCPConnector pooling & connection limits

I am experimenting with the limit and limit_per_host parameters to aiohttp.connector.TCPConnector.

In the script below, I pass connector = aiohttp.connector.TCPConnector(limit=25, limit_per_host=5) to aiohttp.ClientSession, then open 2 requests to docs.aiohttp.org and 3 to github.com.

The result of session.request is an instance of aiohttp.ClientResponse, and in this example I intentionally do not release it, either via .close() or __aexit__. I would assume this would keep the connection checked out of the pool and decrease the available connections for that (host, ssl, port) triple by 1.

The table below represents the ._available_connections() after each request. Why does the number hang at 4 even after completing the 2nd request to docs.aiohttp.org? Both of these connections are presumably still open and haven't accessed ._content yet or been closed. Shouldn't the available connections decrease by 1?

After Request Num.        To                    _available_connections
1                         docs.aiohttp.org      4
2                         docs.aiohttp.org      4   <--- Why?
3                         github.com            4
4                         github.com            3
5                         github.com            2

Furthermore, why does ._acquired_per_host only ever contain 1 key? I think I may be misunderstanding the methods of TCPConnector; what explains the behavior above?
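For comparison, this is the variant I also plan to try (a sketch, based on my understanding that reading the body, or releasing the response, hands its connection back to the pool):

    async def fetch_and_release(session, url):
        resp = await session.request("GET", url)
        await resp.read()   # consuming the body should return the connection to the pool
        return resp.status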

Full script:

import aiohttp


async def main():
    connector = aiohttp.connector.TCPConnector(limit=25, limit_per_host=5)

    print("Connector arguments:")
    print("_limit:", connector._limit)
    print("_limit_per_host:", connector._limit_per_host)
    print("-" * 70, end="\n\n")

    async with aiohttp.client.ClientSession(
        connector=connector,
        headers={"User-Agent": "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2225.0 Safari/537.36"},
        raise_for_status=True
    ) as session:

        # Make 2 connections to docs.aiohttp.org and 
        #      3 connections to github.com
        #
        # Note that these instances intentionally do not use
        # .close(), either explicitly or via __aexit__
        # in an async with block

        r1 = await session.request(
            "GET",
            "https://docs.aiohttp.org/en/stable/client_reference.html#connectors"
        )
        print_connector_attrs("r1", session)

        r2 = await session.request(
            "GET",
            "https://docs.aiohttp.org/en/stable/index.html"
        )
        print_connector_attrs("r2", session)

        r3 = await session.request(
            "GET",
            "https://github.com/aio-libs/aiohttp/blob/master/aiohttp/client.py"
        )
        print_connector_attrs("r3", session)

        r4 = await session.request(
            "GET",
            "https://github.com/python/cpython/blob/3.7/Lib/typing.py"
        )
        print_connector_attrs("r4", session)

        r5 = await session.request(
            "GET",
            "https://github.com/aio-libs/aiohttp"
        )
        print_connector_attrs("r5", session)


def print_connector_attrs(name: str, session: aiohttp.client.ClientSession):
    print("Connection attributes for", name, end="\n\n")
    conn = session._connector
    print("_conns:", conn._conns, end="\n\n")
    print("_acquired:", conn._acquired, end="\n\n")
    print("_acquired_per_host:", conn._acquired_per_host, end="\n\n")
    print("_available_connections:")
    for k in conn._acquired_per_host:
        print("\t", k, conn._available_connections(k))
    print("-" * 70, end="\n\n")


if __name__ == "__main__":
    import asyncio
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())

The output is pasted at https://pastebin.com/rvfzMTe3. I've put it there rather than here because the lines are long and not very wrap-able.



from Understanding aiohttp.TCPConnector pooling & connection limits

Why doesn't my google sign in window load?

Whenever I try to sign in by launching the Google sign-in intent, it goes directly to onActivityResult without giving me the chance to choose an account. It just dims the screen, but the window to select an account doesn't show up. The login then fails with this ApiException:

java.lang.ClassNotFoundException: com.google.android.gms.common.api.Scope

and

java.lang.RuntimeException: Canvas: trying to draw too large(256000000bytes) bitmap.

(full stack trace: https://pastebin.com/vBZeBLu0)

All of my dependencies are up to date and my credentials (OAuth client ID) are set up correctly. I tried the solutions to other similar problems, but none of them solved my issue. I also checked that the user is logged out completely from the device, and the issue kept recurring.

This is my Log-in Activity:

public class Login extends Activity implements GoogleApiClient.OnConnectionFailedListener, GoogleApiClient.ConnectionCallbacks  {                                   

private static final String TAG = "LoginProcess";

SignInButton gsignInButton;
private static final int RC_SIGN_IN = 1;
DatabaseReference mRef;
FirebaseAuth mAuth;
FirebaseAuth.AuthStateListener mAuthListener;
GoogleSignInOptions gso;
GoogleApiClient mGoogleApiClient;
GoogleSignInClient mGoogleSignInClient; // assigned in onCreate() below


@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.welcomescreenlogin);

    mAuth = FirebaseAuth.getInstance();



    gso = new GoogleSignInOptions.Builder(GoogleSignInOptions.DEFAULT_SIGN_IN)
            .requestIdToken(getString(R.string.default_web_client_id))
            .requestEmail()
            .build();

    mGoogleSignInClient = GoogleSignIn.getClient(this, gso);

    gsignInButton = findViewById(R.id.sib);

    gsignInButton.setColorScheme(SignInButton.COLOR_DARK); // wide button style
    gsignInButton.setOnClickListener(myhandler);


}

View.OnClickListener myhandler = new View.OnClickListener() {
    public void onClick(View v) {
       signIn();
    }

};



public void signIn() {

    Intent signInIntent = mGoogleSignInClient.getSignInIntent();
    startActivityForResult(signInIntent, RC_SIGN_IN);
}

@Override
public void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);

    if (requestCode == RC_SIGN_IN) {
        Task<GoogleSignInAccount> task = GoogleSignIn.getSignedInAccountFromIntent(data);
        try {
            // Google Sign In was successful, authenticate with Firebase
            GoogleSignInAccount account = task.getResult(ApiException.class);
            firebaseAuthWithGoogle(account);
        } catch (ApiException e) {
            // Google Sign In failed, update UI appropriately
            Log.w(TAG, "Google sign in failed", e);  //this is where it always lands.
            Toast.makeText(this, "login failed", Toast.LENGTH_SHORT).show();
            // ...
        }
    }

}

full code for Login Activity: https://pastebin.com/6Yi7vzD7

Gradle:

apply plugin: 'com.android.application'

android {
    compileSdkVersion 28
    buildToolsVersion '28.0.3'

    defaultConfig {
        applicationId "com.example.sanchez.worldgramproject"
        minSdkVersion 21
        targetSdkVersion 28
        multiDexEnabled true
        versionCode 0
        versionName "0"


    }
    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
        }
        debug {
            debuggable true
        }
    }
}



dependencies {

    compile fileTree(dir: 'libs', include: ['*.jar'])
    api "com.google.android.material:material:1.0.0"

    implementation 'com.github.madrapps:pikolo:1.1.6'
    implementation 'com.google.android.gms:play-services-drive:16.0.0'
    implementation 'com.google.android.material:material:1.1.0-alpha02'
    implementation 'com.github.bumptech.glide:glide:3.8.0'
    implementation'com.firebaseui:firebase-ui-storage:2.3.0'
    implementation 'com.google.firebase:firebase-auth:16.1.0'
    implementation 'com.google.android.gms:play-services-auth:16.0.1'
    implementation 'androidx.appcompat:appcompat:1.0.2'
    implementation 'androidx.cardview:cardview:1.0.0'
    implementation 'androidx.recyclerview:recyclerview:1.0.0'
    implementation 'com.jakewharton:butterknife:8.8.1'
    implementation 'androidx.multidex:multidex:2.0.1'
    implementation 'pl.droidsonroids.gif:android-gif-drawable:1.2.6'
    implementation 'de.hdodenhof:circleimageview:2.2.0'
    implementation 'androidx.legacy:legacy-support-v4:1.0.0'
    implementation 'androidx.exifinterface:exifinterface:1.0.0'
    implementation 'com.google.firebase:firebase-storage:16.0.5'
    implementation 'com.google.android.gms:play-services-maps:16.0.0'
    implementation 'com.google.firebase:firebase-database:16.0.5'
    testImplementation 'junit:junit:4.12'

}


apply plugin: 'com.google.gms.google-services'

I have no idea what the cause of the problem is. How can I solve this issue and make the account selection window pop up?



from Why doesn't my google sign in window load?

Is there a way to call a python code in Excel-VBA?

I have an Excel file (Main.xlsm) containing macros. I have a Python file (python.py) that generates a subsidiary Excel file (sub.xlsx), which I then use in the macros of the Main.xlsm file. The sub.xlsx file generated by running python.py is saved in the same working directory.

Now I want python.py to be executed during the run of the Main.xlsm macros, and then use the resulting xlsx file. I basically want to remove the step of executing python.py externally. Is there a command for that? I am new to VBA.
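For reference, this is the kind of thing I was imagining on the VBA side (only a sketch; it assumes python is on the PATH and that python.py and sub.xlsx sit next to Main.xlsm):

    Sub RunPythonThenUseOutput()
        Dim wsh As Object
        Set wsh = CreateObject("WScript.Shell")

        ' 0 = hidden window, True = wait for the script to finish before continuing
        wsh.Run "python """ & ThisWorkbook.Path & "\python.py""", 0, True

        ' sub.xlsx should now exist in the same folder
        Workbooks.Open ThisWorkbook.Path & "\sub.xlsx"
    End Sub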



from Is there a way to call a python code in Excel-VBA?

Sunday 30 December 2018

How to automatically apply argument to class constructor?

Instead of writing the following:

map((value) => new User(value))

Somehow write something like this:

map(new User)


User is a TypeScript class:

class  User {
  constructor(public name: string) {}
}

const map = <T, R>(project: (value: T) => R) => {}

I am not sure if this is possible or not.
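The closest thing I can come up with myself is a small helper rather than built-in syntax (a sketch):

    const construct = <T, R>(Ctor: new (value: T) => R) => (value: T): R => new Ctor(value);

    // usage
    map(construct(User));   // instead of map((value) => new User(value))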



from How to automatically apply argument to class constructor?

GraphQL: How nested to make schema?

This past year I converted an application to use GraphQL. It's been great so far; during the conversion I essentially ported all the services that backed my REST endpoints to back GraphQL queries and mutations. The app is working well, but I would like to continue to evolve my object graph.

Lets consider I have the following relationships.

User -> Team -> Boards -> Lists -> Cards -> Comments

I currently have two different nested schemas. The first is User -> Team:

type User {
  id: ID!
  email: String!
  role: String!
  name: String!
  resetPasswordToken: String
  team: Team!
  lastActiveAt: Date
}

type Team {
  id: ID!
  inviteToken: String!
  owner: String!
  name: String!
  archived: Boolean!
  members: [String]
}

Then I have Boards -> Lists -> Cards -> Comments

type Board {
  id: ID!
  name: String!
  teamId: String!
  lists: [List]
  createdAt: Date
  updatedAt: Date
}

type List {
  id: ID!
  name: String!
  order: Int!
  description: String
  backgroundColor: String
  cardColor: String
  archived: Boolean
  boardId: String!
  ownerId: String!
  teamId: String!
  cards: [Card]
}

type Card {
  id: ID!
  text: String!
  order: Int
  groupCards: [Card]
  type: String
  backgroundColor: String
  votes: [String]
  boardId: String
  listId: String
  ownerId: String
  teamId: String!
  comments: [Comment]
  createdAt: Date
  updatedAt: Date
}

type Comment {
  id: ID!
  text: String!
  archived: Boolean
  boardId: String!
  ownerId: String
  teamId: String!
  cardId: String!
  createdAt: Date
  updatedAt: Date
}

Which works great. But I'm curious how nested I can truly make my schema. If I added the rest to make the graph complete:

type Team {
  id: ID!
  inviteToken: String!
  owner: String!
  name: String!
  archived: Boolean!
  members: [String]
  boards: [Board]   # the newly added field
}

This would achieve a much deeper graph. However, I worry about how complicated mutations would become. Specifically, from the board schema downwards I need to publish subscription updates for all actions, and if adding a comment means publishing the entire board update, that is incredibly inefficient. On the other hand, building subscription logic for each create/update of every nested type seems like a ton of code to achieve something simple.
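For what it's worth, the narrower alternative I am weighing looks roughly like this (names are illustrative only, not something I have implemented):

    type Subscription {
      commentAdded(cardId: ID!): Comment
      cardUpdated(boardId: ID!): Card
    }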

Any thoughts on what the right depth is for object graphs, keeping in mind that every object besides a user needs to be broadcast to multiple users?

Thanks



from GraphQL: How nested to make schema?

babel-loader not working, giving error on UglifyJS (ES6)

I have implemented the FlipClockJs vue component and it works fine when I run

yarn encore dev

However, as soon as I run

yarn encore production

I get the following error

ERROR Failed to compile with 1 errors12:30:24 PM

error

app.bc30a410.js from UglifyJs Unexpected token: operator (>) [app.bc30a410.js:12470,21]

I tried resolving this by adding this to my webpack file:

  .addLoader({
    test: /\.js$/,
    loader: "babel-loader",
    include: ['node_modules/@mvpleung/flipclock']
  })

But this just gives me the same result. My entire webpack file looks like this:

var Encore = require("@symfony/webpack-encore");
const { VueLoaderPlugin } = require("vue-loader");
const MinifyPlugin = require('babel-minify-webpack-plugin');

Encore.setOutputPath("public/build/")
  .setPublicPath("/build")

  .addEntry("app", "./resources/assets/js/app.js")

  .cleanupOutputBeforeBuild()
  .enableSourceMaps(!Encore.isProduction())
  .enableVersioning(Encore.isProduction())

  .addLoader({
    test: /\.vue$/,
    loader: "vue-loader"
  })
  .addLoader({
    test: /\.js$/,
    loader: 'babel-loader',
    include: ['/node_modules/@mvpleung/flipclock']
  })
  .addLoader({
    test: /\.(js|vue)$/,
    enforce: "pre",
    loader: "eslint-loader",
    exclude: /node_modules/,
    options: {
      fix: true
    }
  })
  .addPlugin(new VueLoaderPlugin())
  .addPlugin(new MinifyPlugin())
  .addAliases({
    vue: "vue/dist/vue.js"
  })

  .enableSassLoader()
  .enablePostCssLoader()
;

module.exports = Encore.getWebpackConfig();

Any idea what might be wrong here? The component works fine when running yarn encore dev.

Using Vue 2.5.17
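One thing I noticed while re-reading my config (sketched below): the include path starts with a leading slash, so webpack treats it as an absolute path from the filesystem root instead of the project's node_modules. This is what I intend to test next:

    var path = require('path');

      .addLoader({
        test: /\.js$/,
        loader: 'babel-loader',
        // resolve to an absolute path inside the project, so the FlipClock
        // package actually gets transpiled before UglifyJS sees it
        include: [path.resolve(__dirname, 'node_modules/@mvpleung/flipclock')]
      })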



from babel-loader not working, giving error on UglifyJS (ES6)

Register customizable environment variables with anaconda

By running conda env export > environment.yml I can make it easy for people to clone and replicate my environment.

But I also need them to set some environment variables. When using PHP (Laravel), I had a .env file (ignored by git) where the user could put account details, passwords, tokens etc. A file .env.example was provided allowing the user to see the required values. So I implemented that with a python class but it was frowned upon in r/learnpython ("...to give your user rope to hang themselves with").

After further reading, I created a file named activate in my project root:

export \
    GITHUB_ACCESS_TOKEN="your value goes here" \
    BENNO="test"

So the user now just runs source activate to register the variables. But I see several problems:

  • activate is committed; how do I protect the user from accidentally publishing this?
  • After exiting my conda environment, the variable GITHUB_ACCESS_TOKEN was still active. I expected the conda environment to keep a separate set of environment variables?
  • The user has to run the activate script every time they relaunch the terminal
  • The activation script does not support Windows usage
  • The principle is still the same as the .env.example approach in PHP, which is supposedly bad??

To summarize, I would like a clean, simple way to store both the dependencies AND customizable environment vars, allowing simple installation for conda users, and if possible also for a wider set of Python users. What are some good practices here? Can I somehow list the vars in environment.yml?
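For reference, the closest built-in mechanism I have found so far is conda's per-environment activation hooks (a sketch; paths are relative to the environment prefix, and the values would still have to be filled in by each user rather than committed):

    # run once inside the activated environment
    mkdir -p "$CONDA_PREFIX/etc/conda/activate.d" "$CONDA_PREFIX/etc/conda/deactivate.d"

    # $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
    export GITHUB_ACCESS_TOKEN="your value goes here"
    export BENNO="test"

    # $CONDA_PREFIX/etc/conda/deactivate.d/env_vars.sh
    unset GITHUB_ACCESS_TOKEN
    unset BENNO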



from Register customizable environment variables with anaconda

Flask, Jinja2, Babel error on "$" character

I migrated my code from webapp2 to Flask (I deploy my code on Google App Engine).

However, I can no longer use this string: "Error: Max %1$d characters"

Initialization

flask_app = Flask(__name__)
babel = Babel(flask_app, default_domain='strings')

Html template

<div class="..."></div>

I know that this is not the best usage, but I need to keep %1$d as a placeholder. (It was working with webapp2.)
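A minimal reproduction of the underlying error, independent of Flask/Jinja2 (old-style %-formatting simply does not understand the positional %1$d syntax that PHP/gettext-style catalogs use):

    msg = "Error: Max %1$d characters"
    msg % (5,)   # raises ValueError: unsupported format character '$'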

Log:

...
File ".../libs/flask/templating.py", line 135, in render_template
context, ctx.app)
File ".../libs/flask/templating.py", line 117, in _render
rv = template.render(context)
File ".../libs/jinja2/environment.py", line 1008, in render
return self.environment.handle_exception(exc_info, True)
File ".../libs/jinja2/environment.py", line 780, in handle_exception
reraise(exc_type, exc_value, tb)
File ".../app/templates/filename.html", line 567, in top-level template code
<div class="invalid-feedback"></div>
ValueError: unsupported format character '$' (0x24) at index 29

I've already tried to use the "| e" or "| safe" after " _('error_long_value')" in the HTML template, removing the replace().



from Flask, Jinja2, Babel error on "$" character

$("body").on("drop") with Dropzone.js

I have a dropzone that works great. I also have a few elements as additional "targets" where the user can drop files.

Problem is, I don't know how to "forward" the dropped files from the target element to the dropzone for upload. I'd prefer something such as the following code sample over spawning multiple dropzones, as that feels very hacky in this scenario, given that in the future the number of target-drop elements may be 10 or 15+.

"drop": function(e) {

    e.preventDefault();

    myDropzone.upload(e.originalEvent.dataTransfer); // Any way to do something like this?

}
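The closest I have come so far is something like this (a sketch relying on Dropzone's addFile method; I am not sure it is the intended public API, hence the question):

    "drop": function(e) {

        e.preventDefault();

        var files = e.originalEvent.dataTransfer.files;
        for (var i = 0; i < files.length; i++) {
            myDropzone.addFile(files[i]);   // hand each dropped file to the existing dropzone
        }

    }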



from $("body").on("drop") with Dropzone.js

$("body").on("drop") with Dropzone.js

I have a dropzone that works great. I also have a few elements as additional "targets" where the user can drop files.

Problem is, I don't know how to "forward" the dropped files from the target element to the dropzone for upload. I'd prefer something such as the following code sample over spawning multiple dropzones, as that feels very hacky in this scenario, given that in the future the number of target-drop elements may be 10 or 15+.

"drop": function(e) {

    e.preventDefault();

    myDropzone.upload(e.originalEvent.dataTransfer); // Any way to do something like this?

}



from $("body").on("drop") with Dropzone.js

Python multiprocessing: understanding logic behind `chunksize`

What factors determine an optimal chunksize argument to methods like multiprocessing.Pool.map()?

From the docs:

The chunksize argument is the same as the one used by the map() method. For very long iterables using a large value for chunksize can make the job complete much faster than using the default value of 1.

Example - say that I am:

  • Passing an iterable to .map() that has ~15 million elements;
  • Working on a machine with 24 cores and using the default processes = os.cpu_count() within multiprocessing.Pool().

My naive thinking is to give each of 24 workers an equally-sized chunk, i.e. 15_000_000 / 24 or 625,000. Large chunks should reduce turnover/overhead while fully utilizing all workers. But it seems that this is missing some potential downsides of giving large batches to each worker. Is this an incomplete picture, and what am I missing?


Part of my question stems from the default logic for if chunksize=None: both .map() and .starmap() call .map_async(), which looks like this:

def _map_async(self, func, iterable, mapper, chunksize=None, callback=None,
               error_callback=None):
    # ... (materialize `iterable` to list if it's an iterator)
    if chunksize is None:
        chunksize, extra = divmod(len(iterable), len(self._pool) * 4)  # ????
        if extra:
            chunksize += 1
    if len(iterable) == 0:
        chunksize = 0

What's the logic behind divmod(len(iterable), len(self._pool) * 4)? This implies that the chunksize will be closer to 15_000_000 / (24 * 4) == 156_250. What's the intention in multiplying len(self._pool) by 4?

This makes the resulting chunksize a factor of 4 smaller than my "naive logic" from above, which consists of just dividing the length of the iterable by number of workers in pool._pool.
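To make the difference concrete, here is the arithmetic for my case (a quick sketch mirroring the pool's own divmod logic):

    n_items, n_workers = 15_000_000, 24

    naive = n_items // n_workers                       # 625_000 -> exactly 1 chunk per worker
    chunksize, extra = divmod(n_items, n_workers * 4)  # (156_250, 0)
    if extra:
        chunksize += 1

    print(naive, chunksize)  # 625000 156250 -> 96 chunks, roughly 4 per worker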


Related answer that is helpful but a bit too high-level: Python multiprocessing: why are large chunksizes slower?.



from Python multiprocessing: understanding logic behind `chunksize`

Flask Celery task locking

I am using Flask with Celery and I am trying to lock a specific task so that it can only be run one at a time. The Celery docs give an example of doing this: Ensuring a task is only executed one at a time. The example given is for Django, but I am using Flask. I have done my best to convert it to work with Flask, but I still see myTask1, which holds the lock, able to run multiple times.

One thing that is not clear to me is whether I am using the cache correctly; I have never used it before, so all of it is new to me. One thing from the docs that is mentioned but not explained is this:

Doc Notes:

In order for this to work correctly you need to be using a cache backend where the .add operation is atomic. memcached is known to work well for this purpose.

I'm not truly sure what that means. Should I be using the cache in conjunction with a database, and if so, how would I do that? I am using MongoDB. In my code I just have this setup for the cache, cache = Cache(app, config={'CACHE_TYPE': 'simple'}), as that is what was mentioned in the Flask-Caching docs (Flask-Cache Docs).
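My current reading of that note, which I would like confirmed: the 'simple' cache is per-process, so its add() cannot act as a cross-worker lock, and I would need something like memcached (or Redis) behind Flask-Caching instead. A sketch of what I think that config would look like:

    # requires a running memcached plus a client library such as pylibmc or python-memcached
    cache = Cache(app, config={
        'CACHE_TYPE': 'memcached',
        'CACHE_MEMCACHED_SERVERS': ['127.0.0.1:11211'],
    })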

Another thing that is not clear to me is whether there is anything different I need to do, as I am calling myTask1 from within my Flask route task1.

Here is an example of my code that I am using.

from flask import (Flask, render_template, flash, redirect,
                   url_for, session, logging, request, g, render_template_string, jsonify)
from flask_caching import Cache
from contextlib import contextmanager
from celery import Celery
from Flask_celery import make_celery
from celery.result import AsyncResult
from celery.utils.log import get_task_logger
from celery.five import monotonic
from flask_pymongo import PyMongo
from hashlib import md5
import pymongo
import time


app = Flask(__name__)

cache = Cache(app, config={'CACHE_TYPE': 'simple'})
app.config['SECRET_KEY']= 'super secret key for me123456789987654321'

######################
# MONGODB SETUP
#####################
app.config['MONGO_HOST'] = 'localhost'
app.config['MONGO_DBNAME'] = 'celery-test-db'
app.config["MONGO_URI"] = 'mongodb://localhost:27017/celery-test-db'


mongo = PyMongo(app)


##############################
# CELERY ARGUMENTS
##############################


app.config['CELERY_BROKER_URL'] = 'amqp://localhost//'
app.config['CELERY_RESULT_BACKEND'] = 'mongodb://localhost:27017/celery-test-db'

app.config['CELERY_RESULT_BACKEND'] = 'mongodb'
app.config['CELERY_MONGODB_BACKEND_SETTINGS'] = {
    "host": "localhost",
    "port": 27017,
    "database": "celery-test-db", 
    "taskmeta_collection": "celery_jobs",
}

app.config['CELERY_TASK_SERIALIZER'] = 'json'


celery = Celery('task',broker='mongodb://localhost:27017/jobs')
celery = make_celery(app)


LOCK_EXPIRE = 60 * 2  # Lock expires in 2 minutes


@contextmanager
def memcache_lock(lock_id, oid):
    timeout_at = monotonic() + LOCK_EXPIRE - 3
    # cache.add fails if the key already exists
    status = cache.add(lock_id, oid, LOCK_EXPIRE)
    try:
        yield status
    finally:
        # memcache delete is very slow, but we have to use it to take
        # advantage of using add() for atomic locking
        if monotonic() < timeout_at and status:
            # don't release the lock if we exceeded the timeout
            # to lessen the chance of releasing an expired lock
            # owned by someone else
            # also don't release the lock if we didn't acquire it
            cache.delete(lock_id)



@celery.task(bind=True, name='app.myTask1')
def myTask1(self):

    self.update_state(state='IN TASK')

    lock_id = self.name

    with memcache_lock(lock_id, self.app.oid) as acquired:
        if acquired:
            # do work if we got the lock
            print('acquired is {}'.format(acquired))
            self.update_state(state='DOING WORK')
            time.sleep(90)
            return 'result'

    # otherwise, the lock was already in use
    raise self.retry(countdown=60)  # redeliver message to the queue, so the work can be done later



@celery.task(bind=True, name='app.myTask2')
def myTask2(self):
    print('you are in task2')
    self.update_state(state='STARTING')
    time.sleep(120)
    print('task2 done')


@app.route('/', methods=['GET', 'POST'])
def index():

    return render_template('index.html')

@app.route('/task1', methods=['GET', 'POST'])
def task1():

    print('running task1')
    result = myTask1.delay()

    # get async task id
    taskResult = AsyncResult(result.task_id)


    # push async taskid into db collection job_task_id
    mongo.db.job_task_id.insert({'taskid': str(taskResult), 'TaskName': 'task1'})

    return render_template('task1.html')


@app.route('/task2', methods=['GET', 'POST'])
def task2():

    print('running task2')
    result = myTask2.delay()

    # get async task id
    taskResult = AsyncResult(result.task_id)

    # push async taskid into db collection job_task_id
    mongo.db.job_task_id.insert({'taskid': str(taskResult), 'TaskName': 'task2'})

    return render_template('task2.html') 


@app.route('/status', methods=['GET', 'POST'])
def status():

    taskid_list = []
    task_state_list = []
    TaskName_list = []

    allAsyncData = mongo.db.job_task_id.find()

    for doc in allAsyncData:
        try:
            taskid_list.append(doc['taskid'])
        except:
            print('error with db conneciton in asyncJobStatus')

        TaskName_list.append(doc['TaskName'])

    # PASS TASK ID TO ASYNC RESULT TO GET TASK RESULT FOR THAT SPECIFIC TASK
    for item in taskid_list:
        try:
            task_state_list.append(myTask1.AsyncResult(item).state)
        except:
            task_state_list.append('UNKNOWN')

    return render_template('status.html', data_list=zip(task_state_list, TaskName_list))



from Flask Celery task locking

How Wifi and Mobile Data both work simultaneously in android for OBD2 device

I'm developing an application which connects to an OBD2 device over Wi-Fi, and the app can read speed, RPM, engine coolant temperature, etc. on Android. The Wi-Fi is used only for connecting to and communicating with the OBD2 device (the device has no facility to reach the Internet). Now I need an Internet connection for web services, but after connecting to the device's Wi-Fi I am not able to reach the Internet via my mobile data network on Android.

A similar application has also been developed for iOS. In iOS, I can use the application over Wi-Fi (with a static Wi-Fi configuration) and get an Internet connection from my cellular network at the same time. In other words, by configuring the Wi-Fi with a static IP, I am able to use the mobile data network for the Internet connection in iOS.

But on Android, if I use a static Wi-Fi configuration and check for an Internet connection, it is not available.

How can I have the Wi-Fi connection and the mobile-data Internet connection running in parallel, or is there another way to achieve this by configuring Wi-Fi settings in Android? Any help would be appreciated.
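For reference, this is the Android API I have been looking at (only a sketch, API 21+, written as if inside an Activity; I have not got it working yet):

    final ConnectivityManager cm =
            (ConnectivityManager) getSystemService(Context.CONNECTIVITY_SERVICE);

    NetworkRequest request = new NetworkRequest.Builder()
            .addTransportType(NetworkCapabilities.TRANSPORT_CELLULAR)
            .addCapability(NetworkCapabilities.NET_CAPABILITY_INTERNET)
            .build();

    cm.requestNetwork(request, new ConnectivityManager.NetworkCallback() {
        @Override
        public void onAvailable(Network network) {
            // Route this app's traffic over mobile data while the OBD2 Wi-Fi stays connected.
            // bindProcessToNetwork is API 23+; ConnectivityManager.setProcessDefaultNetwork
            // is the API 21/22 equivalent.
            cm.bindProcessToNetwork(network);
        }
    });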



from How Wifi and Mobile Data both work simultaneously in android for OBD2 device

SignalR Core - Error: Websocket closed with status code: 1006

I use SignalR in an Angular app. When I destroy a component in Angular I also want to stop the connection to the hub. I use the command:

this.hubConnection.stop();

But I get an error in Chrome console: Websocket closed with status code: 1006

In Edge: ERROR Error: Uncaught (in promise): Error: Invocation canceled due to connection being closed. Error: Invocation canceled due to connection being closed.

It actually works and the connection is stopped, but I would like to know why I get the error.

This is how I start the hub:

this.hubConnection = new HubConnectionBuilder()
      .withUrl("/matchHub")
      .build();

    this.hubConnection.on("MatchUpdate", (match: Match) => {
      // some magic
    })

    this.hubConnection
      .start()
      .then(() => {
        this.hubConnection.invoke("SendUpdates");
      });
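For now I simply swallow the rejection when tearing the connection down (a sketch; the error text suggests a still-pending invocation is being cancelled by stop(), so the same catch probably belongs on the invoke chain too):

    ngOnDestroy() {
      this.hubConnection
        .stop()
        .catch(err => console.warn("Error while stopping SignalR connection:", err));
    }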



from SignalR Core - Error: Websocket closed with status code: 1006

Jointly training custom model with Tensorflow Object Detection API

I am trying to use TensorFlow Object Detection API models in another custom model I built (in the same codebase). Specifically, I am trying to figure out how the cases below can be handled (mutually exclusive points):

  • jointly train Tensorflow object detection model Y with another Custom model X.
  • or just train custom model X while obtaining object detection Y predictions and incorporating it into X.
  • or object detection Y input is (a tensor) from another custom model X's intermediate layer (not a tfrecord or RGB image).

I have gone through the official tf object detection API docs and scoured the net trying to find good examples where tf object detection API was customized for cases beyond just object detection. I haven't found any. Any help or links would be appreciated.

p.s.: some relevant points

  1. FYI, I can run/train Tensorflow OD API independently
  2. Stackoverflow thrives on "show-me-what-you-did" culture, but as this question is preliminary and something that I haven't found an answer to in their documentation or on the web, hence shaking the community to find if someone has some thoughts on this.
  3. I had posted a similar question on datascience a few days back but no response.
  4. TF object detection API github new issues encourages posting to stackoverflow for help and support.


from Jointly training custom model with Tensorflow Object Detection API

Saturday 29 December 2018

Symfony Apache configuration when app_dev is in a subdirectory

Hey guys, I am running a Symfony 3 app on my local Windows machine. It's served with a XAMPP server. The app runs fine when I visit my local URL. The problem is that I can't seem to be able to run it in the dev environment.

The Symfony app folder structure is different from what I am used to. The app_dev.php is located in the intranet folder.

[screenshot of the project folder structure]

This is the intranet folder:

[screenshot of the intranet folder contents]

My current apache config file looks like this:

<VirtualHost *:443>
    ServerName quebecenreseau.ca
    ServerAlias www.quebecenreseau.ca

    SSLEngine on
    SSLCertificateFile "crt/quebecenreseau.ca/server.crt"
    SSLCertificateKeyFile "crt/quebecenreseau.ca/server.key"

    DocumentRoot C:\xampp\htdocs\quebecenreseau
    <Directory C:\xampp\htdocs\quebecenreseau>
        AllowOverride All
        Order Allow,Deny
        Allow from All
    </Directory>

    # uncomment the following lines if you install assets as symlinks
    # or run into problems when compiling LESS/Sass/CoffeeScript assets
    # <Directory /var/www/project>
    #     Options FollowSymlinks
    # </Directory>

    ErrorLog C:\xampp\apache\logs\project_error.log
    CustomLog C:\xampp\apache\logs\project_access.log combined
</VirtualHost>

I am unsure how to configure the VirtualHost block in order to get the dev environment working for both the root directory and the intranet directory that serves as the admin of the website.

If I change the DocumentRoot line like so:

DocumentRoot C:\xampp\htdocs\quebecenreseau\intranet\app_dev.php

My app's CSS files are not loaded and my intranet is served at the root level. How can I configure my local environment to load app_dev.php? Also, my prod environment is on a CentOS server, so I really don't want to mess with the prod setup.
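One direction I am considering (a sketch only; it assumes the intranet folder is the Symfony web directory and that its .htaccess rewrites requests to app.php / app_dev.php):

    DocumentRoot "C:/xampp/htdocs/quebecenreseau"

    # keep the existing site at the root, expose the Symfony front controllers under /intranet
    Alias /intranet "C:/xampp/htdocs/quebecenreseau/intranet"
    <Directory "C:/xampp/htdocs/quebecenreseau/intranet">
        AllowOverride All
        Require all granted
        DirectoryIndex app_dev.php
    </Directory>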

In case you wonder what's inside the index.php file at the root level, here it is:

<?php

/**
 * Start session
 */

session_start();

/**
 * Load configuration and router
 */

require 'classes/Router.php';
require 'classes/Database.php';
require 'classes/Main.php';

/**
 * Load helpers
 */

require 'helpers/phpmailer/class.phpmailer.php';
require 'helpers/GUMP.php';
require 'helpers/MailChimp.php';


/**
 * Load models
 */

require 'models/Article.php';
require 'models/Formation.php';
require 'models/SAE.php';


/**
 * Load routes
 */

require 'router.php';


/**
 * Match requests
 */

$match = $router->match();

if( $match && is_callable( $match['target'] ) ) {
    // call function related to matched route
    call_user_func_array( $match['target'], $match['params'] );
} else {
    // no route was matched
    header( $_SERVER["SERVER_PROTOCOL"] . ' 404 Not Found');
}



from Symfony Apache configuration when app_dev is in a subdirectory

How to populate data on click on jstree last node via ajax call

I want to build a jstree like the one shown below:

[screenshot of the desired tree]

I want to fetch the last-level node data (File 1 and File 2) with an AJAX call.

Note: I'm hardcoding the last-level node data below to simulate the AJAX call.

Jsfiddle: https://jsfiddle.net/vym16okw/11/

 var s = [];
            s.push(
                { "id" : "ajson5", "parent" : "ajson2", "text" : "File 1", "date":"12", },
                { "id" : "ajson6", "parent" : "ajson2", "text" : "File 2", "date":"12" }
            );

Here is what I have tried:

$('#using_json_2').jstree({ 'core' : {
    'data' : [
       { "id" : "ajson1", "parent" : "#", "text" : "Simple root node", "date":"2018"},
       { "id" : "ajson2", "parent" : "#", "text" : "Root node 2", "date":"2018"},
       { "id" : "ajson3", "parent" : "ajson2", "text" : "Child 1", "date":"12" },
       { "id" : "ajson4", "parent" : "ajson2", "text" : "Child 2", "date":"12" },
    ]
} });

   $('#using_json_2').on("select_node.jstree", function (e, data){
      console.log("node_id: " , data,'original',data.node.original);  
      var id = data.node.original.id;
      var date = data.node.original.date;
      $.ajax({
          url:'https://jsonplaceholder.typicode.com/users/'+id+'?date='+date,
          type:'GET',
          success:function(data){
            var s = [];
            s.push(
                { "id" : "ajson5", "parent" : "ajson2", "text" : "File 1", "date":"12","children": true },
                { "id" : "ajson6", "parent" : "ajson2", "text" : "File 2", "date":"12","children": true }
            );
          }
      });
        });
<link href="https://cdnjs.cloudflare.com/ajax/libs/jstree/3.2.1/themes/default/style.min.css" rel="stylesheet"/>

<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>

<script src="https://cdnjs.cloudflare.com/ajax/libs/jstree/3.2.1/jstree.min.js"></script>


<div id="using_json_2"></div>

Please help me, thanks in advance!
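In case it helps clarify what I am after, this is the lazy-loading shape I think I need (a sketch based on the function form of core.data from the jstree docs; the URL and the response are still hardcoded to simulate the server):

    $('#using_json_2').jstree({
        'core': {
            'data': function (node, cb) {
                // node.id is '#' on the initial load, otherwise the id of the node being opened
                $.ajax({
                    url: 'https://jsonplaceholder.typicode.com/users/' + node.id,
                    type: 'GET'
                }).always(function () {
                    // hardcoded here to simulate the real AJAX response
                    cb.call(this, [
                        { "id": "ajson5", "text": "File 1", "children": false },
                        { "id": "ajson6", "text": "File 2", "children": false }
                    ]);
                });
            }
        }
    });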



from How to populate data on click on jstree last node via ajax call


Written unit test for testing rxjava, but not sure if my unit test is testing everything

Android Studio 3.4

I have the following method that I am testing. Basically, it makes a request that returns a LoginResponseEntity, which is then mapped, returning a Single<LoginResponse>.

 override fun loginUserPost(username: String, password: String, uniqueIdentifier: String, deviceToken: String, apiToken: String) : Single<LoginResponse> {
            val loginRequestEntity = LoginRequestEntity(username, password, uniqueIdentifier, deviceToken)
            return loginAPIService.loginUserPost(loginRequestEntity, apiToken)
                .map {
                    loginResponseDomainMapper.map(it)
                }
    }

The test case I have written works, but I don't think it fully tests this method.

     @Test
     fun `should return LoginResponse`() {
        val loginRequestEntity = LoginRequestEntity("username", "password", "uniqueidentifier", "devicetoken")
        val loginResponse = LoginResponse("token", createUser(), emptyList(), emptyList())
        val loginResponseEntity = LoginResponseEntity("token", createUserEntity(), emptyList(), emptyList())

        whenever(loginAPIService.loginUserPost(loginRequestEntity, "apitoken")).thenReturn(Single.just(loginResponseEntity))

        loginServiceImp.loginUserPost("username", "password", "uniqueidentifier", "devicetoken", "apitoken")
            .test()
            .assertValue(loginResponse)

        verify(loginAPIService).loginUserPost(loginRequestEntity, "apitoken")
    }

        private fun createUser() =
            User(
                "id",
                "email",
                "firstname",
                "lastname",
                "phone",
                "address",
                "dob",
                "customer",
                listOf("enterpriseids"),
                listOf("vendorids"))

        private fun createUserEntity() =
            UserEntity(
                "id",
                "email",
                "firstname",
                "lastname",
                "phone",
                "address",
                "dob",
                "customer",
                listOf("enterpriseids"),
                listOf("vendorids"))
    }

Is there anything more I can do to test this method? Should I also be testing the .map { loginResponseDomainMapper.map(it) } part of this method?



from Written unit test for testing rxjava, but not sure if my unit test is testing everything

How do i add Hash to images and other assets in ionic

I am trying to fingerprint assets for cache busting in Ionic. So far I am able to fingerprint the generated .js artifacts, but I am unable to fingerprint images and JSON files under assets. I took help from Bundled files and cache busting.

This is my webpack.config.js:

/*
 * The webpack config exports an object that has a valid webpack configuration
 * For each environment name. By default, there are two Ionic environments:
 * "dev" and "prod". As such, the webpack.config.js exports a dictionary object
 * with "keys" for "dev" and "prod", where the value is a valid webpack configuration
 * For details on configuring webpack, see their documentation here
 * https://webpack.js.org/configuration/
 */

var path = require('path');
var webpack = require('webpack');
var ionicWebpackFactory = require(process.env.IONIC_WEBPACK_FACTORY);

var ModuleConcatPlugin = require('webpack/lib/optimize/ModuleConcatenationPlugin');
var PurifyPlugin = require('@angular-devkit/build-optimizer').PurifyPlugin;

var optimizedProdLoaders = [
  {
    test: /\.json$/,
    loader: 'json-loader'
  },
  {
    test: /\.js$/,
    loader: [
      {
        loader: process.env.IONIC_CACHE_LOADER
      },

      {
        loader: '@angular-devkit/build-optimizer/webpack-loader',
        options: {
          sourceMap: false
        }
      },
    ]
  },
  {
    test: /\.ts$/,
    loader: [
      {
        loader: process.env.IONIC_CACHE_LOADER
      },

      {
        loader: '@angular-devkit/build-optimizer/webpack-loader',
        options: {
          sourceMap: false
        }
      },

      {
        loader: process.env.IONIC_WEBPACK_LOADER
      }
    ]
  }
];

function getProdLoaders() {
  if (process.env.IONIC_OPTIMIZE_JS === 'true') {
    return optimizedProdLoaders;
  }
  return devConfig.module.loaders;
}

var devConfig = {
  entry: process.env.IONIC_APP_ENTRY_POINT,
  output: {
    path: '',
    publicPath: 'build/',
    filename: '[name].[chunkhash:4].js',
    devtoolModuleFilenameTemplate: ionicWebpackFactory.getSourceMapperFunction(),
  },

  devtool: process.env.IONIC_SOURCE_MAP_TYPE,

  resolve: {
    extensions: ['.ts', '.js', '.json'],
    modules: [path.resolve('node_modules')]
  },

  module: {
    loaders: [
      {
        test: /\.json$/,
        loader: 'json-loader'
      },
      {
        test: /\.ts$/,
        loader: process.env.IONIC_WEBPACK_LOADER
      },
      {
        test: /\.(jpg|jpeg|gif|png)$/,
        exclude: /node_modules/,
        loader:'url-loader?limit=1024&name=images/[name].[ext]'
    },
    {
        test: /\.(woff|woff2|eot|ttf|svg)$/,
        exclude: /node_modules/,
        loader: 'url-loader?limit=1024&name=fonts/[name].[ext]'
    }
    ]
  },

  plugins: [
    ionicWebpackFactory.getIonicEnvironmentPlugin(),
    ionicWebpackFactory.getCommonChunksPlugin()
  ],

  // Some libraries import Node modules but don't use them in the browser.
  // Tell Webpack to provide empty mocks for them so importing them works.
  node: {
    fs: 'empty',
    net: 'empty',
    tls: 'empty'
  }
};

var prodConfig = {
  entry: process.env.IONIC_APP_ENTRY_POINT,
  output: {
    path: '',
    publicPath: 'build/',
    filename: '[name].js',

    devtoolModuleFilenameTemplate: ionicWebpackFactory.getSourceMapperFunction(),
  },
  devtool: process.env.IONIC_SOURCE_MAP_TYPE,

  resolve: {
    extensions: ['.ts', '.js', '.json'],
    modules: [path.resolve('node_modules')]
  },

  module: {
    loaders: getProdLoaders()
  },

  plugins: [
    ionicWebpackFactory.getIonicEnvironmentPlugin(),
    ionicWebpackFactory.getCommonChunksPlugin(),
    new ModuleConcatPlugin(),
    new PurifyPlugin()
  ],

  // Some libraries import Node modules but don't use them in the browser.
  // Tell Webpack to provide empty mocks for them so importing them works.
  node: {
    fs: 'empty',
    net: 'empty',
    tls: 'empty'
  }
};


module.exports = {
  dev: devConfig,
  prod: prodConfig
}

Expected output: image file names containing a hash, for cache busting.
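For what it is worth, a minimal sketch of the loader change that could produce hashed file names, assuming the images and fonts actually pass through webpack (that is, they are imported/required from code): url-loader and file-loader support a [hash] placeholder in the name template.

{
    test: /\.(jpg|jpeg|gif|png)$/,
    exclude: /node_modules/,
    loader: 'url-loader?limit=1024&name=images/[name].[hash:8].[ext]'
},
{
    test: /\.(woff|woff2|eot|ttf|svg)$/,
    exclude: /node_modules/,
    loader: 'url-loader?limit=1024&name=fonts/[name].[hash:8].[ext]'
}

As far as I know, files that only sit under src/assets and are referenced by URL are copied verbatim by the Ionic copy task rather than bundled by webpack, so they would still need a separate fingerprinting step.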



from How do i add Hash to images and other assets in ionic

.css file, ::first-line not possible. how to achieve this? Ubuntu 18.04

Ubuntu 18.04

I am customizing the panel; this is the content of the .css file.
I have added a ::first-line part to customize the first line as shown in the image below, but it is not applied after reboot.

Content of .css file:

#panel .clock-display {
    color: blue;
    margin-left: 40px;
    margin-right: 40px;
}

#panel .clock-display::first-line {
    color: green;
}

Content of .js file:

var DateMenuButton = new Lang.Class({
    Name: 'DateMenuButton',
    Extends: PanelMenu.Button,

    _init() {
        let item;
        let hbox;
        let vbox;

        let menuAlignment = 0.5;
        if (Clutter.get_default_text_direction() == Clutter.TextDirection.RTL)
            menuAlignment = 1.0 - menuAlignment;
        this.parent(menuAlignment);

        this._clockDisplay = new St.Label({ y_align: Clutter.ActorAlign.CENTER });
        this._indicator = new MessagesIndicator();

        let box = new St.BoxLayout();
        box.add_actor(new IndicatorPad(this._indicator.actor));
        box.add_actor(this._clockDisplay);
        box.add_actor(this._indicator.actor);

        this.actor.label_actor = this._clockDisplay;
        this.actor.add_actor(box);
        this.actor.add_style_class_name ('clock-display');

In this last line, this.actor.add_style_class_name('clock-display'), I guess I have to specify its pseudo-class or something, but I don't have any idea.

In the image below, if you look at the day with the timestamp, that is the default behavior when Ubuntu is freshly installed.

By using the Clock Override extension, it is possible to set our own text, like in this image.

Here is a clue: the Clock Override extension has a special feature to create a new line by adding %n in its settings: https://developer.gnome.org/glib/stable/glib-GDateTime.html#g-date-time-format

Clock Override Extension Details: https://extensions.gnome.org/extension/1206/clock-override/

Question: I am looking to configure both lines independently in the .css file to choose the colors, heights, weights, shadows, borders, etc.

Is it achievable?
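Not an authoritative answer, but one possible direction, assuming the shell's St CSS engine simply does not support ::first-line: render the clock as two separate St.Label actors stacked in a vertical St.BoxLayout, each with its own style class, so each line can be styled independently in the .css file. The class names clock-line-1 and clock-line-2 below are made up for illustration.

// Sketch only: two labels, one per line of the clock text, each with its own class.
this._clockLine1 = new St.Label({ style_class: 'clock-line-1',
                                  y_align: Clutter.ActorAlign.CENTER });
this._clockLine2 = new St.Label({ style_class: 'clock-line-2',
                                  y_align: Clutter.ActorAlign.CENTER });

let clockBox = new St.BoxLayout({ vertical: true });
clockBox.add_actor(this._clockLine1);
clockBox.add_actor(this._clockLine2);
box.add_actor(clockBox);

// Then #panel .clock-line-1 and #panel .clock-line-2 can each be given their own
// color, font-weight, text-shadow, borders, etc. in the .css file.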



from .css file, ::first-line not possible. how to achieve this? Ubuntu 18.04

Is GET_ACCOUNTS permission still required for Google Drive REST API?

Google has deprecated Google Drive Android API.

We are migrating over to Google Drive REST API (v3).

Two years ago, we had experience using Google Drive REST API (v2). We know that the GET_ACCOUNTS permission is required for GoogleAuthUtil.getToken() to work correctly - Google Drive API - the name must not be empty: null (But I had passed valid account name to GoogleAccountCredential)

When we look at the example of Google Drive REST API (v3) - https://github.com/gsuitedevs/android-samples/blob/master/drive/deprecation/app/src/main/AndroidManifest.xml#L5 - we notice that the Google team explicitly mentions

<!-- Permissions required by GoogleAuthUtil -->
<uses-permission android:name="android.permission.GET_ACCOUNTS" />
<uses-permission android:name="android.permission.MANAGE_ACCOUNTS" />

Surprisingly, when we run the example app (https://github.com/gsuitedevs/android-samples/tree/master/drive/deprecation), no run-time permission dialog pops up on Android 6 and 8. Yet, the app works without issue.

We expected the app to stop working, as no GET_ACCOUNTS permission was granted to the app. However, it can still authenticate and communicate with the Google Drive service without issue.

This is what I have tested so far

I have tested in Android 5, Android 6 and Android 8. No runtime GET_ACCOUNTS permission is granted for Android 6 and Android 8.

I also tested further by removing GET_ACCOUNTS and MANAGE_ACCOUNTS from the manifest completely. Still, Android 5, Android 6, and Android 8 all remain workable. Before running, I cleared the cache and storage of the app.


So, is GET_ACCOUNTS runtime permission request still required for Google Drive REST API to work?



from Is GET_ACCOUNTS permission still required for Google Drive REST API?

VueJS: Google Maps loads before data is ready - how to make it wait? (Nuxt)

This is my first VueJS project and I've got vue2-google-maps up and running, but I've come across an issue: when I attempt to connect the map markers to my site's JSON feed (using the WordPress REST API), the Lat and Lng values come back undefined or NaN.

On further investigation (thanks to @QuỳnhNguyễn below) it seems like the Google Maps instance is being run before the data is ready. I have tried watching for the feed to be loaded before initialising the map, but it doesn't seem to work.

The marker locations are pulled in from the WordPress REST API using JSON and exist in an array (locations). The array is present and populated in Vue Dev Tools (51 records), but when checking on mounted, the array is empty. The data is pulled in at the created stage, so I don't know why it wouldn't be ready by the mounted stage.

The code in question is as below...

Template:

<template>
    <gmap-map ref="map" :center="center" :zoom="zoom" :map-type-id="mapTypeId" :options="options">
        <gmap-marker 
            :key="index" v-for="(m, index) in locations" 
            :position="{ lat: parseFloat(m.place_latitude), lng: parseFloat(m.place_longitude) }" 
            @click="toggleInfoWindow(m,index)" 
            :icon="mapIconDestination">
        </gmap-marker>
        <gmap-info-window></gmap-info-window>
    </gmap-map>
</template>

Script

<script>
    const axios = require('axios');
    const feedURL = "API_REF";

    export default {
        props: {
            centerRef: {
                type: Object,
                default: function() {
                    return { lat: -20.646378400026226, lng: 116.80669825605469 }
                }
            },
            zoomVal: {
               type: Number,
               default: function() {
                   return 11
               }
            }
        },
        data: function() {
            return {
                feedLoaded: false,
                zoom: this.zoomVal,
                center: this.centerRef,
                options: {
                    mapTypeControl: false,
                    streetViewControl: false,
                },
                mapTypeId: 'styledMapType',
                mapIconDestination: '/images/map-pin_destination.png',
                mapIconActivity: '/images/map-pin_activity.png',
                mapIconAccommodation: '/images/map-pin_accommodation.png',
                mapIconEvent: '/images/map-pin_event.png',
                mapIconBusiness: '/images/map-pin_business.png',
                locations: [],
                markers: []
            }
        },
        created: function() {
            this.getData();
        },
        mounted: function() {
            this.$nextTick(() => {
                this.$refs.karrathaMap.$mapPromise.then((map) => {
                    var styledMapType = new google.maps.StyledMapType(
                        [...MAP_STYLE SETTINGS...]
                    )
                    map.mapTypes.set('styled_map', styledMapType);
                    map.setMapTypeId('styled_map');

                })

            });
        },
        watch: {
            feedLoaded: function() {
                if (this.feedLoaded == true) {
                    console.log(JSON.stringify(this.locations))
                }
            }
        },
        methods: {
            getData() {
                const url = feedURL;
                axios
                    .get(url)
                    .then((response) => {this.locations = response.data;})
                    .then(this.feedLoaded = true)
                    .catch( error => { console.log(error); }
                );
            }
        }
    }
</script>
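One detail worth noting (a sketch, not a verified fix): .then(this.feedLoaded = true) evaluates the assignment immediately when getData() is called, instead of passing a callback that runs after the response arrives. Setting the flag inside the response handler, and only rendering the map once the flag is true (for example v-if="feedLoaded" on the <gmap-map> tag), might defer the markers until the data actually exists:

methods: {
    getData() {
        axios
            .get(feedURL)
            .then((response) => {
                this.locations = response.data;
                this.feedLoaded = true; // flip the flag only after the data has arrived
            })
            .catch((error) => { console.log(error); });
    }
}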



from VueJS: Google Maps loads before data is ready - how to make it wait? (Nuxt)

Use of non-amd jquery plugins with requirejs without modifying them?

I recently searched a lot on the use of non-AMD jQuery code with RequireJS but couldn't find a proper way to do it.

To be more specific, I want to use pana-accordion.js, found at the URL mentioned below.

https://www.jqueryscript.net/accordion/Horizontal-Accordion-Slider-Plugin-with-jQuery-Pana-Accordion.html

But the problem is that it is not AMD-aware, nor does it export anything. I am currently doing this for the Magento 2 CMS. So far I have created my custom.phtml and called it on the homepage through the admin area. Below is my custom.phtml:

<div class="pana-accordion" id="accordion">
  <div class="pana-accordion-wrap">
    <div class="pana-accordion-item" style="background-color: #F44336"><img width="500" height="300" src="https://unsplash.it/500/300?image=57" /></div>
    <div class="pana-accordion-item" style="background-color: #2196F3"><img width="500" height="300" src="https://unsplash.it/500/300?image=49" /></div>
    <div class="pana-accordion-item" style="background-color: #4CAF50"><img width="500" height="300" src="https://unsplash.it/500/300?image=39" /></div>
    <div class="pana-accordion-item" style="background-color: #FF9800"><img width="500" height="300" src="https://unsplash.it/500/300?image=29" /></div>
  </div>
</div>


<script type="text/javascript">
    require(['jquery','panaaccordion'],function($, accordion) {
                accordion.init({
                    id: 'accordion',
                });
    })
</script>

And here is the configuration for the pana-accordion.js JavaScript module in requirejs-config.js:

var config = {
    'map': {
        '*': {
            'panaaccordion': 'js/pana-accordion'
        }
    },
    'shim': {
        'panaaccordion': {
            deps: ['jquery'],
            exports: 'accordion'
        }
    }
}

Below are some code lines from the pana-accordion plugin:

var accordion= {
    init: function(options){
        var that=this;
        options = $.extend(true,{
            expandWidth: 500,
            itemWidth: 100,
            extpand: 0,
            autoPlay: true,
            delay: 3000,
            animateTime: 400,
            borderWidth: 1,
            autoPlay: true,
            deviator: 30,
            bounce:"-50px"
        },options);
    .....

As you can see, it doesn't wrap the code inside define(), nor does it export or return anything. Rather, the accordion object is declared globally.

So far I have the following questions (those marked as bold - sorry for the bad formatting, but I am trying to improve it).

If I wrap the code inside define like below,

  define(['jquery'],function($){
            //pana-accordion plugin code
            });

Still, there is an error in the console that says Uncaught TypeError: Cannot read property 'init' of undefined, even though I created an exports entry in the shim configuration. But the error resolves when I finally write a return statement after the accordion object.

return accordion;

What is the purpose of using shim if, for example, we have to manually write a return statement for the object from the plugin?

Second, do I have to write the whole path in the shim configuration? If I map the alias panaaccordion to the file located at 'js/pana-accordion', I still have to use 'js/pana-accordion' in the shim configuration, otherwise there are some loading order issues.

Third: can I use such non-AMD plugins with RequireJS without modifying a single line in them? If yes, how?
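For context, here is a minimal sketch of a shim-based configuration, assuming the plugin file is left completely untouched and keeps declaring a global accordion object. As far as I understand, shim only applies to scripts that do not call define() themselves; once the file is wrapped in define(), the shim (including its exports) is ignored, which would explain the behaviour described above. Also, the shim key has to match the module ID that is actually loaded, which is why paths (rather than map) is typically used to alias the file:

var config = {
    paths: {
        // paths keeps the module ID as 'panaaccordion', so the shim key below matches it
        'panaaccordion': 'js/pana-accordion'
    },
    shim: {
        'panaaccordion': {
            deps: ['jquery'],
            exports: 'accordion' // name of the global the unmodified plugin creates
        }
    }
};

// Usage stays the same as in custom.phtml:
require(['jquery', 'panaaccordion'], function ($, accordion) {
    accordion.init({ id: 'accordion' });
});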



from Use of non-amd jquery plugins with requirejs without modifying them?

PyCharm PEP8 Code Style highlights not working

I've been having a PEP 8 style highlighting issue: PyCharm is not highlighting obvious style issues, like no blank lines before class definitions, or no empty line at the end of the file. It could have to do with my VM and Vagrant, but the project code is hosted locally, so I don't think that should be an issue.

If I do Code > Run Inspection By Name > PEP 8 coding style violation it says it finds no instances.

Under File > Settings > Editor > Code Style > Python > Blank Lines I have blank lines set around the class. An oddity is that if I change the number of lines "around method", it changes them in real time in the example text on the right, but it doesn't do the same for lines "around class".

Under File > Settings > Editor > Inspections > Python I have "PEP 8 coding style violation" selected. I've tried changing it from warning to error and I still can't see the highlights in my file.

I don't have power saver mode on, which I've learned is a way to deactivate the background style checking in the editor.

I searched in Help > Show Log in Files for PEP8 and found "Pep8ExternalAnnotator - Found no suitable interpreter", but I don't know what that means and I couldn't find any references to it online.

I'm running PyCharm Professional 2016.3:

PyCharm 2016.3.2
Build #PY-163.10154.50, built on December 28, 2016
Licensed to arbaerbearfaerfa
Subscription is active until October 17, 2017
For educational use only.
JRE: 1.8.0_112-release-408-b6 amd64
JVM: OpenJDK 64-Bit Server VM by JetBrains s.r.o



from PyCharm PEP8 Code Style highlights not working

How to implement freehand image cropping in android?

How can I implement freehand cropping on an ImageView?

Using the code below I'm able to draw a freehand path and crop the image, but I'm facing some other problems.

Now, here is what I have tried so far.

Here is my code

Code for cropping the image using a canvas:

public class SomeView extends View implements View.OnTouchListener {
    private Paint paint;

    int DIST = 2;
    boolean flgPathDraw = true;

    Point mfirstpoint = null;
    boolean bfirstpoint = false;

    Point mlastpoint = null;

    Bitmap bitmap;

    Context mContext;

    public SomeView(Context c, Bitmap bitmap) {
        super(c);

        mContext = c;
        this.bitmap = bitmap;

        setFocusable(true);
        setFocusableInTouchMode(true);

        paint = new Paint(Paint.ANTI_ALIAS_FLAG);
        paint.setStyle(Paint.Style.STROKE);
        paint.setPathEffect(new DashPathEffect(new float[]{10, 20}, 0));
        paint.setStrokeWidth(5);
        paint.setColor(Color.RED);
        paint.setStrokeJoin(Paint.Join.ROUND);
        paint.setStrokeCap(Paint.Cap.ROUND);

        this.setOnTouchListener(this);
        points = new ArrayList<Point>();

        bfirstpoint = false;
    }

    public SomeView(Context context, AttributeSet attrs) {
        super(context, attrs);

        mContext = context;
        setFocusable(true);
        setFocusableInTouchMode(true);

        paint = new Paint(Paint.ANTI_ALIAS_FLAG);
        paint.setStyle(Paint.Style.STROKE);
        paint.setStrokeWidth(5);
        paint.setColor(Color.RED);

        points = new ArrayList<Point>();
        bfirstpoint = false;

        this.setOnTouchListener(this);
    }

    public void onDraw(Canvas canvas) {

        /*Rect dest = new Rect(0, 0, getWidth(), getHeight());

        paint.setFilterBitmap(true);
        canvas.drawBitmap(bitmap, null, dest, paint);*/

        canvas.drawBitmap(bitmap, 0, 0, null);

        Path path = new Path();
        boolean first = true;

        for (int i = 0; i < points.size(); i += 2) {
            Point point = points.get(i);
            if (first) {
                first = false;
                path.moveTo(point.x, point.y);
            } else if (i < points.size() - 1) {
                Point next = points.get(i + 1);
                path.quadTo(point.x, point.y, next.x, next.y);
            } else {
                mlastpoint = points.get(i);
                path.lineTo(point.x, point.y);
            }
        }
        canvas.drawPath(path, paint);
    }

    public boolean onTouch(View view, MotionEvent event) {
        // if(event.getAction() != MotionEvent.ACTION_DOWN)
        // return super.onTouchEvent(event);

        Point point = new Point();
        point.x = (int) event.getX();
        point.y = (int) event.getY();

        if (flgPathDraw) {

            if (bfirstpoint) {

                if (comparepoint(mfirstpoint, point)) {
                    // points.add(point);
                    points.add(mfirstpoint);
                    flgPathDraw = false;
                    showcropdialog();
                } else {
                    points.add(point);
                }
            } else {
                points.add(point);
            }

            if (!(bfirstpoint)) {

                mfirstpoint = point;
                bfirstpoint = true;
            }
        }

        invalidate();
        Log.e("Hi  ==>", "Size: " + point.x + " " + point.y);

        if (event.getAction() == MotionEvent.ACTION_UP) {
            Log.d("Action up*****~~>>>>", "called");
            mlastpoint = point;
            if (flgPathDraw) {
                if (points.size() > 12) {
                    if (!comparepoint(mfirstpoint, mlastpoint)) {
                        flgPathDraw = false;
                        points.add(mfirstpoint);
                        showcropdialog();
                    }
                }
            }
        }

        return true;
    }

    private boolean comparepoint(Point first, Point current) {
        int left_range_x = (int) (current.x - 3);
        int left_range_y = (int) (current.y - 3);

        int right_range_x = (int) (current.x + 3);
        int right_range_y = (int) (current.y + 3);

        if ((left_range_x < first.x && first.x < right_range_x)
                && (left_range_y < first.y && first.y < right_range_y)) {
            if (points.size() < 10) {
                return false;
            } else {
                return true;
            }
        } else {
            return false;
        }

    }

    public void fillinPartofPath() {
        Point point = new Point();
        point.x = points.get(0).x;
        point.y = points.get(0).y;

        points.add(point);
        invalidate();
    }

    public void resetView() {
        points.clear();
        paint.setColor(Color.WHITE);
        paint.setStyle(Paint.Style.STROKE);

        paint = new Paint(Paint.ANTI_ALIAS_FLAG);
        paint.setStyle(Paint.Style.STROKE);
        paint.setStrokeWidth(5);
        paint.setColor(Color.RED);

        points = new ArrayList<Point>();
        bfirstpoint = false;

        flgPathDraw = true;
        invalidate();
    }

    private void showcropdialog() {
        DialogInterface.OnClickListener dialogClickListener = new DialogInterface.OnClickListener() {
            @Override
            public void onClick(DialogInterface dialog, int which) {
                Intent intent;
                switch (which) {
                    case DialogInterface.BUTTON_POSITIVE:
                        cropImage();
                        break;

                    case DialogInterface.BUTTON_NEGATIVE:
                        /*// No button clicked

                        intent = new Intent(mContext, DisplayCropActivity.class);
                        intent.putExtra("crop", false);
                        mContext.startActivity(intent);

                        bfirstpoint = false;*/
                        resetView();

                        break;
                }
            }
        };

        AlertDialog.Builder builder = new AlertDialog.Builder(mContext);
        builder.setMessage("Do you Want to save Crop or Non-crop image?")
                .setPositiveButton("Crop", dialogClickListener)
                .setNegativeButton("Non-crop", dialogClickListener).show()
                .setCancelable(false);
    }
}

Code for cropping the bitmap:

public void cropImage() {

    setContentView(R.layout.activity_picture_preview);

    imageView = findViewById(R.id.image);

    int widthOfscreen = 0;
    int heightOfScreen = 0;

    DisplayMetrics dm = new DisplayMetrics();
    try {
        getWindowManager().getDefaultDisplay().getMetrics(dm);
    } catch (Exception ex) {
    }
    widthOfscreen = dm.widthPixels;
    heightOfScreen = dm.heightPixels;

    Bitmap bitmap2 = mBitmap;

    Bitmap resultingImage = Bitmap.createBitmap(widthOfscreen,
            heightOfScreen, bitmap2.getConfig());

    Canvas canvas = new Canvas(resultingImage);

    Paint paint = new Paint();

    Path path = new Path();

    for (int i = 0; i < points.size(); i++) {

        path.lineTo(points.get(i).x, points.get(i).y);

    }

    canvas.drawPath(path, paint);

    paint.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.SRC_IN));

    canvas.drawBitmap(bitmap2, 0, 0, paint);

    imageView.setImageBitmap(resultingImage);

}

Here is the result I get using the above code.

Cropping the image using finger touch

This image shows the result after cropping the image.

This is my expected output.

Please check the screenshots below for the same.

This image shows cropping the image using finger touch.

This image shows the result after cropping the image.

Below are the problems I'm facing with the above code:

  • Unable to set the bitmap to full screen using the canvas
  • If I set the bitmap to full screen on the canvas, the image is stretched
  • How to set a transparent background for the cropped bitmap
  • Unable to add a border to the cropped image
  • The result of the image cropping is not as expected

Here are some other posts that I have tried so far.

None of the above posts helped me achieve my expected output.

If you need more information, please let me know. Thanks in advance; your efforts will be appreciated.



from How to implement freehand image cropping in android?