Friday, 1 July 2022

How to run custom openedx project in localhost

I have the edx-platform, ecommerce, ecommerce-themes, credentials and edx-theme directories. I have successfully installed Tutor and devstack, but I couldn't find a way to swap in these custom directories. So, what is the correct way to replace them?

After devstack ran successfully, I tried replacing the default directories with the custom ones, but when I ran make dev.provision I got this output:

+ docker-compose exec -T lms bash -e -c 'source /edx/app/edxapp/edxapp_env && cd /edx/app/edxapp/edx-platform && NO_PYTHON_UNINSTALL=1 paver install_prereqs'
/edx/app/edxapp/edxapp_env: line 13: manpath: command not found
---> pavelib.prereqs.install_prereqs
---> pavelib.prereqs.install_node_prereqs
npm install error detected. Retrying...


Captured Task Output:
---------------------

---> pavelib.prereqs.install_prereqs
---> pavelib.prereqs.install_node_prereqs
Traceback (most recent call last):
  File "/edx/app/edxapp/venvs/edxapp/lib/python3.8/site-packages/paver/tasks.py", line 201, in _run_task
    return do_task()
  File "/edx/app/edxapp/venvs/edxapp/lib/python3.8/site-packages/paver/tasks.py", line 198, in do_task
    return func(**kw)
  File "/edx/app/edxapp/edx-platform/pavelib/utils/timer.py", line 40, in timed
    return wrapped(*args, **kwargs)
  File "/edx/app/edxapp/edx-platform/pavelib/prereqs.py", line 332, in install_prereqs
    install_node_prereqs()
  File "/edx/app/edxapp/venvs/edxapp/lib/python3.8/site-packages/paver/tasks.py", line 333, in __call__
    retval = environment._run_task(self.name, self.needs, self.func)
  File "/edx/app/edxapp/venvs/edxapp/lib/python3.8/site-packages/paver/tasks.py", line 219, in _run_task
    return do_task()
  File "/edx/app/edxapp/venvs/edxapp/lib/python3.8/site-packages/paver/tasks.py", line 198, in do_task
    return func(**kw)
  File "/edx/app/edxapp/edx-platform/pavelib/utils/timer.py", line 40, in timed
    return wrapped(*args, **kwargs)
  File "/edx/app/edxapp/edx-platform/pavelib/prereqs.py", line 184, in install_node_prereqs
    prereq_cache("Node prereqs", ["package.json"], node_prereqs_installation)
  File "/edx/app/edxapp/edx-platform/pavelib/prereqs.py", line 111, in prereq_cache
    install_func()
  File "/edx/app/edxapp/edx-platform/pavelib/prereqs.py", line 154, in node_prereqs_installation
    raise Exception("npm install failed: See {}".format(npm_log_file_path))
Exception: npm install failed: See /edx/app/edxapp/edx-platform/test_root/log/npm-install.log

make[1]: *** [Makefile:217: impl-dev.provision] Error 1
make[1]: Leaving directory '/home/pablo/Documents/prueba/devstack'
Would you like to assist devstack development by sending anonymous usage metrics to edX? Run `make metrics-opt-in` to learn more!
make: *** [Makefile:221: dev.provision] Error 2

EDIT

The directories that I have after running make dev.provision and make dev.up with the default devstack project are the following:

[Image: directories of the default devstack Open edX project]

What I tried was replacing the default directories with the custom ones (open-edx-platform, ecommerce, ..., etc.).



from How to run custom openedx project in localhost

How to export trained stable-baselines/TensorFlow neural network to MATLAB?

I'm trying to export a PPO2-trained neural network to MATLAB. It was saved as a zip file using

model.save(os.path.join(save_dir, 'best_overall_model'))

I can load my model with

model = PPO2.load(os.path.join(load_dir), env=env, tensorboard_log=save_dir)

Because I could not find a direct way to export to MATLAB, I thought of using the Open Neural Network Exchange (ONNX) as an intermediate format. I could not find info on how to do this conversion from Stable Baselines, so I re-saved my model with TensorFlow using simple_save. Note: I'm using TensorFlow 1.14.

tf.saved_model.simple_save(model.sess, os.path.join(save_dir, 'tensorflow_model'), inputs={"obs": model.act_model.obs_ph}, outputs={"action": model.action_ph})

Finally, I use the following command to obtain the ONNX file:

python -m tf2onnx.convert --saved-model tensorflow_model --output model.onnx

I used netron to visualise the resulting ONNX file. Clearly, something went wrong:
[Image: the resulting ONNX graph visualised in netron]

Alternative suggestions to get my neural network into MATLAB are also appreciated.



from How to export trained stable-baselines/TensorFlow neural network to MATLAB?

Abbreviation similarity between strings

I have a use case in my project where I need to compare a key string with many other strings for similarity. If the similarity value is greater than a certain threshold, I consider those strings "similar" to my key, and based on that list I do some further calculations/processing.
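This compare-one-key-against-many workflow can be sketched with Python's standard-library difflib; the function name and the 0.8 threshold below are just illustrative placeholders:

```python
from difflib import SequenceMatcher

def similar_strings(key, candidates, threshold=0.8):
    """Return (candidate, score) pairs whose similarity to key meets
    the threshold, using difflib's edit-distance-style ratio in [0, 1]."""
    results = []
    for cand in candidates:
        score = SequenceMatcher(None, key, cand).ratio()
        if score >= threshold:
            results.append((cand, score))
    return results
```

(fuzzywuzzy's fuzz.ratio is roughly this same ratio scaled to 0-100, so either library can drive the threshold filter.)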

I have been exploring fuzzy string-matching libraries, which use edit-distance-based algorithms such as Levenshtein, Jaro, and Jaro-Winkler similarity.

Although they work fine, I want a higher similarity score when one string is an "abbreviation" of the other. Is there any algorithm or implementation I can use for this?

Note:

language: python3 
packages explored: fuzzywuzzy, jaro-winkler

Example:

using jaro_winkler similarity:

>>> jaro.jaro_winkler_metric("wtw", "willis tower watson")
0.7473684210526316
>>> jaro.jaro_winkler_metric("wtw", "willistowerwatson")
0.7529411764705883

using levenshtein similarity:

>>> fuzz.ratio("wtw", "willis tower watson")
27
>>> fuzz.ratio("wtw", "willistowerwatson")
30
>>> fuzz.partial_ratio("wtw", "willistowerwatson")
67
>>> fuzz.QRatio("wtw", "willistowerwatson")
30

In these kinds of cases, I want the score to be higher (>90%) if possible. I'm OK with a few false positives as well, as they won't cause too much of an issue with my further calculations. But if we match s1 and s2 such that s1 is fully contained in s2 (or vice versa), their similarity score should be much higher.

Edit: Further Examples for my Use-Case

For me, spaces are redundant. That means wtw is considered an abbreviation of "willistowerwatson" and "willis tower watson" alike.

Also, stove is a valid abbreviation for "STack OVErflow" or "STandardOVErview".

A simple algorithm would be to start with the 1st character of the smaller string and see if it is present in the larger one, then check the 2nd character, and so on, until the 1st string is fully contained in the 2nd string. That is a 100% match for me.

Further examples like wtwx against "willistowerwatson" could give a score of, say, 80% (this could be based on some edit-distance logic). Even a package that simply returns True or False for abbreviation similarity would be helpful.
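The simple algorithm described above (spaces ignored, the shorter string's characters matched in order inside the longer one) can be sketched as follows; the function name and the matched-fraction scoring are my own illustrative choices:

```python
def abbreviation_score(abbr: str, full: str) -> float:
    """Fraction of abbr's characters that appear, in order, in full.
    Spaces are ignored and matching is case-insensitive; 1.0 means
    abbr is fully contained in full as a subsequence."""
    a = abbr.replace(" ", "").lower()
    f = full.replace(" ", "").lower()
    if not a:
        return 0.0
    chars = iter(f)  # 'ch in chars' consumes the iterator, enforcing order
    matched = sum(1 for ch in a if ch in chars)
    return matched / len(a)
```

With this sketch, "wtw" scores 1.0 against both "willistowerwatson" and "willis tower watson", "stove" scores 1.0 against "STack OVErflow", and "wtwx" scores 0.75 against "willistowerwatson" (close to the 80% suggested above).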



from Abbreviation similarity between strings