Thursday 31 August 2023

Unexpected rate limit on Twitter API V2 (POST Create a Tweet)

I am fairly new to the Twitter API and I've got the following issue.

I am trying to send a tweet with the 'Twitter v2 API / Manage tweets / POST Create a Tweet' endpoint. Unfortunately, every (manual) call to this endpoint results in a 429 error:

{
    "title": "Too Many Requests",
    "detail": "Too Many Requests",
    "type": "about:blank",
    "status": 429
}

The response headers show nothing strange (from my inexperienced point of view); see the screenshot below.

Headers of 429 response

Some general remarks:

  • I've got just 1 application in my Twitter developer account.
  • Within this application there is just one App.
  • I am the only one who generates the requests manually.
  • The execution is manual and all within the boundaries of the API (200 per 15 min)

I started with the out-of-the-box create_tweet.py solution (see link below). Due to a misunderstanding on my part, I made several authorisation requests with the same Twitter user (following the link, entering the PIN); maybe that triggered some type of blockage or rejection? I regenerated all my tokens and started all over via Postman (as mentioned in the Quickstart, see link below). The 429 error is still there, both with the Python script and via Postman.

Anyone got an idea?
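For what it's worth, the v2 API reports its per-endpoint budget in the x-rate-limit-* response headers, so a first diagnostic step is to decode those from the 429 response. A small helper along these lines (the header names are Twitter's; the function itself is illustrative):

```python
from datetime import datetime, timezone

def describe_rate_limit(headers):
    """Summarise Twitter's x-rate-limit-* response headers (case-insensitive)."""
    h = {k.lower(): v for k, v in headers.items()}
    if "x-rate-limit-limit" not in h:
        return "no rate-limit headers present"
    # x-rate-limit-reset is epoch seconds for when the current window resets
    reset_at = datetime.fromtimestamp(int(h["x-rate-limit-reset"]), tz=timezone.utc)
    return "%s/%s requests left; window resets at %s" % (
        h.get("x-rate-limit-remaining"),
        h["x-rate-limit-limit"],
        reset_at.isoformat(),
    )
```

If `remaining` is already 0 on a freshly created app, the cap being hit may not be the 200-per-15-min app limit at all; on the newer access tiers, POST /2/tweets also has much lower daily caps.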

Links:



from Unexpected rate limit on Twitter API V2 (POST Create a Tweet)

Why does JavaScript/jQuery code not work on Android devices?

In a Laravel 8.83 / jQuery 3.6.0 app, when the user clicks an action button and the AJAX response returns, the new value (groupedNewsUserActionsCount from the response) is set into the div with id = "#total_actions_" + newsId + "_" + action + "_label", and the class 'news-user-reactions-item-voted' is assigned to the block.

I do it with this code:

$.ajax({
    type: "POST",
    dataType: "json",
    url: "/news/react",
    data: {"_token": "", "newsId": newsId, "action": action},
    success: function (response) {

        $("#total_actions_" + newsId + "_" + action + "_label").html(response.groupedNewsUserActionsCount === 0 ? '' : response.groupedNewsUserActionsCount)

        // Make _cancel action button visible and hide add action button
        $("#a_" + newsId + "_" + action + "_add_action").css('display', 'none')

        $("#a_" + newsId + "_" + action + "_cancel_action").css('display', 'block')

        // User voted this action : show icon image in light blue color
        $("#div_" + newsId + "_" + action + "_add_action_block").attr('class', 'news-user-reactions-item-voted')
    },
    error: function (error) {
        console.log(error)
    }
});

That works fine in common browsers, but does not work properly on Android devices (the new value is not set and the class is not changed). Am I using some methods that are not valid on Android devices? Which methods should I use?

EDIT:

I tested with a Samsung Galaxy A50: when my page is opened in the Chrome browser, I found an option/checkbox "Version for comp". If I turn it on, the page is first reopened, and after that all the functionality (new value applied, class changed) WORKS properly! What is this option, and how does it relate to my issue?

EDIT #2:

I have reinstalled google-chrome-stable and now I have in my system:

dpkg -s google-chrome-stable
Package: google-chrome-stable
Status: install ok installed
Priority: optional
Section: web
Installed-Size: 320121
Maintainer: Chrome Linux Team <chromium-dev@chromium.org>
Architecture: amd64
Version: 116.0.5845.140-1
Provides: www-browser
Depends: ca-certificates, fonts-liberation, libasound2 (>= 1.0.17), libatk-bridge2.0-0 (>= 2.5.3), libatk1.0-0 (>= 2.2.0), libatspi2.0-0 (>= 2.9.90), libc6 (>= 2.17), libcairo2 (>= 1.6.0), libcups2 (>= 1.6.0), libcurl3-gnutls | libcurl3-nss | libcurl4 | libcurl3, libdbus-1-3 (>= 1.9.14), libdrm2 (>= 2.4.75), libexpat1 (>= 2.0.1), libgbm1 (>= 17.1.0~rc2), libglib2.0-0 (>= 2.39.4), libgtk-3-0 (>= 3.9.10) | libgtk-4-1, libnspr4 (>= 2:4.9-2~), libnss3 (>= 2:3.35), libpango-1.0-0 (>= 1.14.0), libu2f-udev, libvulkan1, libx11-6 (>= 2:1.4.99.1), libxcb1 (>= 1.9.2), libxcomposite1 (>= 1:0.4.4-1), libxdamage1 (>= 1:1.1), libxext6, libxfixes3, libxkbcommon0 (>= 0.5.0), libxrandr2, wget, xdg-utils (>= 1.0.2)
Pre-Depends: dpkg (>= 1.14.0)
Description: The web browser from Google
 Google Chrome is a browser that combines a minimal design with sophisticated technology to make the web faster, safer, and easier.

But anyway, in chrome://inspect/#devices I get the message:

⚠ Remote browser is newer than client browser. Try `inspect fallback` if inspection fails.

Sure, I have started a new Chrome session and restarted my OS. Is there some cache I need to clear or some service to restart?

What is the "inspect fallback" mentioned in the error message?

On my Samsung Galaxy A50 I see version 116.0.5845.114. The two version numbers are very close ("116.0.5845")... What is wrong?



from Why javascript/jquery code does not work on n android devices?

How to build bundled scripts from a monolithic Node TypeScript repository?

I have a monolithic Node TypeScript project. Within this project, there's a scripts folder containing multiple files. Each file is intended to act as a separate service/script and these files share common libraries, methods, and classes.

For example, I have a HttpRequest class that is used across multiple files and internally utilizes the axios library.

Here's my challenge: I want to maintain the monolithic structure during development to minimize code redundancy. However, when I compile the project to a "/dist" folder, I'd like each file in the scripts folder to become a standalone, bundled JavaScript file that can be executed independently. Essentially, each built file should contain everything it needs to run on its own.

I've tried using Webpack, but it seems either too complex or not suited for this specific requirement. Is there a simpler or better way to achieve this?
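One lighter-weight option often suggested for exactly this layout is esbuild, which accepts multiple entry points and bundles each into a self-contained output file. A sketch of the invocation (adjust paths and flags to your setup):

```shell
# Bundle every file in scripts/ into its own standalone dist/*.js;
# --bundle inlines whatever each entry imports (shared/ code, axios, etc.),
# while --platform=node keeps Node built-ins external and targets CommonJS.
npx esbuild scripts/*.ts --bundle --platform=node --outdir=dist
```

Each resulting dist/testN.js can then be executed independently with `node`.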

Example folder structure:

scripts/
  test1.ts
  test2.ts
  test3.ts
shared/
  shared1.ts
  shared2.ts
  httpRequestAgnostic.ts

Thank you.



from How to build bundled scripts from a monolithic Node TypeScript repository?

How to customize the native navigator.share() function?

I'm trying to modify the url parameter of the navigator.share() function so that when someone shares the page using the mobile browser's share option, I'm able to customize the URL.

Why do I need this? I have a WordPress website, www.abcwebsite.com, that has a dynamic subdomain cloning system with a single backend site (not a multisite network; we use a plugin to identify the subdomain and modify certain texts). E.g. clonea.abcwebsite.com is an exact copy of the main site except for some text elements. For SEO purposes, we want to prevent the clone sites from being indexed, but we want to capitalize on the traffic the clone sites get.

The initial step I took was to change the canonical meta tag to point to the root domain. Later I identified that navigator.share() by default uses the canonical link and, if none is found, falls back to location.href. So when someone shared the clone page using the browser's share option, it shared the main site link instead of the clone link. Not a feasible solution.

The next step was to completely remove the canonical meta tag and move it to the HTTP headers. That solved our problems with Google. But recently I noticed that the canonical in the header doesn't work in Bing, so all the thousands of our clone sites are now appearing in Bing results.

I believe the only way to go about it is to add the canonical meta tag pointing to the main website and, when navigator.share() is initiated by the browser's share option, pass the clone site URL into the share.

This is what I have tried; with it I get a "Maximum call stack size exceeded" error:

var defaultNavigator = navigator;
function shareThis($args){ return defaultNavigator.share($args); }
navigator.share = function($args){ return shareThis($args); };
navigator.share({ url: "https://www.abcwebsite.com", text: "Test", title: document.title });

and with this I get "Failed to execute 'share' on 'Navigator': Illegal invocation":

var defaultShare = navigator.share;
function shareThis($args){ return defaultShare($args); }
navigator.share = function($args){ return shareThis($args); };
navigator.share({ url: "https://www.abcwebsite.com", text: "Test", title: document.title });


from How to customizing the native navigator.share() function?

Wednesday 30 August 2023

Prefetching related objects for templates

I have two models, Signup and Report, related via a third model, Training:

class Signup(models.Model):
    pilot = models.ForeignKey(
        settings.AUTH_USER_MODEL, on_delete=models.CASCADE, related_name="signups"
    )
    training = models.ForeignKey(
        Training, on_delete=models.CASCADE, related_name="signups"
    )

class Report(models.Model):
    training = models.OneToOneField(
        "trainings.Training", on_delete=models.CASCADE, primary_key=True
    )

And e.g. in the update view for a Report I try to prefetch the Signups and their Pilots and sort them into the context:

class ReportUpdateView(generic.UpdateView):
    model = Report

    def get_object(self):
        training = get_object_or_404(Training, date=self.kwargs["date"])
        report = get_object_or_404(
            Report.objects.select_related("training")
            .prefetch_related("training__signups__pilot"),
            training=training,
        )
        return report

    def get_context_data(self, **kwargs):
        context = super().get_context_data(**kwargs)
        ...
        runs_by_signup = {}
        for signup in self.object.training.selected_signups:
            runs_by_signup[signup] = [...]
        context["runs_by_signup"] = runs_by_signup
        return context

and in the corresponding template I call signup.pilot like so:


However, each signup.pilot results in a call to the database to figure out the corresponding pilot. Is there a way to avoid these extra calls, or do I have to store all the information in get_context_data?
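A hedged sketch of one likely fix: if `selected_signups` is a property or method that filters `self.signups` (its definition isn't shown above), the filtered queryset bypasses the prefetch cache, so every access hits the database again. Prefetching the filtered set into its own attribute with `Prefetch`/`to_attr` and iterating that instead keeps everything in memory (the attribute name `selected_signups_list` is made up; `Training`, `Signup`, `Report` are the models from the question):

```python
from django.db.models import Prefetch
from django.shortcuts import get_object_or_404

def get_object(self):
    training = get_object_or_404(Training, date=self.kwargs["date"])
    return get_object_or_404(
        Report.objects.select_related("training").prefetch_related(
            Prefetch(
                "training__signups",
                # select_related here is what removes the per-signup pilot query;
                # add the same .filter(...) that selected_signups applies, if any
                queryset=Signup.objects.select_related("pilot"),
                to_attr="selected_signups_list",
            )
        ),
        training=training,
    )
```

get_context_data (and the template) would then loop over self.object.training.selected_signups_list rather than selected_signups.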



from Prefetching related objects for templates

Ghidra Python script to print codeunits with symbols

I'm using Ghidra to disassemble and study a 68000 binary. I want to write a Python script to get a pretty print version of the disassembly (Save as menu won't be sufficient here).

I thought about simply iterating through code units, printing labels if any, then the code unit. But I get things like:

move.w #-0x5d56,(0xffffa602).w
bsr.b 0x000002c2

while in Ghidra Listing window, it was :

move.w #0xA2AA ,(ptr_to_last_updatable_bg_area).w
bsr.b  set_reg_values_2e2

How can I, at least, recover symbols from addresses (ptr_to_last_updatable_bg_area and set_reg_values_2e2) and, at best, formatted values (unsigned 0xA2AA rather than signed -0x5d56)?
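The Listing window renders operands through a CodeUnitFormat that substitutes labels for raw addresses and honours display formats, and a script can reuse that same formatter. A minimal sketch (Jython, run from Ghidra's Script Manager, where `currentProgram` and `state` are provided by the script environment; not standalone Python):

```python
# Runs inside Ghidra's Script Manager (Jython), not as standalone Python.
from ghidra.app.util.viewer.field import BrowserCodeUnitFormat

# BrowserCodeUnitFormat mirrors the Listing view's rendering
# (label substitution, formatted scalars), using the tool for services.
fmt = BrowserCodeUnitFormat(state.getTool())

listing = currentProgram.getListing()
for cu in listing.getCodeUnits(True):  # True = iterate forward from min address
    label = cu.getLabel()              # primary symbol at this address, if any
    if label is not None:
        print("%s:" % label)
    print("    %s" % fmt.getRepresentationString(cu))
```

This should produce `bsr.b set_reg_values_2e2`-style output; operand rendering (signed vs. unsigned) follows whatever format or equate is set in the Listing, so adjusting it there changes the script's output too.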



from Ghidra Python script to print codeunits with symbols

File Download and Streaming Process with Next.js and CDN Integration

I am attempting to create a download system for large files on my website. I am using Next.js, and my large files are hosted on a CDN. What I need is to download several files from my CDN, create a zip file, and send this archive file to the client. I already have this working with the following structure:

/api/download.ts:

export const getS3Object = async (bucketParams: any) => {
  try {
    const response = await s3Client.send(new GetObjectCommand(bucketParams))
    return response
  } catch (err) {
    console.log('Error', err)
  }
}

async function downloadRoute(req: NextApiRequest, res: NextApiResponse) {
  if (req.method === 'POST') {
    res.setHeader('Content-Disposition', 'attachment; filename=pixel_capture_HDR.zip')
    res.setHeader('Content-Type', 'application/zip')

    const fileKeys = req.body.files
    const archive = archiver('zip', { zlib: { level: 9 } })

    archive.on('error', (err: Error) => {
      console.error('Error creating ZIP file:', err)
      res.status(500).json({ error: 'Error creating ZIP file' })
    })

    archive.pipe(res)

    try {
      for (const fileKey of (fileKeys as string[])) {
        const params = { Bucket: 'three-assets', Key: fileKey }
        const s3Response = await getS3Object(params) as Buffer
        archive.append(s3Response.Body, { name: fileKey.substring(fileKey.lastIndexOf('/') + 1) })
      }

      archive.finalize()
    } catch (error) {
      console.error('Error retrieving files from S3:', error)
      res.status(500).json({ error: 'Error retrieving files from S3' })
    }
  } else {
    res.status(405).json({ error: `Method '${req.method}' Not Allowed` })
  }
}

export const config = { api: { responseLimit: false } }

export default withIronSessionApiRoute(downloadRoute, sessionOptions)

my download function in pages/download.tsx:

  const handleDownload = async() => {
    setLoading(true)

    const files = [files_to_download]

    const response = await fetch('/api/download', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ files: files })
    })

    if (response.ok) {
      const blob = await response.blob()
      const url = URL.createObjectURL(blob)

      const link = document.createElement('a')
      link.href = url
      link.download = 'pixel_capture_HDR.zip'
      link.click()

      URL.revokeObjectURL(url)
    }

    setLoading(false)
  }

With the above structure, I display a loading UI on the client when they start fetching the files. Once everything is ready and zipped by the server, the zip file is downloaded almost instantly to the user’s download folder.

The problem with this structure is that if the user closes the browser tab, the download stops and fails. What I hope to achieve is to stream/pipe the download and zipping directly into the browser's download queue, if that makes sense.

I attempted to use Readable Streams on the client side, which strangely seems to work because when I log the push function, I can see the values being logged and the “done” variable changing from false to true when everything is downloaded. However, the actual browser download of the zip file is still triggered when everything is ready, instead of “streaming” the download as soon as the user clicks the download button.

  if (response.ok) {
    const contentDispositionHeader = response.headers.get('Content-Disposition')
    
    const stream = response.body
    const reader = stream.getReader()

    const readableStream = new ReadableStream({
      start(controller) {
        async function push() {
          const { done, value } = await reader.read()
          if (done) {
            controller.close()
            return
          }
          controller.enqueue(value)
          push()
        }
        push()
      }
    })

    const blob = new Blob([readableStream], { type: 'application/octet-stream' })
    const url = URL.createObjectURL(blob)
  }

I realize it’s quite challenging to explain what I want, and I hope it’s understandable and achievable.

I would greatly appreciate any tips on how to achieve such a solution.

Thanks in advance for the help!



from File Download and Streaming Process with Next.js and CDN Integration

Bootstrap 5 - Navigation in carousel is not working

I am using bootstrap 5 and want to implement an image carousel and a modal.

Here is my example code.

<link href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.1/dist/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-4bw+/aepP/YC94hEpVNVgiZdgIC5+VKNBQNGCHeKRQN+PtmoHDEXuppvnDJzQIu9" crossorigin="anonymous">
<link href="https://cdn.jsdelivr.net/npm/bootstrap-icons@1.10.5/font/bootstrap-icons.css" rel="stylesheet">

<script src="https://code.jquery.com/jquery-3.6.0.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/bootstrap@5.3.1/dist/js/bootstrap.bundle.min.js" integrity="sha384-HwwvtgBNo3bZJJLYd8oVXjrBZt8cqVSpeBNS5n7C8IVInixGAoxmnlMuBnhbgrkm" crossorigin="anonymous"></script>

<div class="row">
  <div class="col-md-6">
    <div id="carouselExampleControls" class="carousel slide" data-bs-ride="carousel">
      <div class="carousel-inner">

        <div class="carousel-item">
          <img src="https://via.placeholder.com/640x480.png/0088ff?text=animals+ratione" class="d-block w-100" alt="ab natus nemo">
        </div>
        <div class="carousel-item">
          <img src="https://via.placeholder.com/640x480.png/009999?text=animals+illum" class="d-block w-100" alt="ab natus nemo">
        </div>
        <div class="carousel-item active">
          <img src="https://via.placeholder.com/640x480.png/007766?text=animals+dignissimos" class="d-block w-100" alt="ab natus nemo">
        </div>
        <div class="carousel-item ">
          <img src="https://via.placeholder.com/640x480.png/0077aa?text=animals+ad" class="d-block w-100" alt="ab natus nemo">
        </div>
        <div class="carousel-item ">
          <img src="https://via.placeholder.com/640x480.png/0000aa?text=animals+totam" class="d-block w-100" alt="ab natus nemo">
        </div>
        <div class="carousel-item ">
          <img src="https://via.placeholder.com/640x480.png/000022?text=animals+et" class="d-block w-100" alt="ab natus nemo">
        </div>
        <div class="carousel-item ">
          <img src="https://via.placeholder.com/640x480.png/0066ee?text=animals+sed" class="d-block w-100" alt="ab natus nemo">
        </div>
        <div class="carousel-item ">
          <img src="https://via.placeholder.com/640x480.png/00dd33?text=animals+neque" class="d-block w-100" alt="ab natus nemo">
        </div>
      </div>
      <a class="carousel-control-prev" href="#carouselExampleControls" role="button" data-slide="prev">
        <span class="carousel-control-prev-icon" aria-hidden="true"></span>
        <span class="sr-only">Previous</span>
      </a>
      <a class="carousel-control-next" href="#carouselExampleControls" role="button" data-slide="next">
        <span class="carousel-control-next-icon" aria-hidden="true"></span>
        <span class="sr-only">Next</span>
      </a>
    </div>
  </div>
  <div class="col-md-6">
    <h3>ab natus nemo</h3>
    <p>Architecto corrupti nulla dolorum sint. Rerum consequatur quidem et autem nobis qui. Hic fugiat voluptate dignissimos sed officia in odio.</p>
    <h4>$5.99</h4>
    <button type="button" class="btn btn-primary" data-bs-toggle="modal" data-bs-target="#buyModal">
                    Buy
                </button>
  </div>
</div>

<div class="modal fade" id="buyModal" tabindex="-1" role="dialog" aria-labelledby="buyModalLabel" aria-hidden="true">
  <div class="modal-dialog" role="document">
    <div class="modal-content">
      <div class="modal-header">
        <h5 class="modal-title" id="buyModalLabel">Buy: ab natus nemo</h5>
        <button type="button" class="close" data-dismiss="modal" aria-label="Close">
                        <span aria-hidden="true">&times;</span>
                    </button>
      </div>
      <div class="modal-body">
        <p>You are about to purchase for $5.99.</p>
        <form method="POST" action="">
          <input type="hidden" name="_token" value="m46pmcbo0G2yuMVNyfwpcY07AomhmCY5AD2OixFy"> <button type="submit" class="btn btn-primary">Confirm</button>
        </form>
      </div>
    </div>
  </div>
</div>

In the image carousel the two navigation buttons (next & previous) do not work.

enter image description here

In my modal window the close button is not rendered properly.

enter image description here

I tried the standard examples from Bootstrap, but they do not work with my code.

Any suggestions what I am doing wrong?



from Bootstrap 5 - Navigation in carousel is not working

Tuesday 29 August 2023

Problem scraping Amazon using requests: I get blocked even when using cookie and headers. I can only scrape using a browser. Any solution?

The requests module isn't working for me anymore when trying to scrape Amazon. I've tried using cookies, headers, and changing IPs, but nothing really works other than scraping through a browser. Does anyone know how they're able to do it, and whether there's a good workaround using requests?

The really odd thing is that the request, when sent through cURL, returns the page; but if I turn it into Python code, it returns a captcha request that I can't see in my browser and that doesn't go away even with cookies.

For example, this cURL request returns the Amazon main page, but when turned into Python it returns a captcha request:

curl -L -vvv http://amazon.com -H "User-Agent:Mozilla 5.0"

This is my current code. I copied the cURL request directly from the browser and turned it into Python code; it's still not working:

import requests

cookies = {
    'session-id': '135-4585428-6195300',
    'session-id-time': '2082787201l',
    'i18n-prefs': 'USD',
    'sp-cdn': '"L5Z9:IL"',
    'ubid-main': '132-1503580-7678418',
    'session-token': 'R5XVE3t8VeX8bRwnjuxXwONDgBnxkngfLfzobFxK5HL+8QaofrVEPjv8Mvta3D6EMlaiFeOyhjjiHkHLjjRwlh9seQ0wsfXE0BU0csh2Wtx6q6r630bsx5VvbBIQcyVAPRkgvL5wgU12P39t5iCZ7b3ykFjRvb9qe7eScZC/F9DJ+NuFMOVP+Z7OQtlZNQzcYrKmWTJH0HJZho8VtJBish0ATwfLhVI+Ihu1ioHYUfSUNDdjQFgG7SyiKZDufkXekZZGaF3x24vY9haBeJVnE9GjmMN+XHySuQtP/stlZmhlp9JOH17+JTZHVsCn/SEONdK5QhETXzoaQ+9YvptxA+v49bgXJn+L',
    'csm-hit': 'tb:NBK78382HSSRXD9W22YX+s-SKXXAE4EMPQ2XYNGK1G0|1692968547644&t:1692968547644&adb:adblk_no',
}

headers = {
    'authority': 'www.amazon.com',
    'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7',
    'accept-language': 'en-US,en;q=0.9',
    'cache-control': 'max-age=0',
    # 'cookie': 'session-id=135-4585428-6195300; session-id-time=2082787201l; i18n-prefs=USD; sp-cdn="L5Z9:IL"; ubid-main=132-1503580-7678418; session-token=R5XVE3t8VeX8bRwnjuxXwONDgBnxkngfLfzobFxK5HL+8QaofrVEPjv8Mvta3D6EMlaiFeOyhjjiHkHLjjRwlh9seQ0wsfXE0BU0csh2Wtx6q6r630bsx5VvbBIQcyVAPRkgvL5wgU12P39t5iCZ7b3ykFjRvb9qe7eScZC/F9DJ+NuFMOVP+Z7OQtlZNQzcYrKmWTJH0HJZho8VtJBish0ATwfLhVI+Ihu1ioHYUfSUNDdjQFgG7SyiKZDufkXekZZGaF3x24vY9haBeJVnE9GjmMN+XHySuQtP/stlZmhlp9JOH17+JTZHVsCn/SEONdK5QhETXzoaQ+9YvptxA+v49bgXJn+L; csm-hit=tb:NBK78382HSSRXD9W22YX+s-SKXXAE4EMPQ2XYNGK1G0|1692968547644&t:1692968547644&adb:adblk_no',
    'device-memory': '8',
    'downlink': '10',
    'dpr': '1',
    'ect': '4g',
    'rtt': '100',
    'sec-ch-device-memory': '8',
    'sec-ch-dpr': '1',
    'sec-ch-ua': '"Chromium";v="116", "Not)A;Brand";v="24", "Microsoft Edge";v="116"',
    'sec-ch-ua-mobile': '?0',
    'sec-ch-ua-platform': '"Windows"',
    'sec-ch-ua-platform-version': '"10.0.0"',
    'sec-ch-viewport-width': '1037',
    'sec-fetch-dest': 'document',
    'sec-fetch-mode': 'navigate',
    'sec-fetch-site': 'none',
    'sec-fetch-user': '?1',
    'upgrade-insecure-requests': '1',
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36 Edg/116.0.1938.54',
    'viewport-width': '1037',
}

response = requests.get('https://www.amazon.com/dp/B002G9UDYG', cookies=cookies, headers=headers)


from Problem scraping Amazon using requests: I get blocked even when using cookie and headers. I can only scrape using a browser. Any solution?

Capturing data written to a Node.js socket

The 'data' event from net.Socket can be used to read data received from a socket, for example:

const net = require('net');

const server = net.createServer((socket) => {
  socket.on('data', (buffer)=>{
    console.log(`Server received ${buffer}`)
    socket.write('World');
  });
});
server.listen(8080, '127.0.0.1');

const socket = new net.Socket();
socket.on('data', (buffer) => {
  console.log(`Client received ${buffer}`)
});
socket.connect(8080, '127.0.0.1');

setTimeout(()=>{
  socket.write('Hello');
},100);
setTimeout(()=>{
  socket.write('Hi');
},200);

setTimeout(()=>{
  console.log('Goodbye.');
  socket.destroy();
  server.close();
},300);

Prints:

Server received Hello
Client received World
Server received Hi
Client received World
Goodbye.

Is there a way to read data sent out from a socket? For example, if server were a remote server, is there a way to capture the data sent to the server from the socket, using events or any other method?



from Capturing data written to a Node.js socket

Chrome Source view interpreting JS block comments incorrectly

Chrome is currently displaying the bulk of the code in our JS files as commented out in the Source view. The first section of code is managed; we don't have direct control of it. It's also fairly static and hasn't changed in a long time. Our application code itself follows this managed code and, as a result, is displayed commented out.

That said, as of a day or two ago, I started seeing this issue when I try to debug our application.

Chrome Source view

As you can see, the block comments are not terminating at the */ as expected. However, while RUNNING the code, it is properly executed, as indicated by the paused execution on line 4.

I can add a /* */ at the top of a file and the code in that file will appear in source view correctly, and I can debug.

The bottom line is I can no longer debug efficiently.

My co-workers are not experiencing this issue, even on the same server, which makes it even weirder. I'm running the most up to date Chrome (Version 116.0.5845.111 (Official Build) (64-bit)) and have disabled all extensions.

For comparison, here's the same code in FF:

enter image description here



from Chrome Source view interpretting JS block comments incorrectly

Monday 28 August 2023

Add dataframe rows based on external condition

I have this data frame:

print(df)

    Env location lob      grid row server        model        make          slot     
    Prod USA     Market   AB3 bc2  Server123     Hitachi        dcs           1        
    Prod USA     Market   AB3 bc2  Server123     Hitachi        dcs           2        
    Prod USA     Market   AB3 bc2  Server123     Hitachi        dcs           3        
    Prod USA     Market   AB3 bc2  Server123     Hitachi.       dcs           4        
    Dev  EMEA    Ins.     AB6 bc4  Serverabc     IBM            abc           3        
    Dev  EMEA    Ins.     AB6 bc4  Serverabc     IBM            abc           4        
    Dev  EMEA    Ins.     AB6 bc4  Serverabc     IBM            abc           5        
    Dev  EMEA    Ins.     AB6 bc4  Serverabc     IBM            abc           6
    UAT  PAC     Retail   AB6 bc4  Serverzzz     Cisco          ust           3        
    UAT  PAC     Retail   BB6 bc4  Serverzzz     Cisco          ust           4        
    UAT  PAC     Retail   BB6 bc4  Serverzzz     Cisco          ust           5        
    UAT  PAC     Retail   BB6 bc4  Serverzzz     Cisco          ust           6

In this example:

  • If the model is IBM, slots start from slot=3 and there must be 8 slots, i.e. they go from 3 to 10. In this case, only slots 3 to 6 are present.
    • Therefore, I need to add 4 more rows (slots 7, 8, 9, 10).
  • If the model is Cisco, the row count needs to be 6. Only slots 3 to 6 are present.
    • Therefore, I need to add 2 more rows.

New rows:

  • must repeat the last row for the model, while incrementing the slot number
  • Their "grid" cell must indicate "available".

This needs to be done programmatically: given the model, I need to know the total number of slots, and if the number of slots falls short, I need to create new rows.

The final data frame needs to be like this:

    Env location lob    grid  row server        model        make               slot     
    Prod USA     Market AB3       bc2  Server123     Hitachi        dcs           1        
    Prod USA     Market AB3       bc2  Server123     Hitachi        dcs           2        
    Prod USA     Market AB3       bc2  Server123     Hitachi        dcs           3        
    Prod USA     Market AB3       bc2  Server123     Hitachi.       dcs           4        
    Dev  EMEA    Ins.   AB6       bc4  Serverabc     IBM            abc           3        
    Dev  EMEA    Ins.   AB6       bc4  Serverabc     IBM            abc           4        
    Dev  EMEA    Ins.   AB6       bc4  Serverabc     IBM            abc           5        
    Dev  EMEA    Ins.   AB6       bc4  Serverabc     IBM            abc           6
    Dev  EMEA    Ins.   available bc4  Serverabc     IBM            abc           7
    Dev  EMEA    Ins.   available bc4  Serverabc     IBM            abc           8
    Dev  EMEA    Ins.   available bc4  Serverabc     IBM            abc           9
    Dev  EMEA    Ins.   available bc4  Serverabc     IBM            abc           10
    UAT  PAC     Retail   AB6     bc4  Serverzzz     Cisco          ust           3        
    UAT  PAC     Retail   BB6     bc4  Serverzzz     Cisco          ust           4        
    UAT  PAC     Retail   BB6     bc4  Serverzzz     Cisco          ust           5        
    UAT  PAC     Retail   BB6     bc4  Serverzzz     Cisco          ust           6
    UAT  PAC     Retail  available bc4  Serverzzz     Cisco          ust          7
    UAT  PAC     Retail  available bc4  Serverzzz     Cisco          ust          8

I tried something like this:

def slots(row):
    if 'IBM' in row['model']:
        number_row = 8
    if 'Cisco' in row['model']:
        number_row = 6

I am not too familiar with pandas; I'm not even sure something like this is possible.
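A plain-Python sketch of the padding logic (the field names mirror the frame above; TOTAL_SLOTS encodes the assumed per-model totals from the requirements). In pandas, the same idea maps onto grouping by "model" and concatenating the filler rows back on:

```python
# Assumed totals: IBM needs 8 rows in all, Cisco needs 6.
TOTAL_SLOTS = {"IBM": 8, "Cisco": 6}

def pad_slots(rows):
    """rows: list of dicts for one model, ordered by slot; returns a padded copy."""
    out = [dict(r) for r in rows]
    target = TOTAL_SLOTS.get(out[-1]["model"])  # models not listed are left alone
    while target is not None and len(out) < target:
        filler = dict(out[-1])        # repeat the last row for the model...
        filler["slot"] += 1           # ...incrementing the slot number
        filler["grid"] = "available"  # mark the grid cell "available"
        out.append(filler)
    return out
```

With a DataFrame, one way to apply this is `df.groupby("model", sort=False)` over `to_dict("records")` per group, then rebuild with `pd.DataFrame` / `pd.concat`.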



from Add dataframe rows based on external condition

opencv auto georeferencing scanned map

I have the following sample image:

enter image description here

where I am trying to locate the coords of the four corners of the inner scanned map image like:

enter image description here

i have tried with something like this:

import cv2
import numpy as np

# Load the image
image = cv2.imread('sample.jpg')

# Convert to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Apply Gaussian blur
blurred = cv2.GaussianBlur(gray, (5, 5), 0)

# Perform edge detection
edges = cv2.Canny(blurred, 50, 150)

# Find contours in the edge-detected image
contours, _ = cv2.findContours(edges.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Iterate through the contours and filter for squares or rectangles
for contour in contours:
    perimeter = cv2.arcLength(contour, True)
    approx = cv2.approxPolyDP(contour, 0.04 * perimeter, True)

    if len(approx) == 4:
        x, y, w, h = cv2.boundingRect(approx)
        aspect_ratio = float(w) / h

        # Adjust this threshold as needed
        if aspect_ratio >= 0.9 and aspect_ratio <= 1.1:
            cv2.drawContours(image, [approx], 0, (0, 255, 0), 2)

# Display the image with detected squares/rectangles
cv2.imwrite('detectedy.png', image)

but all i get is something like this:

enter image description here

UPDATE:

So I have found this code, which should do what I require:

import cv2
import numpy as np
from subprocess import call

file = "samplebad.jpg"
img = cv2.imread(file)
orig = img.copy()

# sharpen the image (weighted subtract gaussian blur from original)
'''
https://stackoverflow.com/questions/4993082/how-to-sharpen-an-image-in-opencv
larger smoothing kernel = more smoothing
'''
blur = cv2.GaussianBlur(img, (9,9), 0)
sharp = cv2.addWeighted(img, 1.5, blur, -0.5, 0)

# convert the image to grayscale
#       gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.cvtColor(sharp, cv2.COLOR_BGR2GRAY)

# smooth  whilst keeping edges sharp
'''
(11) Filter size: Large filters (d > 5) are very slow, so it is recommended to use d=5 for real-time applications, and perhaps d=9 for offline applications that need heavy noise filtering.
(17, 17) Sigma values: For simplicity, you can set the 2 sigma values to be the same. If they are small (< 10), the filter will not have much effect, whereas if they are large (> 150), they will have a very strong effect, making the image look "cartoonish".
These values give the best results based upon the sample images
'''
gray = cv2.bilateralFilter(gray, 11, 17, 17)

# detect edges
'''
(100, 200) Any edges with intensity gradient more than maxVal are sure to be edges and those below minVal are sure to be non-edges, so discarded. Those who lie between these two thresholds are classified edges or non-edges based on their connectivity. If they are connected to "sure-edge" pixels, they are considered to be part of edges.
'''
edged = cv2.Canny(gray, 100, 200, apertureSize=3, L2gradient=True)
cv2.imwrite('./edges.jpg', edged)

# dilate edges to make them more prominent
kernel = np.ones((3,3),np.uint8)
edged = cv2.dilate(edged, kernel, iterations=1)
cv2.imwrite('edges2.jpg', edged)

# find contours in the edged image, keep only the largest ones, and initialize our screen contour
cnts, hierarchy = cv2.findContours(edged.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)[:10]
screenCnt = None

# loop over our contours
for c in cnts:

    # approximate the contour
    peri = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.02 * peri, True)

    # if our approximated contour has four points, then we can assume that we have found our screen
    if len(approx) > 0:
        screenCnt = approx
        print(screenCnt)
        cv2.drawContours(img, [screenCnt], -1, (0, 255, 0), 10)
        cv2.imwrite('contours.jpg', img)
        break

# reshaping contour and initialise output rectangle in top-left, top-right, bottom-right and bottom-left order
pts = screenCnt.reshape(4, 2)
rect = np.zeros((4, 2), dtype = "float32")

# the top-left point has the smallest sum whereas the bottom-right has the largest sum
s = pts.sum(axis = 1)
rect[0] = pts[np.argmin(s)]
rect[2] = pts[np.argmax(s)]

# the top-right will have the minimum difference and the bottom-left will have the maximum difference
diff = np.diff(pts, axis = 1)
rect[1] = pts[np.argmin(diff)]
rect[3] = pts[np.argmax(diff)]

# compute the width and height  of our new image
(tl, tr, br, bl) = rect
widthA = np.sqrt(((br[0] - bl[0]) ** 2) + ((br[1] - bl[1]) ** 2))
widthB = np.sqrt(((tr[0] - tl[0]) ** 2) + ((tr[1] - tl[1]) ** 2))
heightA = np.sqrt(((tr[0] - br[0]) ** 2) + ((tr[1] - br[1]) ** 2))
heightB = np.sqrt(((tl[0] - bl[0]) ** 2) + ((tl[1] - bl[1]) ** 2))

# take the maximum of the width and height values to reach our final dimensions
maxWidth = max(int(widthA), int(widthB))
maxHeight = max(int(heightA), int(heightB))

# construct our destination points which will be used to map the screen to a top-down, "birds eye" view
dst = np.array([
    [0, 0],
    [maxWidth - 1, 0],
    [maxWidth - 1, maxHeight - 1],
    [0, maxHeight - 1]], dtype = "float32")

# calculate the perspective transform matrix and warp the perspective to grab the screen
M = cv2.getPerspectiveTransform(rect, dst)
warp = cv2.warpPerspective(orig, M, (maxWidth, maxHeight))

#       cv2.imwrite('./cvCropped/frame/' + file, warp)

# crop border off (85px is empirical)
#       cropBuffer = 85     # this is for the old (phone) images
cropBuffer = 105    # this is for those taken by Nick
height, width = warp.shape[:2]
cropped = warp[cropBuffer:height-cropBuffer, cropBuffer:width-cropBuffer]

# output the result
cv2.imwrite('cropped.jpg', cropped)

But because these old scanned maps have a fold in them, it fails and only detects one side, like:

[screenshot]

Is there a way to somehow get OpenCV to ignore the center region?
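One possible direction (a sketch, not tested on the actual scans): blank out a vertical band around the centre of the Canny edge map before calling cv2.findContours, while keeping a margin near the image borders intact so the outer map outline is not cut. The band and margin fractions below are assumed tuning values, not taken from the question.

```python
import numpy as np

def mask_center_region(edges, band_frac=0.08, border_margin_frac=0.15):
    """Zero out a vertical band around the centre of an edge map,
    leaving a margin near the top/bottom borders intact so the outer
    map outline is not broken. Fractions are assumed tuning values."""
    h, w = edges.shape[:2]
    half = max(1, int(w * band_frac / 2))
    margin = int(h * border_margin_frac)
    masked = edges.copy()
    masked[margin : h - margin, w // 2 - half : w // 2 + half] = 0
    return masked

# Usage with the variables from the script above (hypothetical):
# edged = mask_center_region(edged)
# cnts, hierarchy = cv2.findContours(edged, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
```

Whether this is enough depends on how close the fold runs to the map border; the fractions would need tuning per scan batch.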



from opencv auto georeferencing scanned map

Pandas and pandasgui show leads to ImportError

According to the pandasgui docs I tried to run the following code:

import pandas as pd
from pandasgui import show
df = pd.DataFrame({'a':[1,2,3], 'b':[4,5,6], 'c':[7,8,9]})
show(df)

However, I get an error:

ImportError: cannot import name '_c_internal_utils' from partially initialized module 'matplotlib' (most likely due to a circular import) (c:\Users\Shayan\ypy1\lib\site-packages\matplotlib\__init__.py)

According to this post it seems related to the fact that the pandas and pandasgui import statements both use the same reference to a matplotlib function/module, so there is a circular import. I tried to change the code to

import pandas as pd
import pandasgui
df = pd.DataFrame({'a':[1,2,3], 'b':[4,5,6], 'c':[7,8,9]})
pandasgui.show(df)

But I get the same error. How can I fix this?

My python is version 3.11.4 | packaged by Anaconda, Inc.

My pandas is version 2.0.3; it was already installed in my Anaconda environment by default and I updated it with the Anaconda Navigator.

My pandasgui is version 0.2.14 and I installed it with pip install pandasgui. I updated the index in Anaconda navigator and it also displays version 0.2.14



from Pandas and pandasgui show leads to ImportError

Make a plotly R with interactive colors chosen by the user

I have some Shiny apps that are running. However, some contain graphics that may be of public interest. Thus, I am looking for a solution similar to the image below, from Infogram, where it is possible to edit the colors of the graph in a web environment for later download.

[screenshot]

Is it possible to do something similar in Shiny R? Being Plotly, maybe add a little plot configuration button on the toolbar? If Plotly is not possible, do you know of a similar package that allows this?

The "top" would be:

[screenshot]

A plotly R code example

library(plotly)
fig <- plot_ly(midwest, x = ~percollege, color = ~state, type = "box")
fig


from Make a plotly R with interactive colors chosen by the user

Scrapy - crawling archives of website plus all subdirectories

So I'm trying to scrape data from archived versions of a website using Scrapy. Here is my code:

import scrapy
from scrapy.crawler import *
from scrapy.item import *
from scrapy.linkextractors import *
from scrapy.loader import *
from scrapy.spiders import *
from scrapy.utils.log import *
from scrapy.utils.project import *

try:
    from urllib.parse import urlparse
except ImportError:
    from urlparse import urlparse
    
class VSItem(Item):
    value = Field()

class vsSpider(scrapy.Spider):
    name = "lever"
    start_urls = [
        "https://web.archive.org/web/20051120125133/http://www.novi.k12.mi.us/default.aspx"
    ]
    rules = (
            Rule(
                LinkExtractor(allow=r"https:\/\/web.archive.org\/web\/\d{14}\/http:\/\/www.novi.k12.mi.us\/.*"),
                callback="parse"
                ),
            )

    def parse(self, response):
        for elem in response.xpath("/html"):
            it = VSItem()
            it["value"] = elem.css("input[name='__VIEWSTATE']").extract()
            yield it
 
process = CrawlerProcess(get_project_settings())

process.crawl(vsSpider)
process.start() # the script will block here until the crawling is finished

I set the start_urls to https://web.archive.org/web/20051120125133/http://www.novi.k12.mi.us/ since that is the earliest archived version of the page.

This script extracts the element I want from the page listed, but then stops here.

My question is: How can I automatically crawl every single archive of both the homepage (/default.aspx) and every sub-directory of the main site (e.g. not just /default.aspx but also, for example, /Schools/noviHigh/default.aspx and everything else)? (Basically loop through every possible URL that matches /https:\/\/web.archive.org\/web\/.\d{14}/http:\/\/www.novi.k12.mi.us\/.*/g, the \d{14} is because the date stamp is in the form YYYYMMDDHHmmSS)
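As a side note, Scrapy only honors a `rules` attribute on a `CrawlSpider`, not on a plain `scrapy.Spider`, which may be why the crawl stops after the start URL. Independently of that, the URL pattern can be checked in isolation with the stdlib `re` module before wiring it into a `LinkExtractor`; the sample URLs below are taken from the question:

```python
import re

# The Wayback Machine URL pattern from the question, as a raw string;
# \d{14} matches the YYYYMMDDHHmmSS snapshot timestamp.
ARCHIVE_RE = re.compile(
    r"https://web\.archive\.org/web/\d{14}/http://www\.novi\.k12\.mi\.us/.*"
)

def is_archive_url(url):
    """Return True if url is an archived snapshot of the target site."""
    return ARCHIVE_RE.fullmatch(url) is not None

print(is_archive_url(
    "https://web.archive.org/web/20051120125133/http://www.novi.k12.mi.us/default.aspx"
))
```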



from Scrapy - crawling archives of website plus all subdirectories

Sunday 27 August 2023

Navigate to new page in React without losing data on current page

I have done some digging on this and I have also queried chat-gpt (lol) but I cannot seem to find any resources that tackle the problem. What I am trying to do is simple: on page A, I load a list of 100 items from my server; I can click into each item which would enable me navigate to a new detailed page for that item. When I click the back button, I do not want to have to load all 100 items again, lose my scroll position etc etc.

I know I can maintain state for my scroll position and just scrollTo() in my useEffect, but it gets a little trickier for the actual data because I don't want to have to store data for my list of 100 items (could be a lot more than a 100 in production).

Is there a way to neatly do this in react? I currently navigate using the useNavigate() hook.

EDIT -- I need to state something that makes this a real problem. Page A is paginated in batches of 10. There is a 'more' button I click which loads the next 10 items from the server. So let's say I click into element number 50. When I press the back button, I would like to come back to that element. The useEffect will make it so that only the first 10 elements are loaded again.



from Navigate to new page in React without losing data on current page

Warning: Text content did not match. Server: "562" Client: "563" when creating a timer in Nextjs

I want to implement a competition clock using Nextjs:

  1. competitionStartTime represents the competition start time

  2. When the user enters the page, I save Math.round(Date.now() / 1000) - competitionStartTime (how long the competition has been running)

  3. Using addOneSecond and setInterval, I keep adding one to competitionRunningTime to run the timer every second.

  4. It also has a pause function used to pause and resume the timer

My full demo code:

"use client";

import { useState, useEffect, useRef } from "react";

function calculateModelResult(duration) {
  // simulate the model calculation time, it costs 0.3s
  for (var i = 0; i < 1000000000; i++) {
    duration + 0;
  }
  return duration * 0.1;
}

export default function Page() {
  const competitionStartTime = 1692956171;
  const [competitionRunningTime, setCompetitionRunningTime] = useState(
    Math.round(Date.now() / 1000) - competitionStartTime
  );
  const [modelResult, setModelResult] = useState(0);
  const modelIntervalRef = useRef(0);
  const competitionTimeIntervalRef = useRef(0);
  const [pause, setPause] = useState(false);
  const addOneSecond = () => setCompetitionRunningTime((prev) => prev + 1);

  useEffect(() => {
    modelIntervalRef.current = window.setInterval(() => {
      setModelResult(calculateModelResult(competitionRunningTime));
    }, 1000);

    competitionTimeIntervalRef.current = window.setInterval(() => {
      addOneSecond();
      console.log(competitionRunningTime);
    }, 1000);

    return () => {
      window.clearInterval(modelIntervalRef.current);
      window.clearInterval(competitionTimeIntervalRef.current);
    };
  }, [competitionRunningTime]);

  const pasueFunction = () => {
    if (!pause) {
      clearInterval(competitionTimeIntervalRef.current);
    } else {
      competitionTimeIntervalRef.current = setInterval(addOneSecond, 1000);
    }
    setPause((prev) => !prev);
  };
  return (
    <>
      <h1>
        Hello, the time of the match is {competitionRunningTime}, the model result is {modelResult}.
      </h1>
      <button onClick={pasueFunction}>{pause ? "Run" : "Pause"}</button>
    </>
  );
}

However, I get two errors:

Error: Text content does not match server-rendered HTML.

Warning: Text content did not match. Server: "562" Client: "563"

See more info here: https://nextjs.org/docs/messages/react-hydration-error
Error: There was an error while hydrating. Because the error happened outside of a Suspense boundary, the entire root will switch to client rendering.

And the modelResult appears to have a delay:

Hello, the time of the match is 2904, the model result is 290.3.
Hello, the time of the match is 2938, the model result is 293.7.

It needs to be calculated using competitionRunningTime, and the calculation takes 0.3 s to finish; will that affect the time updating every second?

I have looked at warning text content did not match, but it didn't help with my current code. May I ask how I can solve this?



from Warning: Text content did not match. Server: "562" Client: "563" when creating a timer in Nextjs

ASP.NET_SessionID fetched from set-cookie response header using python script does not work, while ASP.NET_SessionID fetched from browser does

Here is the goal: I am trying to get the ASP.NET_SessionID using a Python request and then use it for further requests. I have been able to get the ASP.NET_SessionID from one of the requests that sets it via Set-Cookie in the response headers. However, when I make a subsequent request with that ASP.NET_SessionID, it does not work correctly: the response is 200, but the response data is not accurate; it is the default data returned for an expired session.

Here is the code for getting the ASP.NET_SessionID, I am using session as it automatically takes care of the cookies:

################### get asp session from set-cookie in response headers
url = "https://finder.humana.com/finder/v1/pfp/get-language-selectors"

response=session.get(url, timeout=20)

print(response, url)
jsonResponseHeaders = response.headers
aspSession = jsonResponseHeaders['Set-Cookie'].split(';')[0]
cookies=session.cookies.get_dict()
print(response.headers)
print('ASP.NET_SessionID:',aspSession)
print(session.cookies.get_dict())
cookieString = "; ".join([str(x)+"="+str(y) for x,y in cookies.items()])
print('Cookie String:',cookieString)

Here is the subsequent request:

################# provider plan/network request
url = "https://finder.humana.com/finder/v1/pfp/get-networks-by-provider"        

payload = {"providerId":311778,"customerId":1,"coverageType":3}     

response = session.post(url, json=payload)

print(response,url)
print(response.headers)
print(session.cookies.get_dict())
cookies=session.cookies.get_dict()
cookieString = "; ".join([str(x)+"="+str(y) for x,y in cookies.items()])
print('Cookie String:',cookieString)
print(response.text)
print()

The part I do not understand is that the ASP.NET_SessionID I get from the browser works fine within Postman or Python requests when I send it within the cookie in headers; however, the one I get from Python requests does not work.
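As an aside, the manual `jsonResponseHeaders['Set-Cookie'].split(';')[0]` in the first snippet only works when the session cookie happens to come first; the stdlib `http.cookies.SimpleCookie` parses the header more robustly. A small sketch (the header value below is a made-up example, not a real response from the site):

```python
from http.cookies import SimpleCookie

def extract_cookie(set_cookie_header, name="ASP.NET_SessionId"):
    """Parse a Set-Cookie header value and return 'name=value' for the
    requested cookie, or None if it is absent."""
    jar = SimpleCookie()
    jar.load(set_cookie_header)
    if name in jar:
        return f"{name}={jar[name].value}"
    return None

# Hypothetical header value for illustration:
print(extract_cookie("ASP.NET_SessionId=abc123; path=/; HttpOnly"))
# → ASP.NET_SessionId=abc123
```

This does not explain the expired-session behaviour itself; that is more likely caused by the server expecting additional cookies or browser-like headers than by how the cookie string is assembled.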



from ASP.NET_SessionID fetched from set-cookie response header using python script does not work, while ASP.NET_SessionID fetched from browser does

Office.js: ContentControl in table broken after inserting row

I'm using Microsoft® Word for Microsoft 365 MSO (Version 2307 Build 16.0.16626.20170) 64-bit, running the add-in sample "Office Add-in Task Pane project" with JavaScript from the Microsoft tutorial "Create a Word task pane add-in" page.

Here is my "RUN" button on-click handler:

export async function run() {
  return Word.run(async (context) => {
    /**
     * Create Table
     */
    const data = [
      ["Tokyo", "Beijing", "Seattle"],
      ["Apple", "Orange", "Pineapple"]
    ];
    const table = context.document.body.insertTable(2, 3, "Start", data);
    table.styleBuiltIn = Word.BuiltInStyleName.gridTable5Dark_Accent2;
    table.styleFirstColumn = false;

    /**
     * Selecting first row and inserting ContentControl
     */
    table.rows.getFirst().select("Select");
    let range = context.document.getSelection();
    range.insertContentControl();
    /**
     * At this point ContentControl covers only first row
     */

    /**
     * Inserting new row to the end
     */
    const firstTable = context.document.body.tables.getFirst();
    firstTable.addRows("End", 1, [["New", "Row", "Here"]]);

    /**
     * At this point ContentControl spread for all rows :(    
    */

    await context.sync();
  });
}

In the code above only the first row is inside the content control. But after adding a new row with firstTable.addRows("End", 1, [["New", "Row", "Here"]]), all rows end up inside the content control. How can I fix it?



from Office.js: ContentControl in table broken after inserting row

Cannot import a TypeScript library that was installed from a GitHub fork

I'm trying to use the probot library to build a GitHub app. However, as per this issue, probot does not support ESM modules, and I need ESM modules in order to have my app function properly.

Fortunately, this fork adds ESM support to the library. However, after installing the library by running the following line:

npm i github:pixelass/probot#feat/esm-it-plz

which installs without issue, I'm unable to import the library. When I add the line:

import { Probot } from "probot"

I get the following error:

Cannot find module 'probot' or its corresponding type declarations.ts(2307)

I even made my own fork of the fork and, as suggested by How to have npm install a typescript dependency from a GitHub url?, added the following to the module's package.json:

"postinstall": "tsc --outDir ./lib"

But even when I install my own fork, I get a different error:

Module '"probot"' has no exported member 'Probot'.ts(2305)

Does anyone know how to solve this issue?



from Cannot import a TypeScript library that was installed from a GitHub fork

Extending pydantic v2 model in Odoo

Odoo 16, Pydantic v2, extendable-pydantic 1.1.0

Use case:

  • Main module with pydantic model MainModel
  • One (or more) add-on modules which are dependant on Main module and extend MainModel with new fields
  • When only main module is active, the MainModel should have only field_a and field_b
  • When Addon module A(...) is installed, the MainModel should have an additional field field_c (...)

Simplified dummy implementation:

Main module , main.py

from extendable_pydantic import ExtendableModelMeta
from pydantic import BaseModel
from extendable import context, registry

class MainModel(BaseModel, metaclass=ExtendableModelMeta):
    field_a: str
    field_b: int
    
_registry = registry.ExtendableClassesRegistry()
context.extendable_registry.set(_registry)
_registry.init_registry()

... fastApi endpoints that utilize MainModel below ...

Addon module A, extended_main.py

from odoo.addons.main_module.modules.main import MainModel

class ExtendedMainModel(MainModel, extends=MainModel):
    field_c: int

The result is that ExtendedMainModel is ignored and MainModel has only field_a and field_b



from Extending pydantic v2 model in Odoo

Saturday 26 August 2023

Why does OR-Tools' CP-SAT favor the first variable?

I have a function that solves the Stigler Diet problem using the CP-SAT solver (instead of the linear solver) and minimizes error (instead of minimizing the quantity of food).

Initially, I was trying to use AddLinearConstraint to specify the minimum and maximum allowable amount of each nutrient, but the solver returned that this problem was "Infeasible". So, I just used the minimum bound and printed which nutrient was out of bounds.

For some reason, the solver chooses a solution whose first variable is far larger than the other variables, no matter which variable (nutrient) is first. For example, the allowable amount of Calcium is between 1,300 mg and 3,000 mg, but it chooses a solution of 365,136 mg. If I change the first variable to Carbohydrates, then the new solution's Carbohydrate value is similarly out of proportion, while the other variables (including Calcium) are within the allowable bounds.

Why does the solver favor the first variable? If I can understand this, then I think I should be able to figure out how to get all variables within the bounds.

Below is the essential part of my program. Full working code is here: https://github.com/TravisDart/nutritionally-complete-foods

# "nutritional_requirements" and "foods" are passed in from CSV files after some preprocessing.
def solve_it(nutritional_requirements, foods):
    model = cp_model.CpModel()

    quantity_of_food = [
        model.NewIntVar(0, MAX_NUMBER * NUMBER_SCALE, food[2]) for food in foods
    ]
    error_for_quantity = [
        model.NewIntVar(0, MAX_NUMBER * NUMBER_SCALE, f"Abs {food[2]}") for food in foods
    ]

    for i, nutrient in enumerate(nutritional_requirements):
        model.Add(
            sum([food[i + FOOD_OFFSET][0] * quantity_of_food[j] for j, food in enumerate(foods)]) > nutrient[1][0]
        )
        model.AddAbsEquality(
            target=error_for_quantity[i],
            expr=sum([food[i + FOOD_OFFSET][0] * quantity_of_food[j] for j, food in enumerate(foods)]) - nutrient[1][0],
        )

    model.Minimize(sum(error_for_quantity))

    solver = cp_model.CpSolver()
    # The solution printer displays the nutrient that is out of bounds.
    solution_printer = VarArraySolutionPrinter(quantity_of_food, nutritional_requirements, foods)

    status = solver.Solve(model, solution_printer)
    outcomes = [
        "UNKNOWN",
        "MODEL_INVALID",
        "FEASIBLE",
        "INFEASIBLE",
        "OPTIMAL",
    ]
    print(outcomes[status])


from Why does OR-Tools' CP-SAT favor the first variable?

JavaScript Drag and Drop - Dragging Copies

I am attempting to create a robust drag-and-drop event scheduler. I have almost all of it complete, but am stuck on one element: how to drag a copy of an item.

You can see the current version running here: jsfiddle

What I would like to be able to happen is this:

  1. When a LeadIn event is dragged from the Events column to a Screen column, the original LeadIn event should stay in the Events column.
  2. When a LeadIn event is dragged from a Screen column to another Screen column, the event should be moved to the new column.
  3. When a LeadIn event is dragged from a Screen column to the Events column, the event should be removed from the Screen column and not appear in the Events column.
  4. There should always be only one LeadIn event in the Events column.

Any help would be greatly appreciated.

Also, here is my current code:

HTML:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Sortable Event Scheduler</title>
    <link rel="preconnect" href="https://fonts.googleapis.com">
    <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
    <link href="https://fonts.googleapis.com/css2?family=Open+Sans:ital,wght@0,300;0,400;0,500;0,600;0,700;0,800;1,300;1,400;1,500;1,600;1,700;1,800&display=swap" rel="stylesheet">
    <link rel="stylesheet" href="css/styles.css">
</head>
<body>
    <div class="container">
        <div class="column" id="events">
            <div class="column-header">Events</div>
            <ul class="sortable-list">
                <li class="event" data-duration="00:15:00">
                    <div class="event-title">Welcome</div>
                    <div class="event-start-time"></div>
                    <div class="event-duration">Duration: 00:15:00</div>
                    <div class="event-end-time"></div>
                </li>
                <li class="event lead-in-event leadin-event" id="leadIn" data-duration="00:01:00" data-event-type="">
                    <div class="event-title">LeadIn</div>
                    <div class="event-start-time" style="display: none;"></div>
                    <div class="event-duration" style="display: none;">Duration: 00:01:00</div>
                    <div class="event-end-time" style="display: none;"></div>
                </li>
                <li class="event" data-duration="00:05:10">
                    <div class="event-title">Film 1</div>
                    <div class="event-start-time"></div>
                    <div class="event-duration">Duration: 00:05:10</div>
                    <div class="event-end-time"></div>
                </li>
                <li class="event" data-duration="00:09:40">
                    <div class="event-title">Film 2</div>
                    <div class="event-start-time"></div>
                    <div class="event-duration">Duration: 00:09:40</div>
                    <div class="event-end-time"></div>
                </li>
            </ul>
        </div>
        <div class="column" id="friday-screen-1" data-start-time="18:00:00">
            <div class="column-header">Friday - Screen 1</div>
            <ul class="sortable-list">
                <!-- Add events here -->
            </ul>
            <div class="column-total-duration"></div>
        </div>
        <div class="column" id="saturday-screen-1" data-start-time="08:30:00">
            <div class="column-header">Saturday - Screen 1</div>
            <ul class="sortable-list">
                <!-- Add events here -->
            </ul>
            <div class="column-total-duration"></div>
        </div>
        <div class="column" id="saturday-screen-2" data-start-time="08:30:00">
            <div class="column-header">Saturday - Screen 2</div>
            <ul class="sortable-list">
                <!-- Add events here -->
            </ul>
            <div class="column-total-duration"></div>
        </div>
        <div class="column" id="sunday-screen-1" data-start-time="08:30:00">
            <div class="column-header">Sunday - Screen 1</div>
            <ul class="sortable-list">
                <!-- Add events here -->
            </ul>
            <div class="column-total-duration"></div>
        </div>
        <div class="column" id="sunday-screen-2" data-start-time="08:30:00">
            <div class="column-header">Sunday - Screen 2</div>
            <ul class="sortable-list">
                <!-- Add events here -->
            </ul>
            <div class="column-total-duration"></div>
        </div>
    </div>
    <script src="https://code.jquery.com/jquery-3.6.0.min.js"></script>
    <script src="https://code.jquery.com/ui/1.12.1/jquery-ui.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/jqueryui-touch-punch/0.2.3/jquery.ui.touch-punch.min.js"></script>
    <script src="js/moment.js"></script>
    <script src="js/script.js"></script>
</body>
</html>


CSS:

body {
    font-family: 'Open Sans', sans-serif;
    font-size: 14px;
}

.container {
    display: flex;
    justify-content: space-between;
    align-items: flex-start;
    padding: 20px;
}

.column {
    width: calc(16.666% - 10px); /* Equal width for 6 columns with 10px gap */
    border: 1px solid #ddd;
    padding: 10px;
}

.column-header {
    font-weight: bold;
    margin-bottom: 10px;
}

.sortable-list {
    list-style: none;
    padding: 0;
    min-height: 50px;
}

.event {
    background-color: #3498db;
    color: white;
    padding: 5px;
    margin: 5px 0;
    cursor: pointer;
}

.event-duration, .event-start-time, .event-end-time {
    font-size: 12px;
}

.event-placeholder {
    background-color: #ddd;
    height: 20px;
    margin: 5px 0;
}

.event-title {
    font-weight: bold;
}

.total-duration {
    color: #666;
    font-size: 12px;
}


JAVASCRIPT:

$(document).ready(function () {
    // Call the sort function on page load to alphabetize the Events column
    sortEventsInEventsColumn();

    $(function() {
        $(".sortable-list").sortable({
            connectWith: ".sortable-list",
            placeholder: "event-placeholder",
            receive: function (event, ui) {
                var $targetColumn = $(this).closest(".column");

                if ($targetColumn.attr("id") !== "events") {
                    // Recalculate start and end times for the entire column
                    recalculateColumnTimes($targetColumn);

                    // Update total duration for the target column
                    updateTotalDuration($targetColumn);
                } else {
                    // Clear start and end times for the dragged event item
                    updateEventTimes(ui.item, "", "");
                }

                // Sort events in the Events column
                sortEventsInEventsColumn();
            },
            update: function (event, ui) {
                var $targetColumn = $(this).closest(".column");

                if ($targetColumn.attr("id") !== "events") {
                    // Recalculate start and end times for the entire column
                    recalculateColumnTimes($targetColumn);

                    // Update total duration for the target column
                    updateTotalDuration($targetColumn);
                } else {
                    // Clear start and end times for the moved event item
                    updateEventTimes(ui.item, "", "");
                }

                // Sort events in the Events column
                sortEventsInEventsColumn();
            },
            remove: function (event, ui) {
                var $sourceColumn = $(this).closest(".column");
                if ($sourceColumn.attr("id") !== "events") {
                    // Recalculate start and end times for the entire column
                    recalculateColumnTimes($sourceColumn);

                    // Update total duration for the source column
                    updateTotalDuration($sourceColumn);
                }

                // Sort events in the Events column
                sortEventsInEventsColumn();
            }
        }).disableSelection();
    });

    function recalculateColumnTimes($column) {
        var $events = $column.find(".event");
        var startTime = $column.data("start-time");

        $events.each(function (index) {
            var $event = $(this);
            var duration = $event.data("duration");
            var endTime = calculateEndTime(startTime, duration);

            // Update start and end times for the event
            updateEventTimes($event, "Start: " + formatTimeAMPM(startTime), "End: " + formatTimeAMPM(endTime));

            // Update startTime for the next event
            startTime = endTime;
        });
    }

    function updateEventTimes($event, startTime, endTime) {
        $event.find(".event-start-time").text(startTime);
        $event.find(".event-end-time").text(endTime);
    }

    function calculateEndTime(startTime, duration) {
        var start = moment(startTime, "HH:mm:ss");
        var dur = moment.duration(duration);
        var end = start.clone().add(dur);
        return end.format("HH:mm:ss");
    }

    function formatTimeAMPM(time) {
        return moment(time, "HH:mm:ss").format("h:mm:ss A");
    }

    function sortEventsInEventsColumn() {
        var $eventsColumn = $("#events");
        var $eventList = $eventsColumn.find(".sortable-list");
        var events = $eventList.children(".event").get();

        events.sort(function (a, b) {
            var titleA = $(a).find(".event-title").text();
            var titleB = $(b).find(".event-title").text();

            // Move the "LeadIn" event to the top of the list
            if (titleA === "LeadIn") {
                return -1; // "LeadIn" comes before other events
            } else if (titleB === "LeadIn") {
                return 1; // Other events come after "LeadIn"
            }

            // Sort other events alphabetically
            return titleA.localeCompare(titleB);
        });

        $.each(events, function (index, event) {
            $eventList.append(event);
        });
    }

    function updateTotalDuration($column) {
        var $eventList = $column.find(".sortable-list");
        var totalDuration = moment.duration();

        $eventList.find(".event").each(function () {
            var duration = moment.duration($(this).data("duration"));
            totalDuration.add(duration);
        });

        var formattedTotalDuration = formatDuration(totalDuration);
        $column.find(".column-total-duration").text("Total Duration: " + formattedTotalDuration);
    }

    function formatDuration(duration) {
        var hours = duration.hours();
        var minutes = duration.minutes();
        return hours + "h " + minutes + "m";
    }
});


from JavaScript Drag and Drop - Dragging Copies

handle dates properly with different timezones in a React App

I have a React application that stores and retrieves datetimes, then disables the unavailable dates and times.

When the dates are saved from a calendar into the Database, they are one hour late.

When I retrieve the dates from the database, they are 2 hours late.

Imagine I have chosen this dateTime from the calendar : 24 Aug 2023 at 21:30

In the database it is saved as : 2023-08-24 20:30:00.000

When I retrieve it I get : 2023-08-24T19:30:00.000Z

When I wrap it in a Date object I get: Date Thu Aug 24 2023 20:30:00 GMT+0100 (UTC+01:00).

console.log(res.data[0].dateTime) // 2023-08-24T19:30:00.000Z
console.log(new Date(res.data[0].dateTime)) //Date Thu Aug 24 2023 20:30:00 GMT+0100 (UTC+01:00)

When I display it using date-fns I get 20:30 instead of 21:30

<td className="border border-neutral-500 p-3">
        {format(new Date(res.data[0].dateTime), 'kk:mm')}
</td>

Note that when I deployed this app on LWS which has a different timezone, the dates are 2 hours late, and I cannot update the system timezone.

Please help me understand how I can handle these dates properly across different timezones, so that I always get the same date and time.
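The symptom (one hour lost on save, two after deploying to a host in another timezone) is typical of double conversion: each layer applies its own offset. A robust rule, whatever the stack, is to store a single UTC instant and convert to the viewer's zone only at render time. Here is that principle sketched in Python; the fixed UTC+01:00 offset is an assumption matching the example, and in the React app the equivalent is sending ISO 8601 UTC strings and formatting with a timezone-aware formatter (e.g. date-fns-tz):

```python
from datetime import datetime, timedelta, timezone

# The user's zone, assumed UTC+01:00 to match the example (use a real IANA
# zone via zoneinfo in production so DST is handled for you).
local = timezone(timedelta(hours=1))

picked = datetime(2023, 8, 24, 21, 30, tzinfo=local)   # what the user chose

stored = picked.astimezone(timezone.utc)               # persist this, once
print(stored.isoformat())                              # 2023-08-24T20:30:00+00:00

shown = stored.astimezone(local)                       # convert only for display
print(shown.strftime("%H:%M"))                         # 21:30
```

The key point: only one conversion happens on the way in and one on the way out; the database value never shifts again no matter where the server runs.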



from handle dates properly with different timezones in a React App

Is it possible to create a window from a frame?

I have a frame and I want to make a Toplevel window from it. I want to build a system similar to how a web browser UI works. For example, in Google Chrome you can take a tab and make a new window from it, and you can also add other tabs to that new window. Here is what I have tried to demonstrate this behavior:

import tkinter as tk
from tkinter import ttk

root = tk.Tk()
moving_frame = tk.Frame()
moving_frame.pack()

notebook = ttk.Notebook(moving_frame, width=400, height=700)
notebook.pack()

movingFrame = tk.Frame(notebook, bg='green', width=400, height=700)
movingFrame2 = tk.Frame(notebook, bg='green', width=400, height=700)

lab = tk.Label(movingFrame, text='some text')
ent = tk.Entry(movingFrame)

ent2 = tk.Entry(movingFrame2, width=10, bg='grey30')

lab.pack()
ent.pack()
ent2.pack()

notebook.add(movingFrame, sticky='nesw', text='tab1')
notebook.add(movingFrame2, sticky='nesw', text='tab2')

def window_create(e):
    if notebook.identify(e.x, e.y):
        print('tab_pressed')
        frame_to_window = root.nametowidget(notebook.select())
        root.wm_manage(frame_to_window)

notebook.bind('<ButtonRelease-1>', window_create)
root.mainloop()

Note: This code works by pressing the tab, not by dragging it.

I wonder if I can somehow adjust the window that is created by using wm_manage. This function returns None and does not work quite the way I thought, but close enough to what I need.

If that is not possible to make by using wm_manage, how should I do that?

I thought I could create a custom class, for example from a Frame, but there is a problem: the new window should be identical to the frame it is created from. If something has been changed by the user, e.g. the user added text to an Entry, marked some checkbuttons, or used a Treeview hierarchy like this:

the new window should remember which "folders" are opened (the ones marked with a minus in the picture), which items are currently selected, and so on. And that was only the Treeview; similar things have to be considered for every single widget. Quite a lot of work, so maybe there is a better way to do it.



from Is it possible to create a window from a frame?

Flask server won't start when I include logging.basicConfig(filename='whatever.log')

I have a flask app in which I'm implementing logging into a file. The code is as follows:

from flask import request, Flask
import os
from werkzeug.utils import secure_filename
import logging

app = Flask(__name__)

logging.basicConfig(filename='fileStorage.log', level=logging.DEBUG)                                                                                                                                       

print('helloworld')

When I enter >flask run I get the following response:

* Serving Flask app "PostData.py"
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
helloworld

For some reason, the text * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit) doesn't appear. It will appear, however, if I don't set the 'filename' parameter in logging.basicConfig

What am I missing here?
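A likely explanation (worth verifying): the server probably does start, and the banner is simply being written to the log file. logging.basicConfig(filename=...) attaches a FileHandler to the root logger, and the Flask dev server logs its "* Running on ..." line through the werkzeug logger, which propagates to that root handler. A stdlib-only sketch of the effect, no Flask required:

```python
import logging
import os
import tempfile

# basicConfig(filename=...) installs a FileHandler on the root logger, so
# messages that would otherwise reach the console go to the file instead.
# The path here is a throwaway stand-in for fileStorage.log.
log_path = os.path.join(tempfile.gettempdir(), "fileStorage_demo.log")
logging.basicConfig(filename=log_path, level=logging.DEBUG,
                    force=True)  # force=True (3.8+) resets earlier handlers

# Flask's dev server emits its startup banner through the "werkzeug" logger:
logging.getLogger("werkzeug").info(" * Running on http://127.0.0.1:5000/")

with open(log_path) as f:
    contents = f.read()
print("Running on" in contents)  # True: the banner landed in the file
```

Check fileStorage.log after running flask run; if the banner is there, the app is serving normally. To see log output in both places, configure explicit handlers (a FileHandler plus a StreamHandler) instead of passing filename=.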



from Flask server won't start when I include logging.basicConfig(filename='whatever.log')

Friday 25 August 2023

Library Sentence-Transformers

I am trying to load a model from the sentence-transformers library. When I execute the code, I get this output. As you can see, the progress bar is at 0% and the command has finished. I think the model is not loaded correctly, because I can obtain embeddings from texts but I cannot fine-tune the model. Does anyone know if there is a problem with the servers of this library?



from Library Sentence-Transformers

Error when running py2app init_import_site: Failed to import the site module

I am currently dabbling in Python. I have now created a mini project for testing. For this I installed PySide6 and built a small window:

from PySide6.QtWidgets import QApplication, QMainWindow
from widgets.main import Ui_frmMain

class FrmMain(QMainWindow, Ui_frmMain):
    def __init__(self):
        super().__init__()
        self.setupUi(self)

app = QApplication()
frm_main = FrmMain()
frm_main.show()
app.exec()

Now I wanted to create a macOS app with py2app

python3 setup.py py2app -A

After running it I wanted to test the file and get the following error message

Fatal Python error: init_import_site: Failed to import the site module
Python runtime state: initialized
Traceback (most recent call last):
  File "/Users/xxx/Python3/python-projekte/beginner/dist/main.app/Contents/Resources/site.py", line 182, in <module>
    import sitecustomize  # noqa: F401
  File "/opt/homebrew/Cellar/python@3.10/3.10.9/Frameworks/Python.framework/Versions/3.10/lib/python3.10/sitecustomize.py", line 38, in <module>
    site.PREFIXES[:] = [new_prefix if x == sys.prefix else x for x in site.PREFIXES]
AttributeError: partially initialized module 'site' has no attribute 'PREFIXES' (most likely due to a circular import)

Do you have any idea?

When searching the forum for the cause, I unfortunately had no success
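One thing worth trying: the -A flag builds in alias mode, which keeps the app pointed at the local Homebrew Python, and that interpreter's sitecustomize.py is exactly what appears in the traceback. A full build (no -A) bundles a private interpreter instead, which may sidestep the clash. A hypothetical minimal setup.py for the project above (file and package names are assumptions):

```python
# Hypothetical setup.py sketch; "main.py" and the PySide6 package name are
# assumptions based on the code above, not a known-good configuration.
from setuptools import setup

setup(
    app=["main.py"],
    options={"py2app": {"packages": ["PySide6"]}},
    setup_requires=["py2app"],
)
```

Then build with python3 setup.py py2app (without -A) and test dist/main.app again.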



from Error when running py2app init_import_site: Failed to import the site module

Implementing Name Synchronization and Money Transfers in Transactions Model with Account Number Input

I have the following models in my Django application:

class Transaction(models.Model):
    user = models.ForeignKey(User, on_delete=models.CASCADE)
    account_number = models.IntegerField()
    name = models.CharField(max_length=50)
    amount = models.DecimalField(max_digits=5, decimal_places=2)
    created_on = models.DateTimeField()

class Wallet(models.Model):
    user = models.ForeignKey(User, on_delete=models.CASCADE)
    account_balance = models.DecimalField(max_digits=5, decimal_places=2, default=0)

class AccountNum(models.Model):
    user = models.ForeignKey(User, on_delete=models.CASCADE)
    account_number = models.IntegerField()
    slug = models.SlugField(unique=True)

I want to implement a feature where the name field in the Transactions model gets synchronized with the account owner's name based on the provided account_number input. Additionally, I want to enable money transfers using the current user's wallet and the specified amount in the Transactions model.

To provide some context, I have a post-save signal generate_account_number which generates a random 10-digit account number.

What are some recommended techniques or approaches to achieve this synchronization of the name field with the account owner's name and enable money transfers using the wallet model and specified amount in the Transaction model?

This is what I have tried.

I tried to use django-channels for the synchronization, but I failed to get it working. I tried to copy from other people by Googling how to do it, but I don't think I can do this alone. Here is my attempt:

consumers.py

from channels.generic.websocket import AsyncWebsocketConsumer
import json
from .models import AccountNum

class TransferConsumers(AsyncWebsocketConsumer):
    async def connect(self):
        await self.accept()  # accept the handshake (await self.connect() here would recurse forever)

    async def disconnect(self, close_data):
        pass

    async def receive(self, text_data):
        data = json.loads(text_data)
        account_number = data['account_number']

        try:
            account = AccountNum.objects.get(account_number=account_number)
            access_name = account.user.username
        except AccountNum.DoesNotExist:
            access_name = 'Name Not found'

        # Send the resolved name back to the browser (under a real ASGI server,
        # wrap the ORM lookup above in database_sync_to_async)
        await self.send(text_data=json.dumps({'name': access_name}))

routing.py

from django.urls import re_path
from . consumers import TransferConsumers

websocket_urlpatterns = [
    re_path(r'ws/access/$', TransferConsumers.as_asgi()),
]

The reason I didn't add my template is the JavaScript code required to make this happen. I don't have experience with JavaScript, but I would like to learn more about it.

views.py:

def make_transfer(request):
    if request.method == 'POST':
        account_number = request.POST.get('account_number')
        name = request.POST.get('name')
        amount = request.POST.get('amount')
    return render(request, 'Profile/make_transfer.html')
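Setting the websocket machinery aside, the bookkeeping itself is small. Below is a hedged sketch with plain-Python dictionaries standing in for the Wallet and AccountNum models (the account number and balances are made-up data), showing the checks the transfer needs; in real Django the two balance updates belong inside transaction.atomic() with select_for_update() so a crash can never debit without crediting:

```python
from decimal import Decimal

# Plain-dict stand-ins for Wallet and AccountNum (hypothetical data)
wallets = {"alice": Decimal("100.00"), "bob": Decimal("25.00")}
accounts = {1234567890: "bob"}        # account_number -> owner username

def resolve_name(account_number):
    """What the name-synchronization lookup returns for the name field."""
    return accounts.get(account_number, "Name Not found")

def transfer(sender, account_number, amount):
    """Debit sender, credit the account owner; refuse bad input first."""
    recipient = accounts.get(account_number)
    if recipient is None:
        return "unknown account"
    if wallets[sender] < amount:
        return "insufficient funds"
    # In Django: do both updates inside transaction.atomic() with row locks.
    wallets[sender] -= amount
    wallets[recipient] += amount
    return "ok"

print(resolve_name(1234567890))                         # bob
print(transfer("alice", 1234567890, Decimal("30.00")))  # ok
print(wallets["bob"])                                   # 55.00
```

For the live name lookup, a plain AJAX view that takes the account number and returns the owner's name as JSON would also work and is much simpler than channels; websockets only pay off if you need the server to push updates unprompted.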


from Implementing Name Synchronization and Money Transfers in Transactions Model with Account Number Input

install latest xgboost nightly build

I'd like to install the latest xgboost nightly build. The documentation indicates that the latest builds can be found here: https://s3-us-west-2.amazonaws.com/xgboost-nightly-builds/list.html?prefix=master/

Having found the name of the latest version, you can then use pip in the following manner (example):

!pip install https://s3-us-west-2.amazonaws.com/xgboost-nightly-builds/master/xgboost-2.0.0.dev0%2B15ca12a77ebbaf76515291064c24d8c2268400fd-py3-none-manylinux2014_x86_64.whl

Is there any way to just specify 'latest nightly build' one way or another, instead of having to copy-paste the commit key?
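There is no official "latest" alias as far as I know, but the S3 bucket listing is plain XML, so a small script can pick the newest wheel and hand its URL to pip. A sketch; the XML below is a made-up sample standing in for the real listing response (fetchable with urllib.request from the bucket URL above), and the regex parsing is a simplification:

```python
import re

# Made-up sample of the bucket listing XML; the real one would be fetched
# from https://s3-us-west-2.amazonaws.com/xgboost-nightly-builds/?prefix=master/
listing = """<ListBucketResult>
<Contents><Key>master/xgboost-2.0.0.dev0%2Baaa-py3-none-manylinux2014_x86_64.whl</Key>
<LastModified>2023-08-20T01:00:00.000Z</LastModified></Contents>
<Contents><Key>master/xgboost-2.0.0.dev0%2Bbbb-py3-none-manylinux2014_x86_64.whl</Key>
<LastModified>2023-08-24T01:00:00.000Z</LastModified></Contents>
</ListBucketResult>"""

pairs = re.findall(r"<Key>(.*?)</Key>\s*<LastModified>(.*?)</LastModified>",
                   listing)
wheels = [(ts, key) for key, ts in pairs if key.endswith(".whl")]
latest = max(wheels)[1]               # ISO timestamps sort lexicographically
print("https://s3-us-west-2.amazonaws.com/xgboost-nightly-builds/" + latest)
```

The printed URL can then be passed straight to pip install; you would also want to filter the keys for your platform tag (manylinux, win_amd64, ...) before taking the max.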



from install latest xgboost nightly build

Thursday 24 August 2023

Where is the bottleneck in these 10 requests per second with Python Bottle + Javascript fetch?

I am sending 10 HTTP requests per second between Python Bottle server and a browser client with JS:

import bottle, time
app = bottle.Bottle()
@bottle.route('/')
def index():
    return """<script>
var i = 0;
setInterval(() => {
    i += 1;
    let i2 = i;
    console.log("sending request", i2);
    fetch("/data")
        .then((r) => r.text())
        .then((arr) => {
            console.log("finished processing", i2);
        });
}, 100);
</script>"""
@bottle.route('/data')
def data():
    return "abcd"
bottle.run(port=80)

The result is rather poor:

sending request 1
sending request 2
sending request 3
sending request 4
finished processing 1
sending request 5
sending request 6
sending request 7
finished processing 2
sending request 8
sending request 9
sending request 10
finished processing 3
sending request 11
sending request 12

Why does it fail to process 10 requests per second (on an average i5 computer)? Is there a known bottleneck in my code?

Where are the 100 ms lost per request that prevent the program from keeping a normal pace like this?

sending request 1
finished processing 1
sending request 2
finished processing 2
sending request 3
finished processing 3

Notes:

  • Tested with Flask instead of Bottle and the problem is similar

  • Is there a simple way to get this working:

    • without having to monkey-patch the Python stdlib (with from gevent import monkey; monkey.patch_all()),

    • and without using a much more complex setup with Gunicorn or similar (not easy at all on Windows)?
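A plausible culprit: bottle.run() defaults to the single-threaded wsgiref server, so each round-trip queues behind the previous one, which matches the several-requests-in-flight pattern in the log above. A stdlib-only sketch (no Bottle; the 100 ms of per-request work is an illustrative assumption) showing that a threaded server lets requests overlap instead of serializing:

```python
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(0.1)                       # simulate ~100 ms of work
        body = b"abcd"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):             # silence per-request console noise
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)  # port 0 = free port
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/" % server.server_address[1]

results = []
def fetch():
    with urllib.request.urlopen(url) as r:
        results.append(r.status)

start = time.time()
threads = [threading.Thread(target=fetch) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start
server.shutdown()

print(sorted(set(results)))   # [200]
print(elapsed < 0.5)          # True: five 0.1 s requests overlap, not queue
```

If this is the cause, passing a multithreaded backend to bottle.run (e.g. server='cheroot' with the cheroot package installed, if your Bottle version supports that adapter) should fix the pacing without gevent monkey-patching or Gunicorn.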



from Where is the bottleneck in these 10 requests per second with Python Bottle + Javascript fetch?

Can I set a specific drive (e.g,"D:\") as a default directory for showOpenFilePicker?

We have a Java (JSP) application and I want (well, the client wants) files saved from the system to be downloaded automatically into the USB drive by default, and the same for files being uploaded to the system.

I know that this is not possible due to security reasons but I recently saw about this File System Access API and I am trying to understand what is possible.

Well, apparently for Downloading we can use

async function fileOpen() {
            const fileHandle = await window.showOpenFilePicker({
                startIn: 'documents'
            });
        }

but in the startIn property whenever I try setting D:\ or anything other than the WellKnownDirectories I get the error:

TypeError: Failed to execute 'showOpenFilePicker' on 'Window': Failed to read the 'startIn' property from 'FilePickerOptions': The provided value 'D:\' is not a valid enum value of type WellKnownDirectory.

So is something like that possible? What about when uploading a file? The file picker that opens from the <input type=file> doesn't support setting any value for the default directory.

Keep in mind that this is a controlled environment and the existence of the D: drive is ensured

Is there maybe any other way, other than JavaScript?



from Can I set a specific drive (e.g,"D:\") as a default directory for showOpenFilePicker?

How to create react UI library having mix of client and server components?

I want to create a simple library that exports some server components and some client components. It should be compatible with Nextjs app router. But when I build, the client components don't work.

I tried building the library with tsup. I can fix the client components issue by adding

 banner: {
     js: `"use client";`,
 },

to the tsup.config.ts. However, once I do that, server components break.



from How to create react UI library having mix of client and server components?

How Can I Manually Revalidate a Dynamic Next.js Route, From an API Route?

I have a Next.js application with a dynamic page for displaying resources: /resource/[id]. Whenever a user edits (say) resource #5, I want to regenerate the page /resource/5 in Next's cache.

I have an API route (in my /pages directory) that handles the resource editing. From what I've read, I should be able to make it refresh the display route by doing:

response.revalidate(`/resource/${id}`);

However, that doesn't work; I get the error:

Error: Failed to revalidate /resource/2153/: Invalid response 200
    at revalidate (/home/jeremy/GoblinCrafted/next/node_modules/next/dist/server/api-utils/node.js:388:15)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
- error unhandledRejection: Error: Failed to revalidate /resource/2153/: Invalid response 200
    at revalidate (/home/jeremy/GoblinCrafted/next/node_modules/next/dist/server/api-utils/node.js:388:15)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
  digest: undefined

I guess Next is trying to revalidate through an HTTP request, but the revalidation endpoint moved? I'm not exactly sure why this fails.

EDIT: I dug into the Next source code, and it seems like revalidate is just making a mock request to the route that is being revalidated. If you look at node_modules/next/dist/server/router.js you'll see:

async revalidate({ urlPath , revalidateHeaders , opts  }) {
        const mocked = (0, _mockrequest.createRequestResponseMocks)({
            url: urlPath,
            headers: revalidateHeaders
        });
        const handler = this.getRequestHandler();
        await handler(new _node.NodeNextRequest(mocked.req), new _node.NodeNextResponse(mocked.res));
        await mocked.res.hasStreamed;
        if (mocked.res.getHeader("x-nextjs-cache") !== "REVALIDATED" && !(mocked.res.statusCode === 404 && opts.unstable_onlyGenerated)) {
            throw new Error(`Invalid response ${mocked.res.statusCode}`);
        }

My error is coming from that throw at the end ... but that doesn't make any sense to me, because when I log urlPath, it's just the path I'm trying to refresh (eg. /resource/5). When I try to hit that path with a GET request (either in my browser or through Postman) I don't get a redirect: I get a 200.

This led me to try adding a / to the end of my path:

response.revalidate(`/resource/${id}/`);

That got me a similar error, only this time it was for a 200 instead of a 308:

Error: Invalid response 200

So, it seems my status code doesn't actually matter: it's the mocked.res.getHeader("x-nextjs-cache") !== "REVALIDATED" part that's the problem. However, digging through the code, I only found one place that sets that header. It happens within a function within a function within a renderToResponseWithComponentsImpl function in node_modules/next/dist/esm/server/base-server.js:

if (isSSG && !this.minimalMode) {
        // set x-nextjs-cache header to match the header
        // we set for the image-optimizer
        res.setHeader("x-nextjs-cache", isOnDemandRevalidate ? "REVALIDATED" : cacheEntry.isMiss ? "MISS" : cacheEntry.isStale ? "STALE" : "HIT");
 }

... but when I add console.log statements it seems that code isn't being reached ... even though I see lots of other requests reaching it, and isSSG && !this.minimalMode is true for all of them.

So, in short, somehow the mock request revalidate makes isn't triggering the setting of that header, which then makes it fail ... but I have no clue why it's not getting to that header-setting code, because it's so deeply nested in the router code.

END EDIT

I also tried using revalidatePath, from next/cache:

revalidatePath(`/resource/[id]`);

but that also gives an error:

revalidatePath(`/resource/[id]`);

Error: Invariant: static generation store missing in revalidateTag /resource/[id]
    at revalidateTag (/home/me/project/next/node_modules/next/dist/server/web/spec-extension/revalidate-tag.js:15:15)
    at revalidatePath (/home/me/project/next/node_modules/next/dist/server/web/spec-extension/revalidate-path.js:13:45)
    at handler (webpack-internal:///(api)/./pages/api/resource.js:88:67)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)

I think this is because revalidatePath is only intended to be used in the /app directory?

Finally, I found a reference to a response.unstable_revalidate method, which seemed to be designed for revalidating dynamic paths:

    response.unstable_revalidate(`/resource/${id}/`);

... but when I try to use it, it isn't there on the response:

TypeError: response.unstable_revalidate is not a function
    at handler (webpack-internal:///(api)/./pages/api/resource.js:88:18)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)


from How Can I Manually Revalidate a Dynamic Next.js Route, From an API Route?

Error retrieving image from URL in Google Apps Script but only on some spreadsheets

I have Google Sheets spreadsheets that perform a lot of (small file-size) image retrieval using SpreadsheetApp.newCellImage() via code similar to the below:

function myFunction() {
  const sheet = SpreadsheetApp.getActiveSpreadsheet().getActiveSheet();
  const image = SpreadsheetApp.newCellImage()
    .setSourceUrl("https://static-00.iconduck.com/assets.00/perspective-dice-random-icon-469x512-mm6xb9so.png")
    .setAltTextTitle("Image")
    .build();

  sheet.getRange("A1").setValue(image);
}

Just recently I noticed that these spreadsheets would get an error when trying to apply the image to a cell:

Exception: Error retrieving image from URL or bad URL

This would happen with ANY image I try and on ANY cell. It would fail at the setValue(image) call.

Here's the kicker. If I tried running the exact function above in a NEW spreadsheet, it would work with no issues. But running it in my existing spreadsheets would fail.

If I try and duplicate my existing spreadsheet (File > Make a copy) it would also fail on the duplicated spreadsheet. I also looked at usage limits for Google Sheets but didn't find anything relevant to my issue.

Video reproduction of the issue: https://imgur.com/a/qJfCC4N

Note how I run the above function in a completely new spreadsheet and it works, but I copy-paste it into an existing spreadsheet that is causing errors and it fails on that spreadsheet.

Any ideas?



from Error retrieving image from URL in Google Apps Script but only on some spreadsheets

Wednesday 23 August 2023

Microphone Permission of Flask app in python-for-android and buildozer

I'm converting a Flask app combined with JavaScript SpeechRecognition into an APK (I got the HTML here). I tried to add the microphone permission by adding:

Python:

from android.permissions import Permission, request_permissions 
request_permissions([Permission.INTERNET,Permission.MODIFY_AUDIO_SETTINGS,Permission.RECORD_AUDIO])

Buildozer.spec

android.permissions = INTERNET,RECORD_AUDIO,MODIFY_AUDIO_SETTINGS
android.api = 30

But when I run the APK, I still get a not-allowed error in JavaScript.

Thanks for any help!



from Microphone Permission of Flask app in python-for-android and buildozer

Loop over audio frames and store in a queue/container

I have a requirement where I need to build something like a streaming program: a video divided into multiple chunks is received by my program at some regular interval, say every 2 s, and each video, which contains audio as well, is 10 s long. Whenever I receive the video I add it to a global or shared container, from which I keep reading and playing the video in Python continuously, without the user experiencing any lag.

The program is not built yet and I am still figuring out the bits and pieces. In the code below, put together from multiple answers and blog posts, I have tried something similar for a single video: looping over all the video/audio frames and storing them in a container. However, this approach using ffpyplayer does not work, and for some reason there seems to be no way to step through audio frames one by one the way we do for video frames.

How can I achieve this? In the program below, how can I store both video and audio frames first and then go over them and play/display them?

from ffpyplayer.player import MediaPlayer
from ffpyplayer.pic import Image
import cv2

video_path  = "output_1.mp4"
player = MediaPlayer(video_path)

# Initialize a list to store frames and audio frames
media_queue = []  # A temporary queue to store frames and audio frames
frame_index = 0   # To keep track of the frame index

cap = cv2.VideoCapture(video_path)

while True:
    ret, frame = cap.read()
    audio_frame, val_audio = player.get_frame()   # Get the next audio frame

    if ret != 'eof' and frame is not None:
        img = frame 
        media_queue.append(('video', img))
    if val_audio != 'eof' and audio_frame is not None:
        audio = audio_frame[0] 
        media_queue.append(('audio', audio))

    if ret == 'eof' and val_audio == 'eof':
        break

    frame_index += 1

print(f"Total frames and audio frames processed: {frame_index}")

# Play Frames and Audio from Queue
for media_type, media_data in media_queue:
    if media_type == 'video':
        img_data = Image(media_data[0], media_data[1])
        cv2.imshow("Frame", img_data.image)  # Display the frame
    elif media_type == 'audio':
        player.set_volume(1.0)
        player.set_audio_frame(media_data)

    if cv2.waitKey(25) & 0xFF == ord("q"):  
        break

cv2.destroyAllWindows() 

I know the above program is wrong. Is there any way to stream both video and audio by storing them in some container first and then looping through it? Even if someone could help me with an algorithm and an explanation of what I should look at, I can try to make it work.
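Whatever decoding library ends up producing the chunks, the "container" part can be a queue.Queue shared between a producer thread (the network receiver or decoder) and a consumer (the player loop), which is the shape the ffpyplayer code above is reaching for. A stdlib-only sketch of that shape, with dummy placeholder strings instead of real frames:

```python
import queue
import threading
import time

media_queue = queue.Queue(maxsize=64)   # bounded: back-pressure on producer
SENTINEL = ("eof", None)

def producer():
    # Stand-in for the thread receiving/decoding chunks over time.
    for i in range(10):
        kind = "video" if i % 2 == 0 else "audio"
        media_queue.put((kind, "chunk-%d" % i))
        time.sleep(0.01)                # pretend chunks arrive gradually
    media_queue.put(SENTINEL)           # signal end of stream

played = []
threading.Thread(target=producer, daemon=True).start()
while True:
    kind, payload = media_queue.get()   # blocks until the next chunk is ready
    if kind == "eof":
        break
    played.append((kind, payload))      # here you would display/play it

print(len(played))      # 10
print(played[0])        # ('video', 'chunk-0')
```

For real playback you would likely keep two queues (one video, one audio) and pace the consumer by each frame's presentation timestamp rather than a fixed cv2.waitKey delay, so audio and video stay in sync.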



from Loop over audio frames and store in a queue/container