Wednesday, 1 March 2023

Unable to log in to a website using the requests module

I'm trying to log in to a website using the requests module. I think I've replicated the manual login steps in the script, based on what I see in dev tools while logging in to the site manually. However, when I run the script and inspect the response content, I see this line: There was an unexpected error.

I've created a free account there for the purpose of testing only. The login details are hardcoded within the parameters.

import requests
from bs4 import BeautifulSoup

link = 'https://www.apartments.com/customers/login'
login_url = 'https://auth.apartments.com/login?{}'

# The form posts the anti-CSRF token as 'idsrv.xsrf' (leading 'i'),
# so the payload key has to match that name exactly.
params = {
    'idsrv.xsrf': '',
    'sessionId': '',
    'username': 'shahin.iqbal80@gmail.com',
    'password': 'SShift1234567$'
}

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36',
}

headers_post = {
    'origin': 'https://auth.apartments.com',
    'referer': '',
    'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
    'accept-encoding': 'gzip, deflate, br',
    'accept-language': 'en-US,en;q=0.9,bn;q=0.8',
    'x-requested-with': 'XMLHttpRequest'
}

with requests.Session() as s:
    s.headers.update(headers)
    resp = s.get(link)
    soup = BeautifulSoup(resp.text, "lxml")
    # The sign-in form is served inside an iframe; fetch its source document.
    res = s.get(soup.select_one("#auth-signin-iframe")['src'])
    soup = BeautifulSoup(res.text, "lxml")
    # Rebuild the POST URL from the query string in the form's action attribute.
    post_url = login_url.format(soup.select_one("#signinform")['action'].split("/login?")[1])
    headers_post['referer'] = post_url
    s.headers.update(headers_post)
    # The anti-CSRF token and session id are hidden inputs on the form.
    params['idsrv.xsrf'] = soup.select_one("input[name='idsrv.xsrf']")['value']
    params['sessionId'] = soup.select_one("input[id='sessionId']")['value']
    resp = s.post(post_url, data=params)
    print(resp.status_code)
    print(resp.content)
    print(resp.url)

How can I make the login successful using the requests module?
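One thing worth checking before suspecting bot detection is whether the POST body mirrors the login form exactly: the hidden anti-CSRF input is named idsrv.xsrf (with a leading "i"), and any mismatch between the form's field names and the payload keys will make the server reject the login. A sketch of building the payload from whatever inputs the form actually contains, rather than hardcoding the keys (the HTML below is a simplified stand-in for the real sign-in form, not the site's actual markup):

```python
from bs4 import BeautifulSoup

# Simplified stand-in for the sign-in form served inside the iframe (assumed markup)
html = """
<form id="signinform" action="/login?signin=abc123">
  <input type="hidden" name="idsrv.xsrf" value="token-xyz">
  <input type="hidden" name="sessionId" id="sessionId" value="sess-42">
  <input type="text" name="username">
  <input type="password" name="password">
</form>
"""

soup = BeautifulSoup(html, "html.parser")
form = soup.select_one("#signinform")

# Copy every named input so hidden anti-CSRF fields keep their exact names/values
payload = {inp["name"]: inp.get("value", "") for inp in form.select("input[name]")}
payload["username"] = "user@example.com"   # placeholder credentials
payload["password"] = "secret"

print(sorted(payload))
# → ['idsrv.xsrf', 'password', 'sessionId', 'username']
```

Built this way, the payload stays correct even if the site renames or adds hidden fields.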



from Unable to log in to a website using the requests module

Bootloader doesn't change reset vector

I am using the MPLAB IDE, the MPLAB XC8 v6 compiler, and a PIC18F25Q10. We are working on a bootloader project. In the bootloader phase, it is expected to update the application and jump to a given address (for example, code offset = 0x1000). However, we are having problems with the reset and interrupt vectors: we cannot move them to the address we want.

We did some research on the forums: the low- and high-priority interrupt vector addresses can be changed with "IVTBASE", but that feature is not available on the processor we use (PIC18F25Q10). Another approach is to try the following code:

#define PROG_START  0x1000
asm("PSECT reset_vector,class=CODE,delta=2,abs");
asm("ORG 0x00");
asm("GOTO " ___mkstr(PROG_START) " + 0x00");
asm("PSECT HiVector,class=CODE,delta=2,abs");
asm("ORG 0x08");
asm("GOTO " ___mkstr(PROG_START) " + 0x08");
asm("PSECT LoVector,class=CODE,delta=2,abs");
asm("ORG 0x18");
asm("GOTO " ___mkstr(PROG_START) " + 0x18");

When we load this code into the processor via ICD4, it works correctly: the vectors are moved to address 0x1000 as intended. However, it does not work when we flash it through the bootloader application. Our hex output is shown in the photo below.

[image: hex file output]

We run into a problem when we write this hex file with the bootloader application. For example, in the first three lines of the hex file, the reset vector address is 0x0000, the high-priority vector address is 0x0008, and the low-priority vector address is 0x0018. Written that way, these vector addresses conflict with the bootloader application's own addresses. The bootloader application writes the records to flash.

How can we move the reset and interrupt vectors correctly when flashing through the bootloader application? Thank you.
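One thing worth checking on the host side is how the bootloader tool interprets the Intel HEX records: each record carries its own target address, so if the application hex still contains data records at 0x0000/0x0008/0x0018, they will be written right over the bootloader's own vectors unless they are filtered or relocated first. A minimal Python sketch of the record handling involved (the function names, APP_BASE value, and the filtering policy are assumptions for illustration, not your tool's actual code; it also ignores type-04 extended-address records, which a real tool must track):

```python
def parse_hex_record(line):
    """Parse one Intel HEX record ':LLAAAATT<data>CC' into (addr, rtype, data)."""
    raw = bytes.fromhex(line.strip()[1:])   # drop the leading ':'
    count, rtype = raw[0], raw[3]
    addr = int.from_bytes(raw[1:3], "big")
    if sum(raw) & 0xFF:                     # all bytes incl. checksum sum to 0 mod 256
        raise ValueError("bad checksum: " + line)
    return addr, rtype, raw[4:4 + count]

APP_BASE = 0x1000  # the bootloader's application offset (assumed)

def keep_record(line):
    """Skip data records that would land inside the bootloader's own flash."""
    addr, rtype, _ = parse_hex_record(line)
    return rtype != 0x00 or addr >= APP_BASE

# A 2-byte data record at address 0x0000 (would clobber the reset vector):
print(keep_record(":020000001234B8"))   # → False
# The same data placed at 0x1000 is safe to write:
print(keep_record(":021000001234A8"))   # → True
```

Filtering like this keeps the bootloader's vector redirection (the fixed GOTOs at 0x00/0x08/0x18) intact, while the application's own vectors live at APP_BASE + offset.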



from Bootloader doesn't change reset vector

Flatten and extract keywords from json field in csv

I have a column named diff in my df, whose values are JSON-like strings of the form:

{'info': {'version': {'from': '2.0.0', 'to': '2.3.4'}}, 'paths': {'modified': {'/dummy': {'operations': {'added': ['PUT']}}}}, 'endpoints': {'added': [{'method': 'PUT', 'path': '/dummy'}]}, 'components': {'schemas': {'added': ['ObjectOfObjects', 'inline_object', 'ObjectOfObjects_inner']}, 'requestBodies': {'added': ['inline_object', 'nested_response']}}}

Here info, paths, endpoints and components are the top-level keys, i.e. the first category. Below each of these is the next level of nesting: for example, info has fields like title and description, components has fields like schemas, and so on.

The df column looks something like this: [image: screenshot of the diff column]

I want to flatten the JSON, i.e. split out all the parameters, which should give me around 5-6 new columns (one per top-level key that changes). I don't want to keep the change values you can see in the pic (the from: ... to: ... parts); I only want the field, sub-field, and sub-sub-field that changed,

so that the output looks something like this:

info      paths      endpoints  components
version    modified   added      schemas:added
                                 requestBodies:added

I looked into json_normalize, flatten, and jsonpath, but none of these works for this use case; each yields output completely different from what I want. It would be really great if someone could help me with this! I seem to be a bit stuck.
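One way to get those key chains is a small recursive walk that records the path of keys down to each change and drops the from/to values themselves. This is a sketch under one assumed reading of the desired output: a "change leaf" is either a dict containing only from/to keys or a list of additions, and each chain is joined with ":" after the top-level key. You may want to truncate chains further (e.g. keep only the first sub-key) to match your exact table.

```python
# The sample diff value from the question
diff = {'info': {'version': {'from': '2.0.0', 'to': '2.3.4'}},
        'paths': {'modified': {'/dummy': {'operations': {'added': ['PUT']}}}},
        'endpoints': {'added': [{'method': 'PUT', 'path': '/dummy'}]},
        'components': {'schemas': {'added': ['ObjectOfObjects', 'inline_object',
                                             'ObjectOfObjects_inner']},
                       'requestBodies': {'added': ['inline_object', 'nested_response']}}}

def changed_paths(node, prefix=()):
    """Yield the chain of keys leading to each change, dropping from/to values."""
    if isinstance(node, dict) and not set(node) <= {"from", "to"}:
        for key, child in node.items():
            yield from changed_paths(child, prefix + (key,))
    else:
        yield prefix   # change leaf: a from/to dict, or a list of added items

# Group the chains by top-level key -> one prospective column per key
flat = {}
for path in changed_paths(diff):
    flat.setdefault(path[0], []).append(":".join(path[1:]))

print(flat)
```

With the sample above this gives info → version, endpoints → added, and components → schemas:added / requestBodies:added. The paths chain comes out longer (modified:/dummy:operations:added) because it nests deeper; cut it at the first element if you only want modified. Applying the function per row (after json.loads or ast.literal_eval on the string) gives one dict per row to expand into columns.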



from Flatten and extract keywords from json field in csv