Sunday, May 16, 2021

Recent Questions - Stack Overflow



Why aren't my bootstrap popovers working correctly?

Posted: 16 May 2021 08:31 AM PDT

Consider the example below. Although Bootstrap popovers seem like they're working, they fail to work correctly. I've found numerous CodePen examples of working popovers, but when I paste them into my page they fail.

I also made very sure to call jQuery before Bootstrap, and I've tried using the Bootstrap bundle as well as the individual scripts.

The big issue is that the "Find Google Tag Manager IDs" button should trigger a popover (which it does), but the popover content is blank!

Next, the "Find facebook pixel ids" btn should trigger a popover on hover, but it only works on click instead? And also, the content is empty .

I feel like I'm missing something obvious here, any assistance would be helpful.

<!DOCTYPE html>
<html lang="en">
<head>
    <!-- begin libraries -->
    <!-- jQuery-->
    <script src="https://code.jquery.com/jquery-3.6.0.min.js" integrity="sha256-/xUj+3OJU5yExlq6GSYGSHk7tPXikynS7ogEvDej/m4=" crossorigin="anonymous"></script>
    <!-- Bootstrap CSS -->
    <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.0.1/dist/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-+0n0xVW2eSR5OomGNYDnhzAbDsOXxcvSN1TPprVMTNDbiYZCxYbOOl7+AMvyTG2x" crossorigin="anonymous">
    <!-- Bootstrap Popper -->
    <script src="https://cdn.jsdelivr.net/npm/@popperjs/core@2.9.2/dist/umd/popper.min.js" integrity="sha384-IQsoLXl5PILFhosVNubq5LC7Qb9DXgDA9i+tQ8Zj3iwWAwPtgFTxbJ8NT4GN1R8p" crossorigin="anonymous"></script>
    <!-- Bootstrap main -->
    <script src="https://cdn.jsdelivr.net/npm/bootstrap@5.0.1/dist/js/bootstrap.min.js" integrity="sha384-Atwg2Pkwv9vp0ygtn1JAojH0nYbwNJLPhwyoVbhoPwBhjQPR5VtM2+xf0Uwh9KtT" crossorigin="anonymous"></script>
    <!-- google fonts -->
    <link rel="preconnect" href="https://fonts.gstatic.com">
    <link href="https://fonts.googleapis.com/css2?family=Open+Sans:ital,wght@0,300;0,400;0,600;0,700;0,800;1,300;1,400;1,600;1,700;1,800&family=Roboto&display=swap" rel="stylesheet">
    <!-- End libraries -->
    <!--start CSS -->
    <style>
        hr {
            color:white;
        }
        body {
            background-color: #5e007a !important;
            font-family: "Open Sans" !Important;
        }
    </style>
    <!-- end CSS-->
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>The Auditor</title>
</head>
<body>
<div id="bodyMain" class="container">
    <div class="row">
        <div class="card bg-dark shadow-lg mb-5 mt-3 border-2 border-secondary">
            <div class="card-header bg-primary m-3 shadow-lg border border-warning">
                <h1 class="text-black-50 text-center fw-bolder">Welcome to the Auditor</h1>
            </div>
            <div class="card-body">
                <h3 class="text-primary text-center font-italic">Please allow us to try to ease your audit burdens.</h3>
            </div>
        </div>
    </div>
    <div class="row">
        <div class="card bg-dark shadow-lg border-2 border-secondary">
            <div class="card-header bg-primary m-3 shadow-lg border border-warning">
                <h3 class="text-black-50 text-center fw-bolder">Let's Get You Going</h3>
            </div>
            <div class="card-body">
                <form>
                    <div class="row mt-5">
                        <p class="text-success fw-bold text-center">Tell me, How can I help you today?</p>
                        <button type="button" class="btn btn-block btn-outline-success mb-2" data-toggle="popover" title="Click Popover title" data-content="This content should display when you click the btn" data-container="body">Find Google Tag Manager IDs</button> <!-- although the popover works, the content doesn't show, only the title.-->
                        <button type="button" class="btn btn-block btn-outline-success mb-2" data-toggle="popover" title=" Hover Popover title" data-content="This content should display when you hover over the btn" data-trigger="hover">Find Facebook Pixel IDs</button>
                        <button type="button" class="btn btn-block btn-outline-success mb-2">Retrieve a sitemap</button>
                    </div> <!-- end row-->
                    <hr>
                    <p class="text-success fw-bold text-center">Or, are you looking to manually audit some sites? Which application would you like to use?</p>
                    <div class="row">
                        <div class="btn-group" data-toggle="buttons">
                            <label class="btn btn-primary p-3">
                                <input type="radio" name="options" id="option1"> CMS
                            </label>
                            <label class="btn btn-primary p-3">
                                <input type="radio" name="options" id="option2"> Composer
                            </label>
                        </div>
                    </div> <!-- end row-->
                    <div class="row">
                        <p class="text-success fw-bold text-center mt-4">Would you like us to open to a specific page?</p>
                        <input placeholder="/new-inventory/index.htm">
                    </div><!-- end row-->
                </form>
            </div><!-- end card-body-->
        </div> <!-- end card -->
    </div> <!-- end row-->
</div> <!-- end main body container -->
<script>
    $(function () {
        $('[data-toggle="popover"]').popover();
        console.log($.fn.tooltip.Constructor.VERSION) //outputs to the console - proof that jQuery is running
    })
</script>
</body>
</html>

Problem with sending multipart output by RESTEasy to Spring Boot app

Posted: 16 May 2021 08:31 AM PDT

I have a problem sending a file via RESTEasy to a Spring Boot app. The file is sent between the apps, but on the Spring Boot side it is not received. I get this error: .w.s.m.s.DefaultHandlerExceptionResolver : Resolved [org.springframework.web.multipart.support.MissingServletRequestPartException: Required request part 'file' is not present]. In debug I can see the file in the request on the RESTEasy side. I tried different MediaTypes. Any idea how I can send a file between these apps?

JavaEE with RESTEASY

ResteasyClient client = (ResteasyClient) ClientBuilder.newClient();
ResteasyWebTarget target = client.target(FILE_CONVERT_URL);
client.register(new LoggingFilter());

var mdo = new MultipartFormDataOutput();
mdo.addFormData("file", file,
        MediaType.MULTIPART_FORM_DATA_TYPE);
return target.request().post(Entity.entity(mdo, MediaType.MULTIPART_FORM_DATA_TYPE),
        ConversionResult.class);

curl to apps:

curl --location --request POST 'http://localhost:8085/api/convert' \
  --header 'accept: */*' \
  --header 'Content-Type: multipart/form-data' \
  --form 'file=@"/C:/source/file.json"' \
  --form 'fileName="text.txt"'

endpoint in spring boot:

@PostMapping("/convert")
public FileConverterResponse convert(
        @RequestPart(value = "file") MultipartFile file,
        @RequestParam("fileName") String destinationFileName)
        throws IOException {
    return new FileConverterResponse(
            fileConverterProcessor.process(file, destinationFileName));
}

How to create a standalone service in Android?

Posted: 16 May 2021 08:31 AM PDT

I want to create a standalone service on Android which will be idle by default and can be started/stopped from my other app. Is this possible on Android? Any clues on what to search for would be helpful. Thanks in advance.

Pgadmin server not contacted

Posted: 16 May 2021 08:30 AM PDT

I just installed PostgreSQL 13 on my Windows 10 PC, and when I opened pgAdmin 4 I got this error:

image of error

Please help me to solve this problem.

Save state as a python file or as a text file?

Posted: 16 May 2021 08:30 AM PDT

I'm running a simulation and want to save the state of the variables every so often. I wondered whether there are good reasons to prefer either writing the values to a text file and evaluating them when extracting, or saving them to a Python file and (maybe dynamically) importing it when needed.
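For illustration only (the function names, file name, and state variables below are made up, not from the question): one common argument for the text-file route is that a structured format such as JSON can be read back with json.load, which avoids the eval/import machinery a generated Python file would need.

```python
import json

def save_state(state, path):
    # Write the simulation variables to a JSON text file.
    with open(path, "w") as f:
        json.dump(state, f)

def load_state(path):
    # Read the variables back; no eval() or import machinery needed.
    with open(path) as f:
        return json.load(f)

# Hypothetical simulation state: a step counter, a scalar, and some positions.
state = {"step": 1200, "temperature": 3.5, "positions": [[0.0, 1.0], [2.0, 3.0]]}
save_state(state, "checkpoint.json")
restored = load_state("checkpoint.json")
print(restored == state)  # True for JSON-serializable state
```

The round-trip is lossless for JSON-serializable values (dicts with string keys, lists, numbers, strings), which covers many simulation snapshots; pickle or numpy.save would be alternatives for richer objects.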

PDFBox - Combining PDF/UA into one large file - PDF/UA tags get nested

Posted: 16 May 2021 08:30 AM PDT

When I use PDFBox and the PDFMergeUtility with either appendDocument or mergeDocuments, I noticed that the tag structure of each individual document becomes nested. If I merge 7 documents, it appears the tag is nested under another 7 times. Is this by design? Is there a way to merge the documents so the tags are flattened and not nested? The reason for merging the documents is to eventually load the result into a content-management system using PPD, and the content-management system will allow the user to retrieve each document separately. The JAWS reader can still read the document, but I noticed the document tags are nested heavily, which could cause a performance issue.

Print Model Hyperparameters in PyTorch-lightning

Posted: 16 May 2021 08:30 AM PDT

How can I print model hyperparameters in PyTorch-lightning?

model.hparams doesn't return weights and biases.

When I checked the docs, I found only a way to save hparams args.

Is there something like model.get_weights() in TensorFlow?

This is my Model class:

import torch
from torch import nn
import pytorch_lightning as pl
import numpy as np

class Model(pl.LightningModule):
    def __init__(self, model):
        super().__init__()
        self.model = model
    ...

NN = nn.Sequential(
        nn.Linear(1, 10),
        nn.Tanh(),
        nn.Linear(10, 1))

model = Model(NN)
trainer = pl.Trainer(max_epochs=10, weights_summary='full')
trainer.fit(model, train_dataloader=trainloader)

Migrate Puppet provisioning into Ansible

Posted: 16 May 2021 08:30 AM PDT

So currently my company is still using Puppet for server provisioning, either on AWS EC2 instances or on-premise servers. For on-premise, every time we provision a new VM using Foreman, it automatically triggers a curl to the Puppet server and subsequently initiates all the basic configuration, e.g. SSH config, SSSD, monitoring-agent installation, setting up SSH keys for admins, etc. It is much the same for AWS, except that it's triggered as part of the Terraform EC2 instance metadata.

Right now, I am looking to migrate all the Puppet provisioning into Ansible, but I am unsure what the best practices for first-time provisioning would be. With Puppet, once the agent is installed, it's quite straightforward: everything to install is decided automatically by the Puppet master depending on the hierarchy configuration, so we can even use a predefined hierarchy based on hostname, site, OS, and so on.

Unlike Puppet, Ansible needs an SSH key or password in order to run provisioning on the new server. That shouldn't be a big issue, since I can just insert the SSH key during provisioning. Even so, I am still having a hard time deciding how to migrate all the different hierarchy-based provisioning supported by Puppet over to Ansible. Would it be better to separate it into multiple playbooks, or to merge all the scripts into a single repo and run everything during provisioning? I have looked into Ansible AWX/Tower as well, but I'm not sure whether it would really solve this issue.

I would appreciate any advice or suggestions from folks here who have more experience with Ansible, or ideally expertise in both Puppet and Ansible. Thanks in advance.

How to connect corners detected by Harris to build a graph

Posted: 16 May 2021 08:30 AM PDT

I have used the Harris corner detection algorithm to find the corners in an image. Now I want to connect specific corners with straight lines to build a graph. I have also used Canny edge detection, and the corners should be connected based on the paths defined by the edges. Any idea how I should connect these corners?
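As a minimal sketch of one possible approach (all names, the sampling resolution, and the coverage threshold below are my own assumptions, not from the question): treat each detected corner as a graph node, and link two corners only when the straight segment between them mostly lies on the Canny edge map.

```python
import numpy as np

def sample_segment(p, q, n=50):
    # Integer pixel coordinates sampled along the straight segment p -> q.
    t = np.linspace(0.0, 1.0, n)
    return np.round(np.outer(1 - t, p) + np.outer(t, q)).astype(int)

def connect_corners(corners, edge_map, coverage=0.9):
    # Adjacency list: corners i and j are linked when at least `coverage`
    # of the sampled pixels between them are edge pixels.
    graph = {i: [] for i in range(len(corners))}
    for i in range(len(corners)):
        for j in range(i + 1, len(corners)):
            pts = sample_segment(np.array(corners[i]), np.array(corners[j]))
            if edge_map[pts[:, 0], pts[:, 1]].mean() >= coverage:
                graph[i].append(j)
                graph[j].append(i)
    return graph

# Toy example: a horizontal Canny edge joining corners at (5, 2) and (5, 12);
# the third "corner" at (0, 0) sits on no edge path.
edges = np.zeros((10, 20), dtype=float)
edges[5, 2:13] = 1.0
corners = [(5, 2), (5, 12), (0, 0)]
print(connect_corners(corners, edges))  # {0: [1], 1: [0], 2: []}
```

For curved edge paths, a shortest-path search over the edge pixels between corner pairs would be more faithful than this straight-segment test.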

How to use gRPC with golang echo framework?

Posted: 16 May 2021 08:30 AM PDT

I am trying to perform inter-service communication between microservices. I followed the documentation and it was successful. Then I tried to establish the same with the Echo framework, but that gives me an invalid memory address when trying to call the gRPC-registered method.

rpc error: code = Unavailable desc = connection closed
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x7eacad]

Is it possible to get databases on pgadmin4 after a windows reset (keep files option)?

Posted: 16 May 2021 08:30 AM PDT

Is it possible to get my databases back in pgAdmin 4 after a Windows reset (keep-files option)? I do not have files from pg_dump. The data directory was the default one. Thank you.

Import .bak file into MySQL

Posted: 16 May 2021 08:32 AM PDT

I am facing a problem importing a .bak file into MySQL. I don't know the process; can anyone guide me?

How to store the table contents in an array every time the for loop executes in protractor?

Posted: 16 May 2021 08:30 AM PDT

let datesInColumns: string;

for (let i = 1; i < totalRows; i++) {
  datesInColumns = await element.all(by.tagName('tr')).get(i).all(by.tagName("td")).get(0).getText();
}

console.log(datesInColumns, 'list of data taken from the table');

So here is what I am trying with the code.

I want to get the column information from the table. Every time the for loop runs, the newly retrieved data is stored in datesInColumns. Right now this is how it looks.

Consider that the table contents are 1, 2, 3, 4, so I am expecting datesInColumns to store all the values, i.e. 1, 2, 3, 4. Right now it only stores the data from the most recent loop iteration, i.e. 4. Could someone suggest how to store all these values in a variable every time the for loop executes?
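The symptom is the classic overwrite-instead-of-append pattern: a scalar variable is reassigned on every iteration, so only the last value survives. A language-neutral sketch of the two patterns in plain Python (the variable names and stand-in table are illustrative, not from the Protractor code):

```python
# Stand-in for the <td> texts read per table row.
table = [["1"], ["2"], ["3"], ["4"]]

# Overwriting pattern (what the question's loop does):
# each assignment replaces the previous value, so only "4" remains.
dates_overwritten = None
for row in table:
    dates_overwritten = row[0]
print(dates_overwritten)  # 4

# Accumulating pattern: declare a list outside the loop and append,
# so every value is kept.
dates_in_columns = []
for row in table:
    dates_in_columns.append(row[0])
print(dates_in_columns)  # ['1', '2', '3', '4']
```

In the Protractor/TypeScript version the equivalent change would be declaring an array (e.g. `string[]`) and pushing each awaited `getText()` result into it.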

How can I get the src link text using Selenium?

Posted: 16 May 2021 08:30 AM PDT

browser = webdriver.Chrome()
browser.find_elements_by_xpath("//div[@class='a image']/img")

the html look like this

<div class="a image"><img src="https://cloudfront.net/"></div>

How do I get the src text?

How to delete element from list in c++

Posted: 16 May 2021 08:31 AM PDT

I am currently making a singly linked list in C++. Now I'm trying to make a function showList that prints the content of the list and, if it is empty, prints "Empty list". However, right now it prints the list and "Empty list" every single time. When the list is empty, it prints an empty line and on a new line "Empty list". Here is my current code:

template <typename T>
struct Node {
    T data;
    Node* next;
};

template <typename T>
void showList(const Node<T>* head){

    while (head != nullptr){
        std::cout << head->data << " " ;
        head = head->next;
    }
    std::cout << std::endl;

    if(head->data = 0){
       std::cout << "Empty list"<< std::endl;
    }

}

AfterAll TypeError: Cannot read property 'x' of undefined - Angular unit testing

Posted: 16 May 2021 08:30 AM PDT

I tried to recreate part of the code that we use in a real project, and after running ng test I get this error: "AfterAll TypeError: Cannot read property 'isCompleted' of undefined".


Unit Test Spec file.

import { HttpClientTestingModule } from '@angular/common/http/testing';
import { CUSTOM_ELEMENTS_SCHEMA } from '@angular/core';
import { ComponentFixture, TestBed } from '@angular/core/testing';
import { EventService } from 'src/app/event.service';

import { UnitTestComponent } from './unit-test.component';

describe('UnitTestComponent', () => {
  let component: UnitTestComponent;
  let fixture: ComponentFixture<UnitTestComponent>;
  let eventService: EventService;

  beforeEach(async () => {
    await TestBed.configureTestingModule({
      declarations: [UnitTestComponent],
      imports: [HttpClientTestingModule],
      schemas: [CUSTOM_ELEMENTS_SCHEMA]
    }).compileComponents();
  });

  beforeEach(() => {
    fixture = TestBed.createComponent(UnitTestComponent);
    component = fixture.componentInstance;
    eventService = TestBed.inject(EventService);
    fixture.detectChanges();
  });

  it('should create', () => {
    expect(component).toBeTruthy();
  });
});

Unit Test Component

import { Component, OnInit } from '@angular/core';
import { EventService } from 'src/app/event.service';

@Component({
  selector: 'app-unit-test',
  templateUrl: './unit-test.component.html',
  styleUrls: ['./unit-test.component.scss']
})
export class UnitTestComponent implements OnInit {

  formStatus: string = "formId2";
  isApproval: boolean = false;

  constructor(private eventService: EventService) { }

  ngOnInit(): void {
    this.getForms();
  }

  getForms(): void {
    this.eventService.forms.subscribe(res => {
      if (res) {
        this.isApproval = res.find(f => f.formId == this.formStatus).isCompleted;
      }
    })
  }
}

UnitTestService

import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';

@Injectable({
    providedIn: 'root'
})
export class UnitTestService {

    private URL = 'http://localhost:4200/assets/db.json';

    constructor(private http: HttpClient) { }

    // Make the HTTP request:
    getData() {
        return this.http.get(this.URL);
    }

}

Event Service

import { Injectable } from '@angular/core';
import { BehaviorSubject } from 'rxjs';

@Injectable({
    providedIn: 'root'
})
export class EventService {

    private _formEventSub = new BehaviorSubject<any[]>([]);
    public forms = this._formEventSub.asObservable();

    updateValue(data: any[]) {
        this._formEventSub.next(data);
    }
}

Can anyone tell me why this happens? Thank you.

Select and deselect values with react and hooks

Posted: 16 May 2021 08:31 AM PDT

I am trying to change the state by selecting and deselecting the language option in the code below. So far I can update the state by adding a language; my problem is that if I click on the same language again, it is added to the array another time. Can anyone explain to me how to add or remove the language from the array when it is clicked one more time?

export default function Dashboard(props) {
  const [language, setLanguage] = useState('');
  const handleLanguageChange = changeEvent => {
    changeEvent.persist()
    setLanguage(prevState => [...prevState, changeEvent.target.value])
  };
  return (
    <label htmlFor="language">Wählen Sie die Sprache</label>
    <select multiple={true} value={language} onChange={handleLanguageChange} name="language" id="language" >
      <option value="Deutsch">Deutsch</option>
      <option value="Englisch">Englisch</option>
    </select>
  )
}
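For illustration only (plain Python rather than React, and the `toggle` helper name is my own): the usual select/deselect logic removes a value that is already present and appends it otherwise. The same logic would sit inside the functional `setLanguage` updater.

```python
def toggle(selection, value):
    # Return a new list with value removed if present, else appended.
    if value in selection:
        return [v for v in selection if v != value]
    return selection + [value]

langs = []
langs = toggle(langs, "Deutsch")   # ['Deutsch']
langs = toggle(langs, "Englisch")  # ['Deutsch', 'Englisch']
langs = toggle(langs, "Deutsch")   # second click removes it again
print(langs)  # ['Englisch']
```

Returning a new list rather than mutating in place mirrors React's requirement that state updates produce a new array so the re-render is triggered.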

Produce Avro message using generated classes

Posted: 16 May 2021 08:31 AM PDT

As of now I am creating an Avro message from an .avsc schema file, using the below code snippet:

static byte[] fromJasonToAvro(String json, String schemastr) throws Exception {

    InputStream input = new ByteArrayInputStream(json.getBytes());
    DataInputStream din = new DataInputStream(input);

    Schema schema = Schema.parse(schemastr);

    Decoder decoder = DecoderFactory.get().jsonDecoder(schema, din);

    DatumReader<Object> reader = new GenericDatumReader<Object>(schema);
    Object datum = reader.read(null, decoder);

    GenericDatumWriter<Object> w = new GenericDatumWriter<Object>(schema);
    ByteArrayOutputStream outputStream = new ByteArrayOutputStream();

    Encoder e = EncoderFactory.get().binaryEncoder(outputStream, null);

    w.write(datum, e);
    e.flush();

    return outputStream.toByteArray();
}

public static void main(String[] args) throws Exception {

    StringBuilder sb = new StringBuilder();
    StringBuilder jsb = new StringBuilder();

    ClassLoader classloader = Thread.currentThread().getContextClassLoader();
    InputStream is = classloader.getResourceAsStream("RsvpAvroSchema.avsc");
    InputStream js = classloader.getResourceAsStream("JsonMessage.dat");

    InputStreamReader isr = new InputStreamReader(is, StandardCharsets.UTF_8);
    InputStreamReader jisr = new InputStreamReader(js, StandardCharsets.UTF_8);
    BufferedReader br = new BufferedReader(isr);
    BufferedReader jbr = new BufferedReader(jisr);
    br.lines().forEach(line -> sb.append(line));
    jbr.lines().forEach(line -> jsb.append(line));

    System.out.println(sb);
    System.out.println(jsb);

    System.out.println(new String(fromJasonToAvro(jsb.toString(), sb.toString()), StandardCharsets.UTF_8));

But I have also generated Avro classes (the data structure) from the .avsc file using the Maven plugin. Now I am not sure how to use that generated Avro class together with the JSON string message to produce an Avro message.

Can anyone share how to do it?

Update:

How do I create an Avro object from a JSON string? I already have the Avro classes available in my project.

Cython memoryview shape incorrect?

Posted: 16 May 2021 08:31 AM PDT

Consider the following to create a linear array of size 4:

import numpy as np
cimport numpy as np
cdef np.float64_t [:] a = np.zeros(shape=(4),dtype=np.float64)

a.shape should be (4,). However:

print(a.shape)
>>> [4, 0, 0, 0, 0, 0, 0, 0]

What is going on? The original Python code gives the correct answer:

a = np.zeros(shape=(4),dtype=np.float64)
print(a.shape)
>>> (4,)

OSError when loading txt file in numpy

Posted: 16 May 2021 08:31 AM PDT

I have a problem loading a txt file from Google Drive into NumPy: I get an OSError. I put all the .py files and the txt file in the same folder, but it didn't work. I read through some similar topics, and it seems it may be because the file was made by another OS user (if I understand correctly).

I'd appreciate any help.

Thanks a lot! Vince.

My code is just simple like this:

import numpy as np

data = np.loadtxt("weight_height_1.txt", delimiter=",")

Error message:

Traceback (most recent call last):
  File "", line 1, in
  File "C:\Users\Long Le\AppData\Local\Programs\Python\Python38\lib\site-packages\numpy\lib\npyio.py", line 1065, in loadtxt
    fh = np.lib._datasource.open(fname, 'rt', encoding=encoding)
  File "C:\Users\Long Le\AppData\Local\Programs\Python\Python38\lib\site-packages\numpy\lib\_datasource.py", line 194, in open
    return ds.open(path, mode, encoding=encoding, newline=newline)
  File "C:\Users\Long Le\AppData\Local\Programs\Python\Python38\lib\site-packages\numpy\lib\_datasource.py", line 531, in open
    raise IOError("%s not found." % path)
OSError: weight_height_1.txt not found.
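The traceback says NumPy could not find the file: np.loadtxt resolves a relative filename against the interpreter's current working directory, which is often not the folder holding the script. A self-contained sketch that sidesteps this by building an absolute path (the sample file and its values below are made up for illustration):

```python
import os
import numpy as np

# Create a small sample file so the example is self-contained
# (stands in for the question's weight_height_1.txt).
path = os.path.join(os.getcwd(), "weight_height_sample.txt")
with open(path, "w") as f:
    f.write("60.5,170.2\n72.0,181.5\n")

# Loading via an absolute path avoids "not found" errors that occur
# when the working directory differs from the folder containing the file.
data = np.loadtxt(path, delimiter=",")
print(data.shape)  # (2, 2)
```

Printing os.getcwd() before the failing np.loadtxt call is a quick way to confirm where the relative name is actually being resolved.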

Trying to set a Navigation Link Items detail view in ForEach loop

Posted: 16 May 2021 08:31 AM PDT

Aim: Trying to generate a navigation link, so that To Do list items fetched from the Core Data model will have a link to an "ItemDetailView", which will show more details about the individual To Do list item.

Attempts: I have tried adding the navigation view at the top of the section, before the ForEach loop runs.

But Xcode threw these three errors:
#1 Cannot call value of non-function type 'NavigationLink<Text, ItemDetailView.Type>'

#2 Missing argument for parameter #1 in call / Insert '<#LocalizedStringKey#>, '

#3 Type 'ItemDetailView.Type' cannot conform to 'View'

import SwiftUI
import CoreData

struct ContentView: View {
    //Three Variables are set
    //Environment sets the context
    @Environment(\.managedObjectContext) var context
    //Fetch Request to query the To Do List Items
    @FetchRequest(fetchRequest: ToDoListItem.getAllToDoListItems())
    var items: FetchedResults<ToDoListItem>
    // A State Entry to hold the entry in the text field.
    @State var text: String = ""

    var body: some View {
        NavigationView {
            // List in the navigation field for the the list of text.
            List {
                Section(header: Text("New Item")) {
                    HStack {
                        TextField("Enter New Item...", text: $text)
                        //Button to save a new item for each To do in the to do list and also initialise the text view.
                        Button(action: {
                            if !text.isEmpty {
                                let newItem = ToDoListItem(context: context)
                                newItem.name = text
                                newItem.createdAt = Date()

                                do {
                                    try context.save()
                                }
                                catch {
                                    print(error)
                                }
                                text = ""
                            }
                        }, label: {
                            Text("Save")
                        })
                    }
                    Section {
                        // For each is to showeach of the items from Core Data.
                        NavigationLink (destination:ItemDetailView)(
                            ForEach(items) { toDoListItem in
                                VStack (alignment: .leading){
                                    Text(toDoListItem.name!)
                                        .font(.headline)
                                    Text("\(toDoListItem.createdAt!)")
                                }
                            }.onDelete(perform: { indexSet in
                                guard let index = indexSet.first else {
                                    return
                                }
                                let itemToDelete = items[index]
                                context.delete(itemToDelete)
                                do {
                                    try context.save()
                                }
                                catch {
                                    print(error)
                                }
                            })
                        )}
                }
            }
        }
        .navigationTitle("To Do List")
    }

}

struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        ContentView()
    }
}

Check all nested boxes when parent is selected

Posted: 16 May 2021 08:30 AM PDT

I have the following array I am working with

const data = [
  {
    name: "Person",
    id: "0",
    familyMembers: [
      {
        id: "00",
        name: "personOne"
      },
      {
        id: "01",
        name: "personTwo"
      }
    ]
  },
]

I am dynamically mapping over this array with one map for the main objects and another for the nested array. I have a toggle, and I am trying to implement functionality so that when the parent is selected it automatically selects the nested children of said parent as well.

const [isSelected, setIsSelected] = useState({});

const handleCheck = (name: string, event: any): void => {
  setIsSelected({ ...isSelected, [name]: event.target.isSelected });
};

I can manipulate the data object to make life easier. Working CodeSandbox below: https://codesandbox.io/s/divine-leftpad-0ozp0?file=/src/App.tsx
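As a sketch of the state update itself (plain Python rather than React, and `select_parent` is a hypothetical helper name): selecting a parent can mark the parent and every entry in its familyMembers as selected in one pass, producing the new state object in a single update.

```python
def select_parent(selected, parent):
    # Return a new selection map with the parent and all of its
    # familyMembers marked as selected, without mutating the input.
    updated = dict(selected)
    updated[parent["name"]] = True
    for child in parent["familyMembers"]:
        updated[child["name"]] = True
    return updated

# Same shape as the question's data array.
data = [{"name": "Person", "id": "0",
         "familyMembers": [{"id": "00", "name": "personOne"},
                           {"id": "01", "name": "personTwo"}]}]
print(select_parent({}, data[0]))
# {'Person': True, 'personOne': True, 'personTwo': True}
```

In the React version the same merge would happen inside a single setIsSelected call, so parent and children land in state together rather than via separate per-checkbox updates.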

RecursionError: maximum recursion depth exceeded while calling a Python object while calling target encoder

Posted: 16 May 2021 08:31 AM PDT

I am trying to deploy an ML prediction app using Flask, but I am getting a RecursionError while running an encoder object. Below is the code for the app; it fails at the transform call.

import numpy as np
import pandas as pd
from flask import Flask, request, jsonify, render_template
import pickle
import category_encoders

app = Flask(__name__)
model = pickle.load(open('LR.pkl', 'rb'))
enk = pickle.load(open('enc.pkl', 'rb'))

@app.route('/')
def home():
    return render_template('index.html')

@app.route('/predict',methods=['POST'])
def predict():
    int_features = [x for x in request.form.values()]
    features=pd.DataFrame({'area_type':int_features[1],'location':int_features[0],'size':int_features[2],'avg_sqft':int_features[3]},index=['1'])
    features['location']=features['location'].astype('category')
    features['area_type']=features['area_type'].astype('category')
    features['size']=features['size'].astype('category')
    features['avg_sqft']=features['avg_sqft'].astype('float')
    ff=enk.transform(features)  # <-- code fails at this transform call
    area_type=pd.DataFrame({'area_type_Carpet Area':[1,0,0],'area_type_Plot Area':[0,1,0]},index=['Carpet Area','Plot Area','Built-up  Area'])
    df=pd.merge(ff,area_type,left_on='area_type',right_index=True,how='left')
    df.drop('area_type',axis=1,inplace=True)

    prediction = model.predict(df)

    output = round(prediction[0], 2)

    return render_template('index.html', prediction_text='House price should be  {}'.format(output))

Can anyone help me understand what mistake I am making? Why is there a recursion error?

How to avoid Group By Key in below implementation (i.e. in DataFrames) for better performance since my source data volume is high

Posted: 16 May 2021 08:31 AM PDT

How can I avoid the Group By Key in the implementation below (i.e. in DataFrames) for better performance, since my source data volume is high? Will a lot of unnecessary data be transferred over the network because of the use of Group By Key and aggregation? Please suggest. Thanks.

from datetime import datetime
from pyspark.sql import SparkSession
from pyspark.sql import functions as dst
from pyspark.sql.types import *

#TDF
TDF = [('1','XXX')]
rdd = spark.sparkContext.parallelize(TDF)
df1 = rdd.toDF()
dfColumns1 = ['ID','Name']
T_df = rdd.toDF(dfColumns1)
TDF_df = T_df.select('ID','Name').withColumn('variable_data',dst.to_json(dst.struct('ID','Name')))
#.show(truncate=False)

#LDF
LDF = [('1','XXX','10'),('1','YYY','20')]
rdd = spark.sparkContext.parallelize(LDF)
df2 = rdd.toDF()
dfColumns2 = ['ID','Name','Dept']
L_df = rdd.toDF(dfColumns2)
LI_df = L_df.select('ID','Name','Dept').withColumn('variable_data',dst.to_json(dst.struct('ID','Name','Dept')))
#.show(truncate=False)

#ADF
ADF = [('1','XXX','NewYork'),('1','YYY','Chicago'),('1','ZZZ','Denver')]
rdd = spark.sparkContext.parallelize(ADF)
df3 = rdd.toDF()
dfColumns3 = ['ID','Name','City']
TE_df = rdd.toDF(dfColumns3)
TEDF_df = TE_df.select('ID','Name','City').withColumn('variable_data',dst.to_json(dst.struct('ID','Name','City')))
#.show(truncate=False)

#TDF_Json_Rec
TDF_Json_Rec=TDF_df.join(TDF_df,'ID',how='inner').groupBy('ID').agg(dst.concat(dst.lit('{'),dst.concat(dst.lit(f'"{"TDF"}":['), dst.concat_ws(',',dst.collect_list(TDF_df.variable_data)), dst.lit(']')),dst.lit('}')).alias('TDF_Json_Rec'))
#.show(truncate=False)

#LI_Json_Rec
LI_Json_Rec=TDF_df.join(LI_df,'ID',how='inner').groupBy('ID').agg(dst.concat(dst.lit('{'),dst.concat(dst.lit(f'"{"LIDF"}":['), dst.concat_ws(',',dst.collect_list(LI_df.variable_data)), dst.lit(']')),dst.lit('}')).alias('LIDF_Json_Rec'))
#.show(truncate=False)

#TEDF_Json_Rec
TEDF_Json_Rec=TDF_df.join(TEDF_df,'ID',how='inner').groupBy('ID').agg(dst.concat(dst.lit('{'),dst.concat(dst.lit(f'"{"TEDF"}":['), dst.concat_ws(',',dst.collect_list(TEDF_df.variable_data)), dst.lit(']')),dst.lit('}')).alias('TEDF_Json_Rec'))
#.show(truncate=False)

#T_Construct_Rec
T_Construct_Rec = TDF_df.join(LI_Json_Rec, 'ID', how='left') \
            .join(TEDF_Json_Rec, 'ID', how='left') \
            .join(TDF_Json_Rec, 'ID', how='inner') \
            .select(TDF_df["*"],
                    LI_Json_Rec.LIDF_Json_Rec,
                    TEDF_Json_Rec.TEDF_Json_Rec,
                    TDF_Json_Rec.TDF_Json_Rec)
#.show(truncate=False)

#T_Final_rec
T_Final_rec= T_Construct_Rec.withColumn("variable_data",dst.concat(dst.lit('['), dst.col("TDF_Json_Rec"), dst.lit(','),
                                                                   dst.col("LIDF_Json_Rec"), dst.lit(','),
                                                                   dst.col("TEDF_Json_Rec"), dst.lit(']'))).drop(dst.col("TDF_Json_Rec")).drop(dst.col("LIDF_Json_Rec")).drop(dst.col("TEDF_Json_Rec")).show(truncate=False)

#Final Output as follows,
#|ID |Name|variable_data
#|1  |XXX |[{"TDF":[{"ID":"1","Name":"XXX"}]},{"LIDF":[{"ID":"1","Name":"XXX","Dept":"10"},
#{"ID":"1","Name":"YYY","Dept":"20"}]},{"TEDF":[{"ID":"1","Name":"XXX","City":"NewYork"},
#{"ID":"1","Name":"YYY","City":"Chicago"},{"ID":"1","Name":"ZZZ","City":"Denver"}]}]|

How do I calculate the percentage of averages of sub-groups of aggregated groups?

Posted: 16 May 2021 08:30 AM PDT

Dear Community

I have a data set originating from UN Comtrade with yearly data, and I want to trace the changes in the major export partners of a group of countries (SADC members) over the period 1999-2018. Because the data fluctuates a lot, I need to calculate, for each SADC country (ReporterName) and its export partner (PartnerName), the partner's percentage of total exports over aggregated four-year averages (defined by cuts) of total trade per four-year period.

My desired final outcome is a graph for each country in the SADCNames vector that looks like this (except with the right data; I have already designed the plot function, but it is running on wrong data at the moment): Graphical outcome

My initial input data only contains the following columns

> head(Total_comp, n = 20)
   Year ReporterName PartnerISO3 TradeValue in 1000 USD
1: 2018       Angola         All               42096736
2: 2017       Angola         All               34904881
3: 2016       Angola         All               28057500
4: 2015       Angola         All               33924937
5: 2014       Angola         All               58672369
6: 2013       Angola         All               67712527

Everything else has to be calculated by R. I have already added two additional columns, "Total_Year" and "pct_by_partner_year":

# Data set with % per partner
total_trade <- SITC3 %>%
  # filter for gross exports (= exports that include intermediate products), remove "All"
  # select years according to "years" list
  filter(TradeFlowName == "Export", PartnerISO3 != "All", Year %in% years, ReporterName %in% SADCNames) %>%
  # select variables of interest
  select(Year, ReporterName, PartnerName, PartnerISO3, `TradeValue in 1000 USD`) %>%
  # create an extra column with total per year
  group_by(Year, ReporterName) %>%
  mutate(Total_Year = sum(`TradeValue in 1000 USD`)) %>%
  ungroup() %>%
  # create an extra column with % per partner of total per year
  group_by(Year, ReporterName, PartnerName) %>%
  mutate(pct_by_partner_year = (`TradeValue in 1000 USD`/Total_Year)*100) %>%
  arrange(ReporterName, desc(pct_by_partner_year), desc(Year))
head(total_trade)

Further, I added the cuts and an additional column, "total_year_group", that sums up the totals for the average, with the help of two amazing community members:

# create vectors for coding 4-year averages
year_group_break <- c(1999, 2003, 2007, 2011, 2015, 2019)
year_group_labels <- c("1999-2002", "2003-2006", "2007-2010", "2011-2014", "2015-2018")
years <- c(1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010,
           2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019)

# Data set with % per partner by four-year avg
FourY_av <- total_trade %>%
  # create year_group variable for average values with above predefined labels and cuts,
  # chose right = FALSE to take cut before year_group_break
  mutate(year_group = cut(Year, breaks = year_group_break,
                          labels = year_group_labels,
                          include.lowest = TRUE, right = FALSE)) %>%
  # add column with mean of total trade per four year period: "avg_year_group_total"
  group_by(ReporterName, year_group) %>%
  mutate(dup = !duplicated(paste0(ReporterName, year_group, Total_Year)),
         total_year_group = sum(Total_Year * dup)/sum(dup)) %>%
  arrange(ReporterName, PartnerName, desc(Year))

I now need to add two more columns: 1) the mean of total trade per four-year period by partner, "period_partner_avg", and 2) the percentage of total trade per four-year period by partner, "period_partner_pct". I tried the code below, but it doesn't give me the desired output, as the plot prints aggregates above 100%. Are you able to trace my mistakes and help me improve my code?

  # add column with mean of total trade per four year period: "period_partner_avg"
  group_by(ReporterName, PartnerName, year_group) %>%
  mutate(dup2 = !duplicated(paste0(ReporterName, PartnerName, year_group, `TradeValue in 1000 USD`)),
         period_partner_avg = sum(`TradeValue in 1000 USD` * dup2)/sum(dup2)) %>%
  # add column with percentage of total trade per four year period by partner: "period_partner_pct"
  group_by(year_group, ReporterName, PartnerName) %>%
  mutate(period_partner_pct = (period_partner_avg)/(total_year_group)*100) %>%
  arrange(ReporterName, desc(period_partner_pct), desc(Year))
head(FourY_av, n = 30)
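As a language-neutral cross-check of the target arithmetic, here is a minimal pandas sketch with made-up numbers (column names mirror the question, but this is not the questioner's data); the partner shares of each period should sum to 100:

```python
import pandas as pd

# Made-up stand-in for the trade data (one reporter, two partners, two years)
df = pd.DataFrame({
    "Year": [1999, 2000, 1999, 2000],
    "ReporterName": ["Angola"] * 4,
    "PartnerName": ["China", "China", "India", "India"],
    "TradeValue": [100.0, 200.0, 50.0, 50.0],
})

# Assign each year to a four-year period (toy breaks)
df["year_group"] = pd.cut(df["Year"], bins=[1999, 2003, 2007],
                          labels=["1999-2002", "2003-2006"],
                          include_lowest=True, right=False)

# Mean yearly trade per partner within the period
df["period_partner_avg"] = df.groupby(
    ["ReporterName", "year_group", "PartnerName"], observed=True
)["TradeValue"].transform("mean")

# Mean yearly total trade within the period = period sum / number of years
grp = df.groupby(["ReporterName", "year_group"], observed=True)
df["total_year_group"] = (grp["TradeValue"].transform("sum")
                          / grp["Year"].transform("nunique"))

# Partner share of the period average; shares sum to 100 per period
df["period_partner_pct"] = df["period_partner_avg"] / df["total_year_group"] * 100
print(df.drop_duplicates("PartnerName")[["PartnerName", "period_partner_pct"]])
```

Here China averages 150 per year against a period total of 200 per year, giving 75%, and India 25%, so the deduplicated shares add up to 100.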

Here is a sample of my data:

> dput(head(total_trade, n = 30))  structure(list(Year = c(2018L, 2017L, 2016L, 2012L, 2013L, 2014L,   2015L, 2010L, 2009L, 2011L, 2007L, 2011L, 2007L, 2009L, 2010L,   2011L, 2013L, 2012L, 2010L, 2012L, 2009L, 2018L, 2017L, 2011L,   2015L, 2014L, 2010L, 2009L, 2013L, 2016L), ReporterName = c("Angola",   "Angola", "Angola", "Angola", "Angola", "Angola", "Angola", "Angola",   "Angola", "Angola", "Angola", "Angola", "Angola", "Angola", "Angola",   "Angola", "Angola", "Angola", "Angola", "Angola", "Angola", "Angola",   "Angola", "Angola", "Angola", "Angola", "Angola", "Angola", "Angola",   "Angola"), PartnerName = c("China", "China", "China", "China",   "China", "China", "China", "China", "China", "China", "China",   "United States", "United States", "United States", "United States",   "India", "India", "India", "India", "United States", "India",   "India", "India", "Other Asia, nes", "India", "India", "Canada",   "France", "United States", "India"), PartnerISO3 = c("CHN", "CHN",   "CHN", "CHN", "CHN", "CHN", "CHN", "CHN", "CHN", "CHN", "CHN",   "USA", "USA", "USA", "USA", "IND", "IND", "IND", "IND", "USA",   "IND", "IND", "IND", "OAS", "IND", "IND", "CAN", "FRA", "USA",   "IND"), `TradeValue in 1000 USD` = c(24517058.342, 19487066.539,   13923091.96, 33710030.023, 31947235.081, 27527110.851, 14320565.527,   20963245.476, 15954060.922, 24360792.847, 13459326.563, 16475024.144,   10875646.624, 7708378.359, 9965785.888, 6842018.3, 6764232.765,   6932060.8, 5117824.926, 6594525.851, 3659557.185, 3768940.47,   2890061.159, 5386493.281, 2676339.583, 4507416.181, 4039116.578,   3030206.205, 5018390.939, 1948845.077), Total_Year = c(42096736.31,   34904881.111, 28057499.527, 70863076.416, 67712526.544, 58672369.19,   33924937.48, 52612114.76, 40639411.73, 66427390.221, 44177783.072,   66427390.221, 44177783.072, 40639411.73, 52612114.76, 66427390.221,   67712526.544, 70863076.416, 52612114.76, 70863076.416, 40639411.73,   42096736.31, 34904881.111, 66427390.221, 33924937.48, 
58672369.19,   52612114.76, 40639411.73, 67712526.544, 28057499.527), pct_by_partner_year = c(58.2398078593471,   55.8290586265851, 49.6234240210952, 47.57065559094, 47.1806868116795,   46.9166512807048, 42.2125038121072, 39.8449018284624, 39.2576079299463,   36.6728133770619, 30.4662788104697, 24.8015526263919, 24.6179094280831,   18.9677409953985, 18.94199830868, 10.2999956452256, 9.9896328053932,   9.78233115269439, 9.72746476613973, 9.30601123254513, 9.00494625589896,   8.9530467213552, 8.27981951810522, 8.10884375116869, 7.88900372941822,   7.68234902259279, 7.67716066237821, 7.45632398700078, 7.41131840020623,   6.94589721056435)), row.names = c(NA, -30L), groups = structure(list(      Year = c(2007L, 2007L, 2009L, 2009L, 2009L, 2009L, 2010L,       2010L, 2010L, 2010L, 2011L, 2011L, 2011L, 2011L, 2012L, 2012L,       2012L, 2013L, 2013L, 2013L, 2014L, 2014L, 2015L, 2015L, 2016L,       2016L, 2017L, 2017L, 2018L, 2018L), ReporterName = c("Angola",       "Angola", "Angola", "Angola", "Angola", "Angola", "Angola",       "Angola", "Angola", "Angola", "Angola", "Angola", "Angola",       "Angola", "Angola", "Angola", "Angola", "Angola", "Angola",       "Angola", "Angola", "Angola", "Angola", "Angola", "Angola",       "Angola", "Angola", "Angola", "Angola", "Angola"), PartnerName = c("China",       "United States", "China", "France", "India", "United States",       "Canada", "China", "India", "United States", "China", "India",       "Other Asia, nes", "United States", "China", "India", "United States",       "China", "India", "United States", "China", "India", "China",       "India", "China", "India", "China", "India", "China", "India"      ), .rows = structure(list(11L, 13L, 9L, 28L, 21L, 14L, 27L,           8L, 19L, 15L, 10L, 16L, 24L, 12L, 4L, 18L, 20L, 5L, 17L,           29L, 6L, 26L, 7L, 25L, 3L, 30L, 2L, 23L, 1L, 22L), ptype = integer(0), class = c("vctrs_list_of",       "vctrs_vctr", "list"))), row.names = c(NA, 30L), class = c("tbl_df",   "tbl", 
"data.frame"), .drop = TRUE), class = c("grouped_df",   "tbl_df", "tbl", "data.frame"))  

Loading 64 x 64 x 3 images in DCGAN

Posted: 16 May 2021 08:30 AM PDT

I'm trying to load a custom dataset in my DCGAN; however, when preparing the data, it returns a

ValueError: Error when checking input: expected conv2d_1_input to have shape (64, 64, 3) but got array with shape (64, 64, 1)  

here is the code I used to prepare the data:

import os

import numpy as np
from keras.preprocessing.image import load_img, img_to_array


def set_data():
    data = []
    dir_root = os.path.join(os.getcwd(), 'SunsetImages64')
    file_list = os.listdir(os.path.join(dir_root, dir_root))

    for file_name in file_list:
        data.append(os.path.join(dir_root, file_name))

    data_list = []

    for image_name in data:
        image_loaded = load_img(image_name,
                                color_mode='rgb',
                                target_size=(64, 64),
                                interpolation='bicubic')
        image = img_to_array(image_loaded)
        data_list.append(image)

    train_file = os.path.join(os.getcwd(), 'data', 'train')
    train_data = np.array(data_list).astype('uint8')
    np.savez(train_file, train_data=train_data)

set_data()

and the one I used to import it into the DCGAN

def get_dataset():
    file_path = os.path.join(os.getcwd(), 'data', 'train.npz')
    train_data = np.load(file_path)['train_data'][:]

    train_data_new = []

    for data_index in range(len(train_data)):
        train_data_new_image = []

        for row in range(len(train_data[data_index])):
            new_row = [x[0] for x in train_data[data_index][row]]
            train_data_new_image.append(new_row)

        train_data_new.append(train_data_new_image)

    return train_data_new
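As a side note on shapes, a minimal self-contained sketch (synthetic arrays, no Keras) shows that the `np.savez`/`np.load` round trip preserves the full `(N, 64, 64, 3)` shape, while keeping only `x[0]` per pixel reduces each image to a single channel:

```python
import numpy as np

# Synthetic stand-in for the saved image tensor
train_data = np.random.randint(0, 256, size=(4, 64, 64, 3), dtype=np.uint8)
np.savez("train_demo.npz", train_data=train_data)

# The round trip preserves the channel dimension
loaded = np.load("train_demo.npz")["train_data"]
print(loaded.shape)  # (4, 64, 64, 3)

# Keeping only x[0] per pixel collapses RGB down to one channel
single_channel = np.array([[[[x[0]] for x in row] for row in img] for img in loaded])
print(single_channel.shape)  # (4, 64, 64, 1)
```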

Any kind of help or guidance would be much appreciated

How to pass parameters loaded from configuration file to a procedural macro function?

Posted: 16 May 2021 08:30 AM PDT

Here is a problem I am trying to solve. I have multiple procedural macro functions that generate tables of pre-computed values. Currently my procedural macro functions take parameters in the form of literal integers. I would like to be able to pass these parameters from a configuration file. I could rewrite my functions to load parameters from the macros themselves; however, I want to keep the configuration in a top-level crate, as in this example:

top-level-crate/
    config/
        params.yaml
    macro1-crate/
    macro2-crate/

Since the input into a macro function is syntax tokens not run-time values, I am not able to load a file from top-level-crate and pass params.

use macro1_crate::gen_table1;
use macro2_crate::gen_table2;

const TABLE1: [f32; 100] = gen_table1!(500, 123, 499);
const TABLE2: [f32; 100] = gen_table2!(1, 3);

fn main() {
    // use TABLE1 and TABLE2 to do further computation.
}

I would like to be able to pass params to gen_table1 and gen_table2 from a configuration file like this:

use macro1_crate::gen_table1;
use macro2_crate::gen_table2;

// Load values PARAM1, PARAM2, PARAM3, PARAM4, PARAM5

const TABLE1: [f32; 100] = gen_table1!(PARAM1, PARAM2, PARAM3);
const TABLE2: [f32; 100] = gen_table2!(PARAM4, PARAM5);

fn main() {
    // use TABLE1 and TABLE2 to do further computation.
}

The obvious problem is that PARAM1, PARAM2, PARAM3, PARAM4, PARAM5 are runtime values, and proc macros rely on build time information to generate tables.

One option I am considering is to create yet another proc macro specifically to load the configuration into some sort of data structure built from quote! tokens, and then feed this into the macros. However, this feels hackish, and the configuration file would need to be loaded several times. Also, the params data structure needs to be tightly coupled across macros. The code might look like this:

use macro1_crate::gen_table1;
use macro2_crate::gen_table2;

const TABLE1: [f32; 100] = gen_table1!(myparams!());
const TABLE2: [f32; 100] = gen_table2!(myparams!());

fn main() {
    // use TABLE1 and TABLE2 to do further computation.
}
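For comparison, the general "resolve the config once at build time and emit source" pattern (which in Rust would typically live in a build script that writes a file pulled in with include!) can be sketched in a few lines of Python; the config values and the table function here are purely illustrative, not the real computation:

```python
import pathlib

# Hypothetical parsed config, standing in for config/params.yaml
config = {"TABLE1": [500, 123, 499], "TABLE2": [1, 3]}

def gen_table(params):
    # Stand-in for the real precomputation the macros perform
    return [float(p) * 0.5 for p in params]

# Emit every table as a constant in a generated source file,
# so downstream code only sees literals, never runtime values
lines = [f"{name} = {gen_table(params)!r}" for name, params in config.items()]
out = pathlib.Path("generated_tables.py")
out.write_text("\n".join(lines) + "\n")

print(out.read_text())
```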

Any improvements or further suggestions?

multiprocessing with moviepy

Posted: 16 May 2021 08:30 AM PDT

Recently I made a script that takes a 5-minute video clip and cuts it into five videos, 1 minute each. It works well, but it is taking too long for a PC like mine, even though my PC has very good specs:

Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz, 2904 Mhz, 8 Core(s), 16 Logical Processor(s)

Installed Physical Memory (RAM) 16.0 GB

So I searched the moviepy docs for "threads" and found that the write_videofile function accepts a threads argument to speed things up. I tried it, but it didn't really work; I mean it worked, but it only gained maybe 2 or 3 it/s.

I also found example code with multithreading, but it seems the code doesn't work, because moviepy.multithreading doesn't exist in the moviepy library. Please help me speed up the rendering. Thank you.

Here is the code that I found:

from moviepy.multithreading import multithread_write_videofile


def concat_clips():
    files = [
        "myclip1.mp4",
        "myclip2.mp4",
        "myclip3.mp4",
        "myclip4.mp4",
    ]
    multithread_write_videofile("output.mp4", get_final_clip, {"files": files})


def get_final_clip(files):
    clips = [VideoFileClip(file) for file in files]
    final = concatenate_videoclips(clips, method="compose")
    return final

This is my code:

import os
import time

from moviepy.video.io.ffmpeg_tools import ffmpeg_extract_subclip
from moviepy.editor import *
from numpy import array, true_divide
import cv2

# ffmpeg_extract_subclip("full.mp4", start_seconds, end_seconds, targetname="cut.mp4")


def duration_clip(filename):
    clip = VideoFileClip(filename)
    duration = clip.duration
    return duration


current_time = time.strftime("%Y_%m_%d_%H_%M_%S")


def main():
    global duration
    start = 0
    cut_name_num = 1
    end_seconds = start + 60
    video_duration = duration_clip("video.mp4")

    txt = input("Enter Your text please: ")[::-1]
    txt_part = 1

    while start < int(video_duration):
        final_text = f"{str(txt_part)} {txt}"

        try:
            try:
                os.makedirs(f"result_{str(current_time)}/result_edit")
            except FileExistsError:
                pass

            ffmpeg_extract_subclip("video.mp4", start, end_seconds,
                                   targetname=f"result_{str(current_time)}/cut_{str(cut_name_num)}.mp4")

            clip = VideoFileClip(f"result_{str(current_time)}/cut_{str(cut_name_num)}.mp4")
            clip = clip.subclip(0, 60)
            clip = clip.volumex(2)

            txt_clip = TextClip(final_text, font="font/VarelaRound-Regular.ttf",
                                fontsize=50, color='white')
            txt_clip = txt_clip.set_pos(("center", "top")).set_duration(60)

            video = CompositeVideoClip([clip, txt_clip])

            clip.write_videofile(f"result_{str(current_time)}/result_edit/cut_{str(cut_name_num)}.mp4")

        except:
            try:
                os.makedirs(f"result_{str(current_time)}/result_edit")
            except FileExistsError:
                pass

            ffmpeg_extract_subclip("video.mp4", start, video_duration,
                                   targetname=f"result_{str(current_time)}/cut_{str(cut_name_num)}.mp4")

            clip_duration = duration_clip(f"result_{str(current_time)}/cut_{str(cut_name_num)}.mp4")

            clip = VideoFileClip(f"result_{str(current_time)}/cut_{str(cut_name_num)}.mp4")
            clip = clip.subclip(0, clip_duration)
            clip = clip.volumex(2)

            txt_clip = TextClip(final_text, font="font/VarelaRound-Regular.ttf",
                                fontsize=50, color='white')
            txt_clip = txt_clip.set_pos(("center", "top")).set_duration(60)

            video = CompositeVideoClip([clip, txt_clip])

            clip.write_videofile(f"result_{str(current_time)}/result_edit/cut_{str(cut_name_num)}.mp4")

        start += 60
        cut_name_num += 1
        end_seconds = start + 60
        txt_part += 1


if __name__ == "__main__":
    main()

Filter api output by value stored in Join Table

Posted: 16 May 2021 08:30 AM PDT

I have a many-to-many relationship between two models (the first is an origin country with a list of countries, and the second is a destination country with a list of countries).

I have created a join table and set an additional variable in it:

class BorderStatus(models.Model):
    STATUS_CHOICES = [("OP", "OPEN"), ("SEMI", "CAUTION"), ("CLOSED", "CLOSED")]
    origin_country = models.ForeignKey(OriginCountry, on_delete=models.CASCADE, default="0")
    destination = models.ForeignKey(Country, on_delete=models.CASCADE, default="0")
    status = models.CharField(max_length=6, choices=STATUS_CHOICES, default="CLOSED")
    extra = 1

    class Meta:
        unique_together = [("destination", "origin_country")]
        verbose_name_plural = "Border Statuses"

    def __str__(self):
        return (
            f"{self.origin_country.origin_country.name} -> {self.destination.name}"
            f" ({self.status})"
        )

Now I have set up the view so that it lists a country and all related countries with a status, like this:

[
    {
        "name": "Germany",
        "destinations": [
            "New Zealand",
            "Watahia",
            "France"
        ],
        "dest_country": [
            {
                "id": 1,
                "name": "New Zealand",
                "status": "SEMI"
            },
            {
                "id": 2,
                "name": "Watahia",
                "status": "CLOSED"
            },
            {
                "id": 3,
                "name": "France",
                "status": "OP"
            }
        ]
    }
]

Here is my serializer:

class BorderStatusSerializer(serializers.HyperlinkedModelSerializer):
    id = serializers.ReadOnlyField(source='destination.id')
    name = serializers.ReadOnlyField(source='destination.name')

    class Meta:
        model = BorderStatus
        fields = ('id', 'name', 'status')


class OriginCountrySerializer(serializers.ModelSerializer):
    origin_country = serializers.StringRelatedField(read_only=True)
    destinations = serializers.StringRelatedField(many=True, read_only=True)
    dest_country = BorderStatusSerializer(source='borderstatus_set', many=True)

    class Meta:
        model = OriginCountry
        fields = ('origin_country', 'destinations', 'dest_country')

Now I want to add a filter that only shows countries with, for example, status="CLOSED".

So the result of the filtering for a given country and related countries with correct status would be: /?name=Germany&borderstatus__status=CLOSED

[
    {
        "name": "Germany",
        "destinations": ["Watahia"],
        "dest_country": [
            {
                "id": 2,
                "name": "Watahia",
                "status": "CLOSED"
            }
        ]
    }
]

However, that query instead returns all of the countries from the related table, regardless of status, as seen in the first response above.

Here is the api view code:

class OriginCountryViewSet(viewsets.ModelViewSet):
    queryset = OriginCountry.objects.filter(borderstatus__status='CLOSED')
    serializer_class = OriginCountrySerializer
    # use django filter backend instead of search filter to query by filter fields instead of `search=`
    filter_backends = (DjangoFilterBackend,)
    filter_fields = ('origin_country', 'borderstatus__status',)
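As a framework-free analogy of what is happening (toy dicts, no Django involved): filtering the parents only selects which parents appear; each parent's nested child list still has to be narrowed separately:

```python
# Toy data shaped like the API response above
countries = [
    {"name": "Germany",
     "dest_country": [
         {"id": 1, "name": "New Zealand", "status": "SEMI"},
         {"id": 2, "name": "Watahia", "status": "CLOSED"},
         {"id": 3, "name": "France", "status": "OP"},
     ]},
]

# What the queryset filter does: keep parents that HAVE a CLOSED child
matching = [c for c in countries
            if any(d["status"] == "CLOSED" for d in c["dest_country"])]

# What still has to happen for the desired output: narrow each child list
for c in matching:
    c["dest_country"] = [d for d in c["dest_country"] if d["status"] == "CLOSED"]

print(matching)
```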

Function is not defined node.js Uncaught Reference error

Posted: 16 May 2021 08:31 AM PDT

In my app.js file I defined a function like this:

function testfunc() {
    console.log("Testing");
}

And in my home.html file I have this:

<script type="text/javascript" src="./app.js"></script>

<div class="ProfileImage" onmouseover="testfunc()"> </div>

but when I mouseover the div it produces this

Uncaught ReferenceError: testfunc is not defined  

Here is my file structure:

[screenshot of the file structure]
