Wednesday, February 9, 2022

Recent Questions - Stack Overflow

ERROR! The requested handler 'docker status' was not found in either the main handlers list nor in the listening handlers list

Posted: 09 Feb 2022 09:18 AM PST

I am using Vagrant to create a simulation of a prod cluster with one master and two nodes. My Vagrantfile looks like this:

  IMAGE_NAME = "bento/ubuntu-16.04"
  N = 2

  Vagrant.configure("2") do |config|
      config.ssh.insert_key = false

      config.vm.provider "virtualbox" do |v|
          v.memory = 1024
          v.cpus = 2
      end

      config.vm.define "k8s-master" do |master|
          master.vm.box = IMAGE_NAME
          master.vm.network "private_network", ip: "192.168.50.10"
          master.vm.hostname = "k8s-master"
          master.vm.provision "ansible" do |ansible|
              ansible.playbook = "kubernetes-setup/master-playbook.yml"
              ansible.extra_vars = {
                  node_ip: "192.168.50.10",
              }
          end
      end

      (1..N).each do |i|
          config.vm.define "node-#{i}" do |node|
              node.vm.box = IMAGE_NAME
              node.vm.network "private_network", ip: "192.168.50.#{i + 10}"
              node.vm.hostname = "node-#{i}"
              node.vm.provision "ansible" do |ansible|
                  ansible.playbook = "kubernetes-setup/node-playbook.yml"
                  ansible.extra_vars = {
                      node_ip: "192.168.50.#{i + 10}",
                  }
              end
          end
      end
  end

and my master playbook looks like this:

  ---
  - hosts: all
    become: true
    tasks:
    - name: Install packages that allow apt to be used over HTTPS
      apt:
        name: "{{ packages }}"
        state: present
        update_cache: yes
      vars:
        packages:
        - apt-transport-https
        - ca-certificates
        - curl
        - gnupg-agent
        - software-properties-common

    - name: Add an apt signing key for Docker
      apt_key:
        url: https://download.docker.com/linux/ubuntu/gpg
        state: present

    - name: Add apt repository for stable version
      apt_repository:
        repo: deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable
        state: present

    - name: Install docker and its dependecies
      apt:
        name: "{{ packages }}"
        state: present
        update_cache: yes
      vars:
        packages:
        - docker-ce
        - docker-ce-cli
        - containerd.io
      notify:
        - docker status

    - name: Add vagrant user to docker group
      user:
        name: vagrant
        group: docker

    - name: Remove swapfile from /etc/fstab
      mount:
        name: "{{ item }}"
        fstype: swap
        state: absent
      with_items:
        - swap
        - none

    - name: Disable swap
      command: swapoff -a
      when: ansible_swaptotal_mb > 0

    - name: Add an apt signing key for Kubernetes
      apt_key:
        url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
        state: present

    - name: Adding apt repository for Kubernetes
      apt_repository:
        repo: deb https://apt.kubernetes.io/ kubernetes-xenial main
        state: present
        filename: kubernetes.list

    - name: Install Kubernetes binaries
      apt:
        name: "{{ packages }}"
        state: present
        update_cache: yes
      vars:
        packages:
          - kubelet
          - kubeadm
          - kubectl

    - name: Configure node ip
      lineinfile:
        path: /etc/default/kubelet
        line: KUBELET_EXTRA_ARGS=--node-ip={{ node_ip }}

    - name: Restart kubelet
      service:
        name: kubelet
        daemon_reload: yes
        state: restarted

    - name: Initialize the Kubernetes cluster using kubeadm
      command: kubeadm init --apiserver-advertise-address="192.168.50.10" --apiserver-cert-extra-sans="192.168.50.10" --node-name k8s-master --pod-network-cidr=192.168.0.0/16

    - name: Setup kubeconfig for vagrant user
      command: "{{ item }}"
      with_items:
       - mkdir -p /home/vagrant/.kube
       - cp -i /etc/kubernetes/admin.conf /home/vagrant/.kube/config
       - chown vagrant:vagrant /home/vagrant/.kube/config

    - name: Install calico pod network
      become: false
      command: kubectl create -f https://docs.projectcalico.org/v3.4/getting-started/kubernetes/installation/hosted/calico.yaml

    - name: Generate join command
      command: kubeadm token create --print-join-command
      register: join_command

    - name: Copy join command to local file
      local_action: copy content="{{ join_command.stdout_lines[0] }}" dest="./join-command"

    handlers:
      - name: docker status
        service: name=docker state=started

while the one used for the nodes is below:

---
- hosts: all
  become: true
  tasks:
  - name: Install packages that allow apt to be used over HTTPS
    apt:
      name: "{{ packages }}"
      state: present
      update_cache: yes
    vars:
      packages:
      - apt-transport-https
      - ca-certificates
      - curl
      - gnupg-agent
      - software-properties-common

  - name: Add an apt signing key for Docker
    apt_key:
      url: https://download.docker.com/linux/ubuntu/gpg
      state: present

  - name: Add apt repository for stable version
    apt_repository:
      repo: deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable
      state: present

  - name: Install docker and its dependecies
    apt:
      name: "{{ packages }}"
      state: present
      update_cache: yes
    vars:
      packages:
      - docker-ce
      - docker-ce-cli
      - containerd.io
    notify:
      - docker status

  - name: Add vagrant user to docker group
    user:
      name: vagrant
      group: docker

  - name: Remove swapfile from /etc/fstab
    mount:
      name: "{{ item }}"
      fstype: swap
      state: absent
    with_items:
      - swap
      - none

  - name: Disable swap
    command: swapoff -a
    when: ansible_swaptotal_mb > 0

  - name: Add an apt signing key for Kubernetes
    apt_key:
      url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
      state: present

  - name: Adding apt repository for Kubernetes
    apt_repository:
      repo: deb https://apt.kubernetes.io/ kubernetes-xenial main
      state: present
      filename: kubernetes.list

  - name: Install Kubernetes binaries
    apt:
      name: "{{ packages }}"
      state: present
      update_cache: yes
    vars:
      packages:
        - kubelet
        - kubeadm
        - kubectl

  - name: Configure node ip
    lineinfile:
      path: /etc/default/kubelet
      line: KUBELET_EXTRA_ARGS=--node-ip={{ node_ip }}

  - name: Restart kubelet
    service:
      name: kubelet
      daemon_reload: yes
      state: restarted

  - name: Initialize the Kubernetes cluster using kubeadm
    command: kubeadm init --apiserver-advertise-address="192.168.50.10" --apiserver-cert-extra-sans="192.168.50.10" --node-name k8s-master --pod-network-cidr=192.168.0.0/16

  - name: Copy the join command to server location
    copy: src=join-command dest=/tmp/join-command.sh mode=0777

  - name: Join the node to cluster
    command: sh /tmp/join-command.sh

but when I launch my Vagrant setup, everything goes well until the Docker installation task on the node, where I am facing this issue:

Vagrant has automatically selected the compatibility mode '2.0'
according to the Ansible version installed (2.9.6).

Alternatively, the compatibility mode can be specified in your Vagrantfile:
https://www.vagrantup.com/docs/provisioning/ansible_common.html#compatibility_mode

    node-1: Running ansible-playbook...

PLAY [all] *********************************************************************

TASK [Gathering Facts] *********************************************************
[DEPRECATION WARNING]: Distribution Ubuntu 16.04 on host node-1 should use
/usr/bin/python3, but is using /usr/bin/python for backward compatibility with
prior Ansible releases. A future Ansible release will default to using the
discovered platform python for this host. See https://docs.ansible.com/ansible/
2.9/reference_appendices/interpreter_discovery.html for more information. This
feature will be removed in version 2.12. Deprecation warnings can be disabled
by setting deprecation_warnings=False in ansible.cfg.
ok: [node-1]

TASK [Install packages that allow apt to be used over HTTPS] *******************
changed: [node-1]
[WARNING]: Updating cache and auto-installing missing dependency: python-apt

TASK [Add an apt signing key for Docker] ***************************************
changed: [node-1]

TASK [Add apt repository for stable version] ***********************************
changed: [node-1]

TASK [Install docker and its dependecies] **************************************
ERROR! The requested handler 'docker status' was not found in either the main handlers list nor in the listening handlers list

Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.

Does anyone have a clue what the reason could be? I have tried changing the syntax, but I don't think it is a typo problem.
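One detail that stands out when comparing the two playbooks: the node playbook's Docker task has `notify: - docker status`, but unlike the master playbook it never defines a handlers section, so Ansible has no handler matching that name to run; the error is raised on node-1 during exactly that task. A hedged guess at the missing piece, mirroring the handler the master playbook declares at play level:

```yaml
# Play-level handlers block, aligned with `tasks:`; without it a
# `notify: docker status` has nothing to trigger and Ansible fails
# with "handler ... not found". (Sketch, copied from the master playbook.)
  handlers:
    - name: docker status
      service: name=docker state=started
```

Alternatively, dropping the `notify` from the node playbook's Docker task should also make the error go away.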

Change font size in WPF compatible with Ease of Access functions

Posted: 09 Feb 2022 09:18 AM PST

I am working on a simple WPF app that needs to be accessible for people with impaired vision.

I am testing my app with the Windows 10 -> Settings -> Ease of Access -> Display -> Make text Bigger slider setting.

I have some section titles (TextBlock) that I want to have a bigger font than the rest of the labels and text in the window. However, if I use the FontSize attribute to make them bigger, they are no longer affected by changes to the system text size. Thus, while all other text without an explicit FontSize gets bigger, the text that has one remains at its fixed size.

Is there a simple way to make some text relatively bigger than the rest while not setting a specific font size value?

I want to remove item(s) from local storage

Posted: 09 Feb 2022 09:18 AM PST

I am trying to remove items from my local storage. I'm pretty new at this and I'm confused about what to do; I've tried different things but nothing worked. This is my HTML: inside the ul I am creating a to-do list, and my JS file creates elements inside the ul. After each input, the item is saved to local storage. I have two problems: my items are not adding up in my to-do list, and I cannot delete items from local storage after I press the delete button on the span.

<div id="myDiv" class="header">
    <h2>My To Do List</h2>
    <input type="text" name="text" id="myInput" placeholder="Title...">
    <span onclick="newElement()" class="addBtn">Add</span>
</div>
<ul id="myUl"></ul>

<script>
var myNodelist = document.getElementsByTagName("Li");
var i;
for (i = 0; i < myNodelist.length; i++) {
  var span = document.createElement("SPAN");
  var txt = document.createTextNode("\u00D7");
  span.className = "close";
  span.addEventListener("click", removeFromLocalStorage);
  span.appendChild(txt);
  myNodelist[i].appendChild(span);
  console.log(span);
}

var close = document.getElementsByClassName("close");
var i;
for (i = 0; i < close.length; i++) {
  close[i].onclick = function () {
    var div = this.parentElement;
    div.style.display = "none";
  };
}

var list = document.querySelector("ul");
list.addEventListener(
  "click",
  function (ev) {
    if (ev.target.tagName === "LI") {
      ev.target.classList.toggle("checked");
    }
  },
  false
);

function newElement() {
  var li = document.createElement("li");
  var inputValue = document.getElementById("myInput").value;
  var t = document.createTextNode(inputValue);
  li.appendChild(t);
  if (inputValue === "") {
    alert("You must write something");
  } else document.getElementById("myUl").appendChild(li);
  inputValue.value = "";
  saveToLocalStorage();

  var span = document.createElement("span");
  span.classList = "close";
  var text = document.createTextNode("\u00D7");
  span.appendChild(text);
  li.appendChild(span);

  for (i = 0; i < close.length; i++) {
    close[i].onclick = function () {
      var div = this.parentElement;
      div.style.display = "none";
      removeItem();
    };
  }
}

function saveToLocalStorage() {
  var inputValue = document.getElementById("myInput").value;
  window.localStorage.setItem("myInput", JSON.stringify(inputValue));
}

I am not even sure what I am supposed to do, so any explanation of my code would be appreciated.
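One observation: saveToLocalStorage() above writes the single input value to the one key "myInput" on every call, so each save overwrites the last and nothing accumulates. A minimal sketch of an alternative (not the original code; the key name is an assumption) keeps the whole list as one JSON array under a single key, so items can be added and later removed by value:

```javascript
// Store the whole to-do list as a JSON array under one key.
const STORAGE_KEY = "todoItems"; // hypothetical key name

function loadItems(storage) {
  // Returns [] the first time, before anything has been saved.
  return JSON.parse(storage.getItem(STORAGE_KEY) || "[]");
}

function addItem(storage, text) {
  const items = loadItems(storage);
  items.push(text);
  storage.setItem(STORAGE_KEY, JSON.stringify(items));
}

function removeItem(storage, text) {
  // Keep everything except the clicked item, then save the list back.
  const items = loadItems(storage).filter((item) => item !== text);
  storage.setItem(STORAGE_KEY, JSON.stringify(items));
}
```

In the browser you would pass window.localStorage as `storage` and call removeItem from the close button's click handler with the li's text.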

How can I identify the columns in which two (or more) rows with the same ID differ?

Posted: 09 Feb 2022 09:18 AM PST

I have a dataframe with duplicate IDs, but the rows are not identical. Is there a function that identifies the column (or columns) where a difference appears?

My real application is a dataframe with hundreds of columns. I need some way to check, whether changes were made in important columns or in some irrelevant ones. So first of all I need to identify the changed columns.

Example:

ID <- c(1,2,2,4,5,5,5,6,6,7)
Info1 <- c(10,20,20,40,50,50,50,65,60,70)
Info2 <- c('A','B','A','D','E','E','F','Z','A','B')
Info3 <- c(999,998,997,995,995,995,995,946,800,805)

df <- data.frame(ID, Info1, Info2, Info3)
   ID Info1 Info2 Info3
1   1    10     A   999
2   2    20     B   998
3   2    20     A   997
4   4    40     D   995
5   5    50     E   995
6   5    50     E   995
7   5    50     F   995
8   6    60     Z   946
9   6    60     A   800
10  7    70     B   805

My goal would be an additional column, which contains the changed column, i.e. desired output:

   ID Info1 Info2 Info3            col_diff
1   1    10     A   999                <NA>
2   2    20     B   998        Info2; Info3
3   2    20     A   997        Info2; Info3
4   4    40     D   995                <NA>
5   5    50     H   995               Info2
6   5    50     E   995               Info2
7   5    50     F   995               Info2
8   6    65     Z   946 Info1; Info2; Info3
9   6    60     A   800 Info1; Info2; Info3
10  7    70     B   805                <NA>

I hope my problem became clear, and that there is some function within dplyr which I do not know yet. Admittedly, my solution with the additional column is not really elegant, so I am open to any ideas that may solve my problem.

Thanks a lot!

Number of subsets with a given sum: recursion not tracing some branches

Posted: 09 Feb 2022 09:18 AM PST

I have written this code in Python, and it is not yielding the right answer for the input wt = [2,3,5,6,8,10] in this order, although it gives the right answer for all other orderings of wt. I have also tried tracing the recursion tree and debugging accordingly, but I still cannot figure out why the code does not trace some branches of the tree. Please help me figure out the problem. Thank you!

n=6      #int(input())
m=10     #int(input())
wt=[2,3,5,6,8,10]
dp_table=[[-1 for i in range(n+1)]for j in range (m+1)]
total=[0]

def SS(m,n):
    a=0
    b=0
    if m==0:
        print(n-1)
        total[0]=total[0]+1
        return 0
    if n==0:
        return 0
    else:
        if wt[n-1]>m:
            return (SS(m,n-1))
        else:
            if dp_table[m-wt[n-1]][n-1]==-1:
                a=SS(m-wt[n-1],n-1) + wt[n-1]
            if dp_table[m][n-1]==-1:
                b=SS(m,n-1)
            dp_table[m][n]=max(a,b)
            return dp_table[m][n]

if m==0 or n==0:
    print("Not Possible!")

if SS(m,n)==m:
    print("Possible and the no of subsets with equal sum are: ",total[0])
else:
    print("False")
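For comparison, a minimal reference implementation (not the asker's code, which mixes the max-subset-sum value with the count): the standard recursion counts subsets directly by either skipping or taking each element once, so order of the input no longer matters.

```python
# Count subsets of wt that sum exactly to m, memoized over
# (remaining target, number of elements still considered).
from functools import lru_cache

def count_subsets(wt, m):
    @lru_cache(maxsize=None)
    def go(target, n):
        if target == 0:
            return 1                     # the empty remainder is one valid subset
        if n == 0:
            return 0                     # no elements left, target not reached
        ways = go(target, n - 1)         # skip wt[n-1]
        if wt[n - 1] <= target:
            ways += go(target - wt[n - 1], n - 1)  # take wt[n-1]
        return ways

    return go(m, len(wt))

print(count_subsets([2, 3, 5, 6, 8, 10], 10))  # 3  ({10}, {2, 8}, {2, 3, 5})
```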

Fade2D library & C++ Builder

Posted: 09 Feb 2022 09:17 AM PST

I wish to use Fade2D library in my C++Builder projects.

The libraries are built with MS Visual Studio, and using them raises this error:

[ilink32 Error] Error contains invalid OMF record, type 0x21 (possibly COFF)  

I tried the two tools from Embarcadero to build the .lib file for C++ Builder:

  • COFF2OMF to convert the .lib from the VS format, and
  • IMPLIB to produce the .lib file starting from the DLL

In both cases I have the error:

[ilink32 Error] Error: Unresolved external 'GEOM_FADE2D::setLic(const char *, const char *, const char *, const char *, const char *)' referenced from MAIN.OBJ  

Any suggestions? Thanks in advance.

Python `remove` method not removing [duplicate]

Posted: 09 Feb 2022 09:18 AM PST

I am learning Python and took on a few CodeWars exercises to get more familiar with its methods.

I have this function that sums all the positive integers in an array. Although I know there might be a "one liner" way to do this, this is the approach I am taking as a beginner:

def positive_sum(arr):
    for x in arr:
        if x <= 0:
            arr.remove(x)

    if len(arr) == 0:
        return 0

    print(arr)

    return sum(arr)

For some reason this method is not removing -2 and -4 from the array. Why is that?

@test.it("returns 0 when all elements are negative")
def negative_case():
    test.assert_equals(positive_sum([-1,-2,-3,-4,-5]),0)

Python error message
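What goes wrong here is a classic pitfall: list.remove() during iteration shifts the remaining elements left while the loop's internal index keeps advancing, so the element right after each removed one gets skipped; with [-1,-2,-3,-4,-5] that is exactly -2 and -4. A minimal sketch of a fix is to iterate over a filtered view instead of mutating the list being iterated:

```python
# Sum only the positive values; never mutate the list while looping over it.
def positive_sum(arr):
    return sum(x for x in arr if x > 0)

print(positive_sum([-1, -2, -3, -4, -5]))  # 0
print(positive_sum([1, -2, 3, 4, -5]))     # 8
```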

UIDatePicker Month/Year Blank On Load

Posted: 09 Feb 2022 09:19 AM PST

I have an almost working datepicker with the following settings:

  1. Preferred Style: Automatic
  2. Mode: Date
  3. Locale: Default
  4. Date: Current Date

My problem is the current month and year name do not appear on load. Instead, the right arrow icon is in its place, and the month and year name only show after I've selected a day from the calendar.

I've tried calling .setDate on the datePicker in viewDidLayoutSubviews(), but I'm still seeing the same result. Any guidance would be appreciated, as I could not find an existing solution online.

I created the datePicker through the storyboard; here is the remaining code as it pertains to the datePicker on load.

override func viewDidLoad() {
    calendarDatePicker.preferredDatePickerStyle = .inline
    calendarDatePicker.overrideUserInterfaceStyle = .dark
    calendarDatePicker.date = //current date

    calendarDatePickerContainerView.layer.cornerRadius = 8
    calendarDatePickerContainerView.layer.masksToBounds = true
    calendarDatePickerContainerViewBottomConstraint.constant -= self.view.bounds.size.height

    self.view.layoutIfNeeded()
}

override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()
    guard let current = currentDate else { return }
    calendarDatePicker.setDate(current, animated: true)
}

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)

    DispatchQueue.main.async {
        UIView.animate(withDuration: 0.4, delay: 0, usingSpringWithDamping: 0.8, initialSpringVelocity: 0.5, options: .curveEaseIn, animations: {
            self.calendarDatePickerContainerViewBottomConstraint.constant = 0
            self.view.layoutIfNeeded()
        }, completion: nil)
    }
}

Iterate through array of objects using javascript

Posted: 09 Feb 2022 09:18 AM PST

I am trying to iterate through an array of objects but somehow I'm not getting it right. Can someone please let me know where I am going wrong?

Here is the data

const response = {
    "pass": 1,
    "fail": 2,
    "projects_all": [
        {
            "projects": [
                {
                    "name": "Project1",
                    "current": null,
                    "previous": {
                        "environment": "qa4nc",
                        "status": "UNKNOWN",
                    }
                }
            ]
        },
        {
            "projects": [
                {
                    "name": "Project2",
                    "current": null,
                    "previous": {
                        "environment": "qa4nc",
                        "status": "FAIL",
                    }
                },
                {
                    "name": "Project3",
                    "status": "LIVE",
                    "current": null,
                    "previous": {
                        "environment": "qa4nc",
                        "status": "UNKNOWN",
                    }
                }
            ]
        }
    ]
}

And here is the code i tried

if(response) {
  response?.projects_all?.forEach((projects) => {
    projects.forEach(project) => {
      if(project.previous !== null) {
        //do something here
      }
    });
  });
}

I am trying to iterate through this array of objects, but it says projects is not iterable. Any help making me understand where I am going wrong is appreciated.
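The "not iterable" error follows from the data's shape: each element of projects_all is an object, and the array sits under its "projects" property, so that property is what the inner forEach must run over (the outer callback's parameter also shadows that distinction). A sketch of the fix, with an illustrative function name:

```javascript
// Walk projects_all -> group.projects -> project, collecting the names of
// projects whose `previous` field is set.
function namesWithPrevious(response) {
  const names = [];
  (response?.projects_all ?? []).forEach((group) => {
    (group.projects ?? []).forEach((project) => {
      if (project.previous !== null) {
        names.push(project.name);
      }
    });
  });
  return names;
}
```

With the response above this visits Project1, Project2, and Project3, since all three have a non-null `previous`.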

Sum is not computed over groups (always gives the absolute total)

Posted: 09 Feb 2022 09:19 AM PST

I'm creating some summary tables and I'm having a hard time with simple sums...

While the count of records is correct, the variables with sums always compute the same value for all groups.

This is the code:

SummarybyCallContext <- PSTNRecords %>%
    group_by(PSTNRecords$destinationContext) %>%
    summarize(
        Calls = n(),
        Minutes = sum(PSTNRecords$durationMinutes),
        Charges = sum(PSTNRecords$charge),
        Fees = sum(PSTNRecords$connectionCharge)
    )

SummarybyCallContext

And this is the result:

Result

Minutes and Charges should be different for each group (Fees is always zero, but I need to display it anyway in the table).

Setting na.rm to TRUE or FALSE doesn't seem to change the result.

What am I doing wrong?

Thanks in advance!

~Alienvolm

Reading text from file from certain point by bytes

Posted: 09 Feb 2022 09:18 AM PST

I have to read from a file whatever has been appended to it. For example, there is a file on disk whose size is 75 bytes; then someone adds a few lines of text and the size becomes 200 bytes. I need to read only these new 125 bytes. I created a method for this using RandomAccessFile and it works fine: I return byte[] bytes and then convert the bytes to a String. But what if someone adds 1 GB of text? Is there any way to avoid reading all of those bytes into memory and still return the newly added text?

This is my method

private byte[] readFromFile(String filePath, int position, int size) throws IOException {
    RandomAccessFile file = new RandomAccessFile(filePath, "r");
    file.seek(position);
    byte[] bytes = new byte[size];
    file.read(bytes);
    file.close();
    return bytes;
}

Or is there a better way to do this? I only have the file's size in bytes from before it was modified, and using this size I have to get the new content added after the modification.
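One way to avoid holding the whole appended region in memory is to stream it through a small fixed buffer and hand each chunk to a consumer (an OutputStream here) as it arrives. A sketch, with illustrative class and method names:

```java
// Stream everything from `position` to end-of-file into `out`, 8 KiB at a
// time, so a 1 GB append never has to fit in memory at once.
import java.io.IOException;
import java.io.OutputStream;
import java.io.RandomAccessFile;

public class TailReader {
    public static void copyAppendedRegion(String filePath, long position, OutputStream out) throws IOException {
        try (RandomAccessFile file = new RandomAccessFile(filePath, "r")) {
            file.seek(position);                 // jump to the old end of file
            byte[] buffer = new byte[8192];      // constant memory footprint
            int read;
            while ((read = file.read(buffer)) != -1) {
                out.write(buffer, 0, read);      // forward each chunk as it arrives
            }
        }
    }
}
```

If the new content must be processed as text, wrapping the chunks in a streaming parser (or writing them to a temporary file) avoids ever building one giant String.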

How can I keep temporary file even use ActiveJob Async

Posted: 09 Feb 2022 09:18 AM PST

I have a simple file-upload flow that hands off to a background job (Sidekiq) on Rails 5.2.6. I'm using ActionDispatch::Http::UploadedFile, which stores the upload as a temporary file; the temporary file is unlinked once the HTTP request completes.

https://api.rubyonrails.org/v5.2.6/classes/ActionDispatch/Http/UploadedFile.html

Uploaded files are temporary files whose lifespan is one request. When the object is finalized Ruby unlinks the file, so there is no need to clean them with a separate maintenance task.

So I want to keep the temporary file and read it in the background job asynchronously, even after the HTTP request has completed in the controller.

Controller

def upload
  UploadFileJob.perform_later(params[:file_uploads].path)
  redirect_to root_path
end

Job

class UploadFileJob < ActiveJob::Base
  def perform(file_path)
    sheet = Roo::Spreadsheet.open(file_path)
  end
end

The issue is that when the background job runs, it can't read the file, because the file was already removed once the HTTP request completed.

2022-02-09T07:17:33.756Z pid=24902 tid=gqdvonm4i class=UploadFileJob jid=e8262af8c41e83f22ad06149 INFO: Adding dead ActiveJob::QueueAdapters::SidekiqAdapter::JobWrapper job e8262af8c41e83f22ad06149
2022-02-09T07:17:33.757Z pid=24902 tid=gqdvonm4i class=UploadFileJob jid=e8262af8c41e83f22ad06149 elapsed=0.009 INFO: fail
2022-02-09T07:17:33.757Z pid=24902 tid=gqdvonm4i WARN: {"context":"Job raised exception","job":{"retry":true,"queue":"default","class":"ActiveJob::QueueAdapters::SidekiqAdapter::JobWrapper","wrapped":"UploadFileJob","args":[{"job_class":"UploadFileJob","job_id":"0143f401-ddc8-4db8-9e77-32048eb7de4a","provider_job_id":null,"queue_name":"default","priority":null,"arguments":["/tmp/RackMultipart20220209-25523-ltyki7.xlsx"],"executions":0,"locale":"id"}],"jid":"e8262af8c41e83f22ad06149","created_at":1644391053.7475002,"enqueued_at":1644391053.7475615},"jobstr":"{\"retry\":true,\"queue\":\"default\",\"class\":\"ActiveJob::QueueAdapters::SidekiqAdapter::JobWrapper\",\"wrapped\":\"UploadFileJob\",\"args\":[{\"job_class\":\"UploadFileJob\",\"job_id\":\"0143f401-ddc8-4db8-9e77-32048eb7de4a\",\"provider_job_id\":null,\"queue_name\":\"default\",\"priority\":null,\"arguments\":[\"/tmp/RackMultipart20220209-25523-ltyki7.xlsx\"],\"executions\":0,\"locale\":\"id\"}],\"jid\":\"e8262af8c41e83f22ad06149\",\"created_at\":1644391053.7475002,\"enqueued_at\":1644391053.7475615}"}
2022-02-09T07:17:33.758Z pid=24902 tid=gqdvonm4i WARN: IOError: file /tmp/RackMultipart20220209-25523-ltyki7.xlsx does not exist
2022-02-09T07:17:33.758Z pid=24902 tid=gqdvonm4i WARN: /var/www/html/application/shared/bundle/ruby/2.6.0/gems/roo-2.7.1/lib/roo/base.rb:398:in `local_filename'

Is there any way to do this using only ActionDispatch, without ActiveStorage or anything else? In other words, can I keep the file until the background job is done, even though the request has already completed on the controller side?
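There is no supported way to stop Rack from unlinking its own tempfile, but a common workaround is to copy it to a path the application owns before enqueueing, then pass that path to the job and let the job delete the copy when finished. A sketch (the method and directory names are assumptions, not Rails API):

```ruby
# Copy an uploaded tempfile to a directory the app controls, so the data
# survives the end of the HTTP request; returns the new path for the job.
require "fileutils"
require "securerandom"

def persist_for_job(tmp_path, dest_dir)
  FileUtils.mkdir_p(dest_dir)
  dest = File.join(dest_dir, "#{SecureRandom.uuid}-#{File.basename(tmp_path)}")
  FileUtils.cp(tmp_path, dest)
  dest
end
```

In the controller this would look something like `UploadFileJob.perform_later(persist_for_job(params[:file_uploads].path, Rails.root.join("tmp", "uploads").to_s))`, with a `File.delete(file_path)` (in an ensure block) at the end of the job's perform.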

Running Multiple Server(CMD commands) from one BAT file [closed]

Posted: 09 Feb 2022 09:18 AM PST

I have 5-6 Python (Flask, Django) servers, all on one Windows machine. To start all of them, I need to open 6 CMD windows and run them one by one. The best I could do was to write a BAT file like the one below.

start "srv1" python srv1.py
start "srv2" python srv2.py
start "srv3" python srv3.py
start "srv4" python srv4.py
start "srv5" python srv5.py
start "srv6" python srv6.py

but this also opens 6 CMD windows. (I tried the /B flag, but the CMD output gets messy.)

Do you guys know:

  1. Any tool, like a task manager, where I can register my CMD commands as tasks and then start and stop them at will from some kind of UI.
  2. Or any way of writing a BAT file which runs all those commands as a service that I can stop at will.
  3. Or any other solution :)

Thanks.

jinja2.exceptions.UndefinedError: 'post' is undefined

Posted: 09 Feb 2022 09:18 AM PST

I'm new to programming. I'm trying to add a comment section to my Flask blog. After I run it, post.html won't work; it says 'post' is undefined.

routes.py(create-comment):

@app.route('/create-comment/<post_id>', methods=['POST'])
@login_required
def create_comment(post_id):
  text = request.form.get('text')
  if not text:
    flash('Comment cannot be empty.', category='error')
  else:
    post = Post.query.filter_by(id=post_id)
    if post:
      comment = Comment(text=text, author=current_user.id, post_id=post_id)
      db.session.add(comment)
      db.session.commit()
    else:
      flash('Post does not exist.', category='error')

  return redirect(url_for('home'))

individual post file(post.html):

{% extends "layout.html" %}
{% block content %}
{% for message in get_flashed_messages() %}
<div class="alert alert-warning alert-dismissible fade show" role="alert">
    {{ message }}
    <button type="button" class="btn-close" data-bs-dismiss="alert" aria-label="Close"></button>
</div>
{% endfor %}
<div class="shadow p-3 mb-5 bg-body rounded">
    <h2>{{ post.title }}</h2> By: {{ post.author }}<br/> {{ post.date }} <br/><br/> {{ post.content }}<br/>
    <br/>
    {% for comment in post.comment %}
    {{ comment.text }}
    <br/>
    <form class="input-group mb-3" method="POST" action="/create-comment/{{ post.id }}">
        <input type="text" id="text" class="form-control" placeholder="comment here!" />
        <button type="submit" class="btn btn-primary">Comment</button>
    </form>
</div>
{% endfor %}
<a href="{{ url_for('posts') }}" class="btn btn-outline-secondary btn-sm">Back To Blog List</a>
{% endblock %}

the error:

  File "C:\coding_stuff\CMT120\flask_openshift\blog\templates\post.html", line 8, in block 'content'
    <h2>{{ post.title }}</h2> By: {{ post.author }}<br/> {{ post.date }} <br/><br/> {{ post.content }}<br/>
  File "C:\coding_stuff\CMT120\flask_openshift\venv\lib\site-packages\jinja2\environment.py", line 474, in getattr
    return getattr(obj, attribute)
jinja2.exceptions.UndefinedError: 'post' is undefined
127.0.0.1 - - [09/Feb/2022 16:47:26] "GET /posts/1 HTTP/1.1" 500 -
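The traceback points at the template, not at create_comment: whichever view renders post.html (the GET /posts/1 route) must pass a `post` variable to render_template, or `{{ post.title }}` fails exactly like this. A minimal standalone reproduction, assuming only the jinja2 package (which Flask depends on):

```python
# Rendering a template that references `post` without passing a `post`
# variable raises UndefinedError at attribute access, matching the
# traceback above; passing post=... fixes it.
from jinja2 import Environment
from jinja2.exceptions import UndefinedError

env = Environment()
template = env.from_string("<h2>{{ post.title }}</h2>")

try:
    template.render()                      # no `post` passed
except UndefinedError as err:
    print(err)                             # 'post' is undefined

print(template.render(post={"title": "Hello"}))  # <h2>Hello</h2>
```

In the Flask app that likely means the /posts/<id> view needs something like `return render_template('post.html', post=post)` (the route name here is an assumption, since that view is not shown).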

Why I cant see the results from mongodb on my created page (jinja, python)?

Posted: 09 Feb 2022 09:19 AM PST

Hope you are all doing well. I have been stuck on this issue for a long time now and I can't figure it out; I hope you can help. I am building a recipe website for my project (I am a rookie coder) and I can't fetch data from my MongoDB database. I have created an add_recipes page which works fine and posts everything to the database, and I want the recipes that have been posted to show on a page called Recipes. If anyone could help me out, that would be awesome! Here is my Python code:

@app.route("/")
@app.route("/get_recipes")
def get_recipes():
    recipes = list(mongo.db.recipes.find())
    return render_template("recipes.html", recipes=recipes)

Here is html with jinja :

{% extends "base.html" %}
{% block content %}

{% for recipe in recipes %}
    {{ recipe.title }}
    {{ recipe.description }}
    {{ recipe.ingredients }}
    {{ recipe.image }}
    {{ recipe.created_by }}
{% endfor %}

{% endblock %}

And my apologies if I was unclear; it's my first time posting something here :)

Column attribute not working for seed data in old migration

Posted: 09 Feb 2022 09:17 AM PST

I have a database table in MySQL called thing_type which has 'name' and 'description' columns.

I have an old 'initial migration' generated by a previous version of Entity Framework that seeds this table like this

migrationBuilder.InsertData(
    table: "thing_type",
    columns: new[] { "name", "description" },
    values: new object[] { "SomeName", "Some description" });

The corresponding class is called ThingType and it has properties: Name and Description (the first letter of each is uppercase).

Now, after upgrading to EFCore 6, if I wipe the database, the seeding fails with this error

There is no property mapped to the column 'thing_type.name' which is used in a data operation. Either add a property mapped to this column, or specify the column types in the data operation.'  

I've tried adding [Column("name")] to the property, but with no luck. With an existing database everything seems to run fine even without the column attribute, so it seems to be just the seeding operation that has a problem.

If I manually edit the migration to remove the insert operations, I can get past the error but I can't really do that here.

Thanks

Can't access the website via Postman, but Chrome works without any issue

Posted: 09 Feb 2022 09:18 AM PST

Problem 1: (resolved - Thanks @Ranjith Thangaraju)

I tried to access this website via postman, but I can't do this because I got an error: https://i.stack.imgur.com/Dmfj8.png

Then when I try to access it on chrome - there's no restriction at all - I can access it: https://finance.vietstock.vn/

Could someone please help me to explain or help with this?

I'm sorry if someone else has had the same issue and it was already fixed; if you see something similar, please point me in that direction.

Problem 2:

When I access this page [https://finance.vietstock.vn/CEO/phan-tich-ky-thuat.htm], there is an API that I've tried to call from Postman but couldn't. Could you please point me to a solution for this?

Chrome: https://i.stack.imgur.com/RTfsM.png

Postman: https://i.stack.imgur.com/2P2Qe.png

How to copy a file to a new file with a new name in same directory but across multiple directories in bash?

Posted: 09 Feb 2022 09:18 AM PST

I am trying to copy an existing file (that is found across multiple directories) to a new file with a new name in Bash. For example, 'Preview.json' to 'Performance.json'. I have tried using

find * -type f -name 'Preview.json' -exec cp {} {}"QA" \;  

But I ended up with 'Preview.jsonQA'. (I am new to Bash.) I have tried moving the "QA" in front of the {}, but I got errors because of an invalid path.
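For reference, `find` also has `-execdir`, which runs the command inside each match's directory, so `find . -type f -name 'Preview.json' -execdir cp {} Performance.json \;` avoids the invalid-path problem. The same copy can also be scripted portably; below is a minimal sketch in Python (the directory layout in the demo is invented for illustration):

```python
import shutil
import tempfile
from pathlib import Path

def copy_with_new_name(root, old_name, new_name):
    """Copy every file named old_name to a sibling file named new_name."""
    copied = []
    for src in Path(root).rglob(old_name):
        dst = src.with_name(new_name)   # same directory, new file name
        shutil.copy(src, dst)
        copied.append(dst)
    return copied

# Demo on a throwaway tree so the sketch is self-contained.
root = Path(tempfile.mkdtemp())
(root / "env1").mkdir()
(root / "env1" / "Preview.json").write_text("{}")
copy_with_new_name(root, "Preview.json", "Performance.json")
```

`Path.with_name` is what keeps each copy in the same directory as its source, which is the part the `{}"QA"` attempt couldn't express.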

Paypal JavaScript SDK with IPN

Posted: 09 Feb 2022 09:19 AM PST

I am using the PayPal JavaScript SDK. Now I want to add Instant Payment Notification (IPN), as I did earlier with just the quick checkout button. There I used it in a form with POST like this:

<form action="https://www.paypal.com/cgi-bin/webscr" method="post">
<input type="hidden" name="notify_url" value="<?php echo $HTTP_DIR."method/paypal/ipn.php"; ?>">

How can I use "notify_url" with the JS SDK? I did not find it as a parameter.

Or is it enough to set the IPN URL in my PayPal account settings?

How to cancel valueChanges in Angular and set the old value again?

Posted: 09 Feb 2022 09:18 AM PST

I subscribed to valueChanges of a mat-selection-list to be notified when the user selects a new list item. In some cases I ask the user if he/she really wants to select the new item via a confirmation dialog. If the user clicks "No" in this dialog, I need to cancel the selection of the new item. This is how I attempted to do this:

this.createForm.controls.selectionList.valueChanges.subscribe((value: number[]) => {
  const prevValue = this.createForm.controls.selectionList.value;

  if (<condition>) {
    const confirmDialogRef = this.dialog.open(ConfirmationDialogComponent, {
      data: "Do you really want to do this?",
    });

    confirmDialogRef.afterClosed().subscribe((confirmResult) => {
      if (confirmResult === true) {
        // User clicked yes, continue...
      } else {
        // User clicked no, set the previous value again...
        this.createForm.controls.selectionList.setValue(prevValue, { emitEvent: false });
      }
    });
  } else {
    // No user confirmation required, just continue...
  }
});

Unfortunately, this does not seem to work because prevValue seems to already be the new value when I try to set it as current value. What can I do instead to set the previous value again and basically just pretend that the user never even selected a new list item to begin with? Do I really have to cache the previous value manually, or is there a better way to do this?
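Caching the last confirmed value manually is the usual fix: keep it outside the subscription and restore it (with `emitEvent: false`) when the user declines. Stripped of the Angular/RxJS specifics, the bookkeeping looks like this hypothetical sketch:

```python
class ConfirmedSelection:
    """Keep the last confirmed value; roll back when a change is rejected."""

    def __init__(self, initial):
        self.confirmed = initial   # last value the user accepted
        self.current = initial     # what the widget currently shows

    def propose(self, value, confirm):
        """Apply value if confirm(value) approves it, else revert."""
        self.current = value
        if confirm(value):
            self.confirmed = value
        else:
            # Restore the cached value without triggering another change event.
            self.current = self.confirmed
        return self.current
```

The key point is that `confirmed` is updated only on an accepted change, so by the time the dialog closes, the pre-change value is still available even though the control's `value` has already moved on.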

MonetDB Crashes When Connecting. Will Not Restart. Large Logs

Posted: 09 Feb 2022 09:17 AM PST

I recently migrated an instance of MonetDB to a machine with more memory and a larger hard drive by transferring a machine image from one location to another. The database worked briefly, but now when attempting to connect with mclient I get the following error:

$ mclient warehouse
user(monetdb):monetdb
password:
monetdbd: internal error while starting mserver 'database 'warehouse' has crashed after starting, manual intervention needed, check monetdbd's logfile (merovingian.log) for details'

Upon checking merovingian.log I can see that the database started, but crashed when mclient attempted to connect:

2022-02-09 14:11:29 MSG merovingian[4368]: database 'warehouse' has crashed after start on 2022-02-09 14:03:23, attempting restart, up min/avg/max: 2m/6w/22w, crash average: 1.00 1.00 0.97 (135-6=129)  

The database then attempts to restart and appears normal until it tries to read through something referred to as the write-ahead log.

2022-02-09 14:11:29 MSG warehouse[4403]: arguments:
2022-02-09 14:11:29 MSG warehouse[4403]: /usr/bin/mserver5 --dbpath=/home/db_user/monetDBDatabase/warehouse/warehouse --set merovingian_uri=mapi:monetdb://dev:50000/warehouse --set mapi_listenaddr=none --set mapi_usock=/home/db_user/monetDBDatabase/warehouse/warehouse/.mapi.sock --set monet_vault_key=/home/db_user/monetDBDatabase/warehouse/warehouse/.vaultkey --set
2022-02-09 14:11:29 MSG warehouse[4403]: gdk_nr_threads=36 --set max_clients=64 --set sql_optimizer=default_pipe
2022-02-09 14:11:40 MSG warehouse[4403]: # MonetDB 5 server v11.39.17 (Oct2020-SP5)
2022-02-09 14:11:40 MSG warehouse[4403]: # Serving database 'warehouse', using 36 threads
2022-02-09 14:11:40 MSG warehouse[4403]: # Compiled for x86_64-pc-linux-gnu/64bit with 128bit integers
2022-02-09 14:11:40 MSG warehouse[4403]: # Found 125.609 GiB available main-memory of which we use 102.371 GiB
2022-02-09 14:11:40 MSG warehouse[4403]: # Copyright (c) 1993 - July 2008 CWI.
2022-02-09 14:11:40 MSG warehouse[4403]: # Copyright (c) August 2008 - 2021 MonetDB B.V., all rights reserved
2022-02-09 14:11:40 MSG warehouse[4403]: # Visit https://www.monetdb.org/ for further information
2022-02-09 14:11:40 MSG warehouse[4403]: # Listening for UNIX domain connection requests on mapi:monetdb:///home/db_user/monetDBDatabase/warehouse/warehouse/.mapi.sock
2022-02-09 14:11:40 MSG warehouse[4403]: # still reading write-ahead log "/home/db_user/monetDBDatabase/warehouse/warehouse/sql_logs/sql/log.95638" (22% done)
2022-02-09 14:11:51 MSG warehouse[4403]: # still reading write-ahead log "/home/db_user/monetDBDatabase/warehouse/warehouse/sql_logs/sql/log.95638" (44% done)
2022-02-09 14:12:02 MSG warehouse[4403]: # still reading write-ahead log "/home/db_user/monetDBDatabase/warehouse/warehouse/sql_logs/sql/log.95638" (67% done)

This process continues until the system appears to run out of virtual memory as seen in the log extract below:

2022-02-09 14:18:52 MSG warehouse[4403]: # still reading write-ahead log "/home/db_user/monetDBDatabase/warehouse/warehouse/sql_logs/sql/log.95650" (92% done)
2022-02-09 14:18:56 ERR warehouse[4403]: #main thread: GDKmmap: !ERROR: requested too much virtual memory; memory requested: 43317329920, memory in use: 208390400, virtual memory in use: 4354888091904

I've monitored the system throughout this, and memory usage is well within acceptable limits, with at most about 1 GB of 126 GB in use. What is strange is the size of the files being produced by MonetDB.

The original database was about 5 TB in size, although I can also see that the write-ahead log(s) found in warehouse/sql_logs/sql are 65 GB, with another file, warehouse/mdbtrace.log, taking up another 5 TB.

If I try to remove the write-ahead log(s), the database does not start, citing that the files are missing, and the same goes for mdbtrace.log (I can recreate and post the exact messages if required). Other than this I've tried rebooting the machine. I get the impression that the large size of mdbtrace.log is preventing the write-ahead log(s) from being read, by using up space that is needed for virtual memory.

Any assistance in solving these errors so that I can start and connect to the database with mclient would be most appreciated.

Regards,

James

How do I reduce the next lines of Ruby code?

Posted: 09 Feb 2022 09:18 AM PST

def route_action(action)
  case action
  when 1 then @meals_controller.add
  when 2 then @meals_controller.list
  when 3 then @meals_controller.edit
  when 4 then @meals_controller.delete
  when 5 then @customers_controller.add
  when 6 then @customers_controller.list
  when 7 then @customers_controller.edit
  when 8 then @customers_controller.delete
  when 0 then stop
  else
    puts "Please press 1, 2, 3, 4, 5, 6, 7, 8 or 0"
  end
end

So I want to reduce this case/when; is there another way of implementing this?
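A common way to shrink a long case/when like this is a dispatch table: a mapping from each menu number to its handler, so routing becomes a single lookup. Here is the pattern as a minimal sketch in Python, with stand-in controller and handler names (in Ruby the same shape is a Hash of `method(:name)` references):

```python
# Dispatch-table version of the numeric menu.
# Controller names and handler bodies are stand-ins for illustration.
class Controller:
    def __init__(self, name):
        self.name = name

    def add(self):    return f"{self.name}: add"
    def list(self):   return f"{self.name}: list"
    def edit(self):   return f"{self.name}: edit"
    def delete(self): return f"{self.name}: delete"


def stop():
    return "stop"  # stand-in for the original stop action


meals = Controller("meals")
customers = Controller("customers")

# Each menu number maps straight to a bound handler; adding an action
# is one new dict entry instead of another case branch.
ACTIONS = {
    1: meals.add, 2: meals.list, 3: meals.edit, 4: meals.delete,
    5: customers.add, 6: customers.list, 7: customers.edit, 8: customers.delete,
    0: stop,
}


def route_action(action):
    handler = ACTIONS.get(action)
    if handler is None:
        return "Please press 1-8 or 0"
    return handler()
```

The win is that the routing logic never changes as the menu grows; only the table does.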

How to calculate the distance remaining to a geolocation point every second, even along curved paths

Posted: 09 Feb 2022 09:17 AM PST

I need to know essentially every second how much distance is left between a user and their destination point while driving. For example, if a user needs to take a turn in 500 feet, then I want to show the change in distance left until they have to make the turn (400 feet, 300 feet, 200 feet, etc).

How can I do this? I know I can call the Distance Matrix API, but calling it every second will be extremely costly, resource intensive, and it may not even work due to a cap on usage.

I am aware that I can use the Distance Matrix API to find the distance between the user's current location and a destination. I also know that there are functions such as the haversine function, which measures the linear/direct distance between two points on a globe.

The problem is that it does not account for road distance, so if there is a curve it will not properly measure the road distance. This concerns me because I don't want to give my user an incorrect remaining distance.

I do have the longitude and latitude of the user's current location, as well as the destination point.
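One workable approach, assuming you can fetch the route's polyline once (e.g. from a directions response) rather than calling the Distance Matrix API every second: sum the haversine lengths of the remaining polyline segments from the user's position to the turn. Because the polyline follows the road, the sum tracks road distance rather than the straight line. A sketch (snapping the user to the nearest segment is omitted for brevity):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def remaining_road_distance(position, polyline):
    """Sum segment distances from position along the remaining polyline points."""
    points = [position] + list(polyline)
    return sum(
        haversine_m(points[i][0], points[i][1], points[i + 1][0], points[i + 1][1])
        for i in range(len(points) - 1)
    )
```

Run entirely on-device against the cached polyline, this costs one API call per route instead of one per second; the haversine error over the short segments of a road polyline is negligible for a "400 feet, 300 feet" countdown.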

Python Web Scraping - Failed to extract a name list from the website

Posted: 09 Feb 2022 09:18 AM PST

I failed to extract the first column "Name" from the website (see screenshot). Is there anyone who can help? The website address is: https://www.dianashippinginc.com/the-fleet/

chromedriver_location = "/Users/jingzhou.zhao/chromedriver"
driver = webdriver.Chrome(chromedriver_location)
driver.get('https://www.dianashippinginc.com/fleet-employment-table/')

shipname = []
year = []

cookie_address = '//*[@id="ccc-notify-accept"]/span'
name_address = '/html/body/div[3]/div/div/div[2]/table/tbody/tr[3]/td[2]/span'

driver.find_element_by_xpath(cookie_address).click()
driver.find_element_by_xpath(name_address)

How to query rows grouped by matching string column but only count the most recent row for a specific set of keywords?

Posted: 09 Feb 2022 09:18 AM PST

I have a single table of email events where each row is keyed by a specific outgoing email record FK and a specific recipient user FK. At any given time, in no specific order, and even simultaneously from different threads, new records can drop into this table. Here are the relevant columns:

id (pk), email_id (fk), user_id (fk), event (string/name), created_at  

I am calculating the overall event counts for a given email, like how many emails were delivered, how many bounced, etc. However, I need to ignore specific combinations of email events for a specific user because they become outdated when a newer event comes in. For example, if a row says the email was 'deferred' for a specific user but later a new event row is inserted that says 'delivered' or 'bounced', then I only want the most recently added row of any of those related keywords to be counted once as the current state.

What's a good way to do this at read time? I am having trouble because of the multiple layers of grouping that I need to do, and I'm reaching the limits of my SQL chops. Here is the query I am trying to enhance as described:

select `event`, COUNT(1) as count, COUNT(DISTINCT user_id) as unique_count
from `email_activity`
where `email_id` = 7518
group by `event`

For most of the events I want them all counted without any replacement, so grouping by only event is fine in those cases; e.g. if something was a 'click' or 'open' event, just total them up.

However, if there are any number of 'deferred', 'bounced', or 'delivered' events for the same email_id/user_id I only want to count the one with the most recent created_at date and ignore all the older ones.

Example row set (email_id, event, user_id, created_at):

7518, "click", 25, 1-20-2021  7518, "click", 73, 1-5-2021  7518, "bounced", 45, 1-19-2021  7518, "deferred", 45, 1-17-2021  7518, "delivered", 19, 1-1-2021  7518, "delivered", 25, 1-1-2021  7518, "delivered", 73, 1-1-2021  

So the queried count for email 7518 would be:

2 "click", 3 "delivered", and 1 "bounced" as the "deferred" row would be ignored for user 45 since it is older (only "bounced", "deferred", and "delivered" events are part of this "only count the latest" rule, all other event names are always counted).

Python code equivalent of Paint Selection Tool

Posted: 09 Feb 2022 09:18 AM PST

I'm building a Maya script and I have no idea how to automate the Paint Selection Tool. Is there a MEL or Python command that will call this tool?

Tailwind CSS 3.0 Disable JIT Mode

Posted: 09 Feb 2022 09:19 AM PST

I understand that Tailwind CSS uses a new JIT engine by default from version 3 onwards. As JIT generates the CSS classes on demand, text editors/IDEs fail to show CSS class suggestions via IntelliSense.

Also, it would be an added advantage for developers not to have to purge the CSS every time during development. And since the Tailwind CDN doesn't support third-party Tailwind plugins (e.g. tailwind-scrollbar), developers cannot rely on it.

  1. Is there a way to disable JIT mode?
  2. Is there any workaround to overcome the IntelliSense issue?

How to add data on nested array in Firestore with react-native

Posted: 09 Feb 2022 09:19 AM PST

I would like to ask if it is possible to add data in a nested array. The result I want is shown in the first screenshot, but I get the result in the second screenshot when I add a new rating with the code I use:

async function getAll(){
  const userEmail = firebase.firestore().collection('users')
  const snapshot = await userEmail.where('email', '==', index.email).get()
  if (snapshot.empty) {
    console.log('No matching documents.');
    return;
  }

  snapshot.forEach(doc => {
    userEmail.doc(doc.id).set({userRatings2: firebase.firestore.FieldValue.arrayUnion({parking: [markerName]})}, {merge: true})
    userEmail.doc(doc.id).set({userRatings2: firebase.firestore.FieldValue.arrayUnion({rating: [currentRating]})}, {merge: true})
    console.log(userEmail.doc(doc.id));
  });
}

getAll()

why @Qualifier not allowed above constructor?

Posted: 09 Feb 2022 09:18 AM PST

I am learning Spring, and the code below doesn't work, whereas when I use a method in place of the constructor, it works. Why? Is there any specific reason behind it? My question is: why did the Spring designers decide not to allow @Qualifier above a constructor, but allow it above a method?

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;

public class Employee {
    private Company comp;

    @Autowired
    @Qualifier(value="beanId")
    private Employee(Company comp) {
        this.comp = comp;
    }

    public Company getComp() {
        return comp;
    }
}

@Qualifier on the constructor argument works; say, the below is OK:

private Employee(@Qualifier(value="beanId") Company comp) {
    this.comp = comp;
}

But @Qualifier also works fine above a method, like below. Why?

@Qualifier(value="beanId")
private void getEmpDetails(Company comp) {
    this.comp = comp;
}

How to copy file from one location to another location?

Posted: 09 Feb 2022 09:18 AM PST

I want to copy a file from one location to another location in Java. What is the best way to do this?


Here is what I have so far:

import java.io.File;
import java.io.FilenameFilter;
import java.util.ArrayList;
import java.util.List;

public class TestArrayList {
    public static void main(String[] args) {
        File f = new File(
            "D:\\CBSE_Demo\\Demo_original\\fscommand\\contentplayer\\config");
        List<String> temp = new ArrayList<String>();
        temp.add(0, "N33");
        temp.add(1, "N1417");
        temp.add(2, "N331");
        File[] matchingFiles = null;
        for (final String temp1 : temp) {
            matchingFiles = f.listFiles(new FilenameFilter() {
                public boolean accept(File dir, String name) {
                    return name.startsWith(temp1);
                }
            });
            System.out.println("size>>--" + matchingFiles.length);
        }
    }
}

This does not copy the file. What is the best way to do this?
