Monday, August 30, 2021

Recent Questions - Stack Overflow



excel update reference for named cells and ranges using vba

Posted: 30 Aug 2021 08:18 AM PDT

I would like to create a script in Excel VBA to change the reference of a bunch of defined names. I have a revised workbook with updated data and a new name, and I want each defined name's reference to point at the updated workbook. For example:

change the reference of the named range

from '[c:\files\[factorsrev1.xls]'!:$K$15:$N$76

to '[c:\files\[newfactorsrev2.xlsm]!:$K$15:$N$76

I tried using the (change-multiple-named-cells-and-ranges) post as a guide, but so far it hasn't worked.

What I have so far:

Sub RangeRename()
    Dim N As Name
    For Each N In ActiveWorkbook.Names
        N.RefersTo = WorksheetFunction.Substitute(N.RefersTo, "factorsrev1.xls", "newfactorsrev2.xlsm")
    Next N
End Sub

It sure would be nice not to have to edit each one manually... help is most appreciated.
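A minimal sketch of an alternative, assuming the old file name appears literally in each RefersTo string: VBA's built-in Replace avoids the WorksheetFunction round-trip, and an InStr guard skips names that don't reference the old workbook at all.

Sub RangeRenameSketch()
    Dim N As Name
    For Each N In ActiveWorkbook.Names
        ' only touch names that actually point at the old file
        If InStr(N.RefersTo, "factorsrev1.xls") > 0 Then
            N.RefersTo = Replace(N.RefersTo, "factorsrev1.xls", "newfactorsrev2.xlsm")
        End If
    Next N
End Sub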

Dynamically change outerHeight using Javascript/jQuery without refreshing the page

Posted: 30 Aug 2021 08:17 AM PDT

I am trying to dynamically change the marginTop of #mainPart, which is the section right after the nav element that has the navbar-fixed-top class.

The problem is that with this function I must refresh the page for the change to happen, i.e. for the margin to be set. I want it to happen when the Submit button is pressed. The Submit button does not refresh the page; it just outputs data based on the queried keyword results. In the HTML, the Submit button has ng-click bound to a do_Submit function.

I created the function below and added it to do_Submit, but it only works when I refresh the page, not when I press the Submit button:

$(window).load(function() {
    maxHeight = $('#main-navbar').outerHeight();
    alert("Height of main-navbar: " + maxHeight);
    $('#mainPart').css({ marginTop : maxHeight + 5 + 'px' });
});

Note that #main-navbar is the ID of the nav part in the HTML and #mainPart is the ID of the part right after it.

What can I do to dynamically change the marginTop without refreshing the page? Thank you!
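A minimal sketch of one way around it, assuming do_Submit is the handler already wired to the button: move the measurement into a plain function and call it from do_Submit after the results are rendered, instead of only on window load.

function adjustMainPartMargin() {
  var navHeight = $('#main-navbar').outerHeight();
  $('#mainPart').css({ marginTop: (navHeight + 5) + 'px' });
}

$(window).on('load', adjustMainPartMargin);   // initial layout on page load

// inside your existing do_Submit, after the new results have been rendered:
//   adjustMainPartMargin();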

Unable to connect to solr server

Posted: 30 Aug 2021 08:17 AM PDT

I am using a Solr instance which I populated with my docs. I started it without specifying a port, so it uses the default 8983. Now I am trying to analyze some of its documents using a streaming expression. The lack of examples is giving me some headaches, but I think I wrote it right:

exp= "select(search(core_test,zkHost=\"localhost:8983\", q=\"id\":"+str(id)+", fl=\"testo\", rows=1000), analyze(testo, testo) as res)"    r=requests.get('http://localhost:8983/solr/core_test/stream?expr='+exp).json()  

At first I didn't specify the zkHost, since the Solr instance is local and the documentation says it's unnecessary, but I got "zookeeper host cannot be null", so I added the zk address. The documentation says it's usually the Solr port + 1000, but whether I try that or the same port, I always get the same result:

'EXCEPTION': 'java.util.concurrent.TimeoutException: Could not connect to ZooKeeper localhost:8983 within 15000 ms'  

Can anyone explain what I'm doing wrong?
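A hedged sketch rather than a confirmed fix: 8983 is Solr's own HTTP port, not ZooKeeper's (with the embedded ZooKeeper in SolrCloud mode it usually listens on the Solr port + 1000, i.e. 9983 here), the q clause should keep the field and value inside one quoted string, and letting requests URL-encode the expression avoids escaping mistakes.

import requests

# assumptions: SolrCloud mode with embedded ZooKeeper on 9983; core/collection name as above
exp = ('select(search(core_test, zkHost="localhost:9983", q="id:' + str(id) + '", '
       'fl="testo", rows=1000), analyze(testo, testo) as res)')
r = requests.get('http://localhost:8983/solr/core_test/stream',
                 params={'expr': exp}).json()   # requests handles the URL encoding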

Append Option in Select on change in Multiselect

Posted: 30 Aug 2021 08:17 AM PDT

I use jQuery to append options to a select tag, based on what is chosen in a multiselect. Here is my code. HTML multiselect:

<select multiple="multiple" class="form-control kt-selectpicker" name="ranch[]" id="ranch" onchange="multiRanchFields()">         @foreach ($ranches as $ranch)           <option value={{ $ranch->id }} selected>{{ $ranch->ranch_name }}</option>        @endforeach    </select>  

Other Select

   <select class="form-control kt-selectpicker" name="meeting_ranch_location" id="meeting_ranch_location">        <option></option>   </select>  

JavaScript

function multiRanchFields() {
    var count = $('#ranch option:selected').length;
    var ranch_ids = $('#ranch').val();
    for (let id = 0; id < ranch_ids.length; id++) {
        $.ajax({
            url: `/get_ranch_name/${ranch_ids[id]}`,
            method: 'GET',
            data: ranch_ids[id],
            success: function(response) {
                $("#meeting_ranch_location").append('<option value=' + ranch_ids[id] + '>' + response.name + '</option>');
            }
        });
    }
    $("#meeting_ranch_location").selectpicker('refresh');
}

But it appends more options than it should. For example, if I select one option it appends nothing; when I select two options it appends 1 option; when I select 3 it appends 3 options, but two of them are the same and one is new; and when I select 4 options, 6 options are appended and only one is unique.

What I want is to append exactly as many options as are selected in the multiselect, with the new options carrying the same names as the selections.
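A minimal sketch of one likely cause and fix: options from the previous change events are never removed, so each new selection appends on top of the old ones. Emptying the target select before the loop, and refreshing the selectpicker inside the AJAX success callback (since the calls are asynchronous), keeps the option count in step with the selection.

function multiRanchFields() {
  var ranch_ids = $('#ranch').val() || [];
  $('#meeting_ranch_location').empty();               // drop options from earlier selections
  ranch_ids.forEach(function (ranchId) {
    $.ajax({
      url: '/get_ranch_name/' + ranchId,
      method: 'GET',
      success: function (response) {
        $('#meeting_ranch_location')
          .append('<option value="' + ranchId + '">' + response.name + '</option>');
        $('#meeting_ranch_location').selectpicker('refresh');   // refresh after each append
      }
    });
  });
}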

Pandas rolling and shift by 24 hour intervals

Posted: 30 Aug 2021 08:17 AM PDT

I have a rolling window that's shifted forward so the window is over the next 3200 data points:

response_times['Response Time Window'] = response_times['Response Time'].shift(-3200).rolling(3200, min_periods = 0).median()

How do I shift and roll on a 24-hour basis? I am aware that you can pass '24h' to rolling but not to shift. How do I bypass this issue?
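A sketch of one workaround, assuming the series has a sorted DatetimeIndex: mirror the timestamps and reverse the series, so that "the next 24 hours" becomes an ordinary backward-looking 24h window, then map the result back onto the original order.

import pandas as pd

s = response_times['Response Time']            # assumed to have a monotonic DatetimeIndex

# reverse the series and mirror its timestamps around the latest one
rev = s.iloc[::-1]
rev.index = rev.index[0] + (rev.index[0] - rev.index)

# backward-looking 24h window on the mirrored series == forward-looking 24h window originally
fwd_median = rev.rolling('24h', min_periods=1).median()

# values come back in reversed order; flip them to line up with the original rows
response_times['Response Time Window'] = fwd_median.values[::-1]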

How to count search string in list of log files?

Posted: 30 Aug 2021 08:17 AM PDT

How do I count the number of lines containing a search string, for the files contained in a ZIP file? Each file name contains a date, and only files matching a given date range, specified as startDate and numberOfDays, should be checked. All files in the ZIP archive follow the naming pattern logs_<date>-access.log (for example: logs_2018-02-27-access.log).

Sample data ZIP file "logs-27_02_2018-03_03_2018.zip" contains the following files:

logs_2018-02-27-access.log
logs_2018-02-28-access.log
logs_2018-03-01-access.log
logs_2018-03-02-access.log
logs_2018-03-03-access.log

Sample result

{    "logs_2018-02-27-access.log" : 23,    "logs_2018-02-28-access.log" : 18,    "logs_2018-03-01-access.log" : 40  }  

Implementation code:

import java.io.File;
import java.io.IOException;
import java.time.LocalDate;
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

import net.lingala.zip4j.ZipFile;

public class LogsAnalyzer {

    private final static String TEMP_DIR = System.getProperty("java.io.tmpdir");

    public Map<String, Integer> countEntriesInZipFile(String searchQuery, File zipFile, LocalDate startDate, Integer numberOfDays) throws IOException {
        HashMap<String, Integer> result = new HashMap<>();

        File targetDir = new File(TEMP_DIR, UUID.randomUUID().toString());
        unzip(zipFile, targetDir);

        // How to implement this?

        return result;
    }

    public static void unzip(File targetZipFilePath, File destinationFolderPath) {
        try {
            ZipFile zipFile = new ZipFile(targetZipFilePath);
            zipFile.extractAll(destinationFolderPath.toString());
        } catch (Exception e) {
            throw new IllegalStateException("Unable to unpack zip file");
        }
    }
}

Test code:

import java.io.File;
import java.io.IOException;
import java.nio.file.Paths;
import java.time.LocalDate;
import java.util.Map;

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

import static org.assertj.core.api.Assertions.assertThat;

public class LogsAnalyzerTest {

    private File zipPath;
    private Map<String, Integer> entries;

    @BeforeEach
    public void setUp() throws Exception {
        zipPath = Paths.get(getClass().getClassLoader().getResource("logs-27_02_2018-03_03_2018.zip").toURI()).toFile();
    }

    @Test
    public void shouldContainEntriesForCorrectDays() throws IOException {
        //given
        LogsAnalyzer logsAnalyzer = new LogsAnalyzer();

        //when
        entries = logsAnalyzer.countEntriesInZipFile("Mozilla", zipPath, LocalDate.of(2018, 2, 27), 3);

        //then
        assertThat(entries)
                .hasSize(3)
                .containsKey("logs_2018-03-01-access.log")
                .containsKey("logs_2018-02-28-access.log")
                .containsKey("logs_2018-02-27-access.log");
    }

    @Test
    public void shouldReturnLineCountsForMozilla() throws IOException {
        //given
        LogsAnalyzer logsAnalyzer = new LogsAnalyzer();

        //when
        entries = logsAnalyzer.countEntriesInZipFile("Mozilla", zipPath, LocalDate.of(2018, 2, 27), 3);

        //then
        assertThat(entries).hasSize(3)
                .containsEntry("logs_2018-03-01-access.log", 23)
                .containsEntry("logs_2018-02-28-access.log", 18)
                .containsEntry("logs_2018-02-27-access.log", 40);
    }

    @Test
    public void shouldReturnLineCountsForSafari() throws IOException {
        //given
        LogsAnalyzer logsAnalyzer = new LogsAnalyzer();

        //when
        entries = logsAnalyzer.countEntriesInZipFile("Safari", zipPath, LocalDate.of(2018, 2, 27), 4);

        //then
        assertThat(entries).hasSize(4)
                .containsEntry("logs_2018-03-02-access.log", 6)
                .containsEntry("logs_2018-03-01-access.log", 16)
                .containsEntry("logs_2018-02-28-access.log", 14)
                .containsEntry("logs_2018-02-27-access.log", 25);
    }
}
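A possible body for the "// How to implement this?" placeholder in LogsAnalyzer, as a sketch rather than a reference solution: keep only the extracted files whose embedded date lies in [startDate, startDate + numberOfDays), and count the lines containing searchQuery. It additionally needs java.nio.file.Files and java.util.stream.Stream imports.

File[] extracted = targetDir.listFiles();
if (extracted != null) {
    for (File f : extracted) {
        String name = f.getName();                                    // e.g. logs_2018-02-27-access.log
        LocalDate fileDate = LocalDate.parse(name.substring(5, 15));  // the 10 ISO-date characters
        boolean inRange = !fileDate.isBefore(startDate)
                && fileDate.isBefore(startDate.plusDays(numberOfDays));
        if (inRange) {
            try (Stream<String> lines = Files.lines(f.toPath())) {
                result.put(name, (int) lines.filter(l -> l.contains(searchQuery)).count());
            }
        }
    }
}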

How to set a logo of a desktop shortcut when creating the shortcut with a .bat file?

Posted: 30 Aug 2021 08:17 AM PDT

I am trying to use a .bat file to create a shortcut, but I'm having trouble setting the shortcut's icon. This is what I am trying; help would be greatly appreciated.

@echo off

set SCRIPT="%TEMP%\%RANDOM%-%RANDOM%-%RANDOM%-%RANDOM%.vbs"

echo Set oWS = WScript.CreateObject("WScript.Shell") >> %SCRIPT%
echo sLinkFile = "%USERPROFILE%\Desktop\Test_Shortcut.lnk" >> %SCRIPT%
echo Set oLink = oWS.CreateShortcut(sLinkFile) >> %SCRIPT%
echo oLink.TargetPath = "V:\Test_File.bat" >> %SCRIPT%
echo oLink.IconLocation ="V:\_Test\Icons\Test_Icon.ico" >> %SCRIPT%
echo oLink.Save >> %SCRIPT%

cscript /nologo %SCRIPT%
del %SCRIPT%

Get Bitbucket project to Intellij

Posted: 30 Aug 2021 08:17 AM PDT

I am very new to Bitbucket and am trying to clone a remote Java project to my local repository.

I have IntelliJ 2018 Ultimate edition. The first thing I did was VCS -> Enable ... -> Git. Now I am doing VCS -> Git -> Remotes. In the "Git Remotes" window I click the "+" sign; another window pops up with Name: origin and URL: <the URL of the Bitbucket repo>. When I click "OK" I get this message: "Remote URL Test failed: unable to update URL base from redirection".

The strange thing is that I don't see ".git" in any of the Bitbucket repositories, as there is on GitHub or GitLab.

Is there a way to limit windows proxy?

Posted: 30 Aug 2021 08:17 AM PDT

Is there a way to limit the Windows proxy to a specific process, so that only that single process uses the specified HTTP proxy?

Thanks

How to set fetch data to an array React Native

Posted: 30 Aug 2021 08:16 AM PDT

Having a bit of trouble setting my fetch data to my list object.

I have

this.state = {
  ...
  list: [],
  ...
}

and then the fetch

handleSearch = () => {
  const restaurantSearchUrl = `https://api.geoapify.com/v2/places?...`
  fetch(restaurantSearchUrl)
    .then(response => response.json())
    .then(result => this.setState({list: result}))
    .catch(e => console.log(e))
}

When I console.log the result instead of calling setState (i.e. result => console.log(result)), the log shows the fetch data without an issue. But when I use setState to set list to the result, it comes back undefined. How can I get the actual fetch data into state?
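Two hedged things worth checking, sketched below: setState is asynchronous, so reading this.state.list immediately after calling it will still show the old value (use the setState callback to log the new one), and the Geoapify Places API normally returns a GeoJSON FeatureCollection, in which case the array you want is result.features rather than result itself.

handleSearch = () => {
  const restaurantSearchUrl = `https://api.geoapify.com/v2/places?...`;  // same URL as above
  fetch(restaurantSearchUrl)
    .then(response => response.json())
    // if the response is a FeatureCollection, the list lives under .features
    .then(result => this.setState({ list: result.features || result }, () => {
      console.log(this.state.list);   // logged only after the state update has applied
    }))
    .catch(e => console.log(e));
};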

jenkins error: The NODE_ENV variable is not set. Defaulting to a blank string

Posted: 30 Aug 2021 08:16 AM PDT

I have a shell script written in Jenkins that reads secrets through Keybase. In the docker-compose YAML file I declare an environment variable that selects the branch (staging, production, dev), and I have a separate start.sh file for each of the 3 branches. When I build them I get the error in the title, and I am not able to access the service through Postman.

For example, start_prd.sh contains:

docker-compose stop
docker-compose build
NODE_ENV=production docker-compose up -d

and the docker-compose YAML file is:

version: "3"

services: gdsp-liftandlearn-service: restart: always container_name: gdsp-liftandlearn-service image: node:14 user: "node" networks: gdsp-net: ipv4_address: 172.27.0.10 working_dir: /home/node/app environment: - NODE_ENV=${NODE_ENV} volumes: - "/home/ubuntu/gdsp-liftandlearn-service:/home/node/app" expose: - "3001" command: "npm start"

networks: gdsp-net: external: true
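A hedged reading of the warning, assuming start_prd.sh contains exactly the three lines above: the NODE_ENV=production prefix applies only to the docker-compose up command, so the stop and build invocations see an unset NODE_ENV and compose falls back to a blank string during ${NODE_ENV} substitution. Exporting the variable for the whole script (or defining it in a .env file next to the compose file) avoids that.

#!/bin/sh
export NODE_ENV=production   # visible to every docker-compose call below

docker-compose stop
docker-compose build
docker-compose up -d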

How to add every first character from every column?

Posted: 30 Aug 2021 08:17 AM PDT

If I have, for example: nem ava ma

the output would be: nam eva ma

This is My code:

import java.util.ArrayList;
import java.util.Scanner;

class Message {

static void gridStr(String str)
{
    int l = str.length();
    int k = 0, row, column;
    row = (int) Math.floor(Math.sqrt(l));
    column = (int) Math.ceil(Math.sqrt(l));

    if (row * column < l)
    {
        row = column;
    }

    char s[][] = new char[row][column];

    for (int i = 0; i < row; i++)
    {
        for (int j = 0; j < column; j++)
        {
            if (k < str.length())
                s[i][j] = str.charAt(k);
            k++;
        }
    }

    for (int i = 0; i < row; i++)
    {
        for (int j = 0; j < column; j++)
        {
            if (s[i][j] == 0)
            {
                break;
            }
            System.out.print(s[i][j]);
        }
        System.out.println("");
    }

}

I will put main in a comment. So far I have only found answers that handle the first element of the string, not complete code for the whole matrix.
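If the goal is the column-wise reading shown in the example output, a small addition inside gridStr (kept in the style of the code above, reusing its s, row and column variables) is to traverse j first and i second:

// print the grid column by column: the first character of every row,
// then the second character of every row, and so on
for (int j = 0; j < column; j++)
{
    for (int i = 0; i < row; i++)
    {
        if (s[i][j] != 0)
        {
            System.out.print(s[i][j]);
        }
    }
}
System.out.println("");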

Issues accessing ClearDb MySql from Heroku hosted NodeJS/Express app

Posted: 30 Aug 2021 08:17 AM PDT

I'm in the process of getting my React/NodeJS/Express app with a MySql database up and running on Heroku.

I'm using ClearDb to host my MySql database in production.

I am using the mysql2 npm package to connect to mysql using connection pools.

I've successfully deployed the app, and can get my login screen to show up in the Heroku staging environment.

When the app tries to run the first query that hits the database, after 10 seconds, I see this message in the logs:

2021-08-30T15:04:44.867293+00:00 app[web.1]: executing DB query
2021-08-30T15:04:54.877125+00:00 app[web.1]: Error: connect ETIMEDOUT
2021-08-30T15:04:54.877132+00:00 app[web.1]:     at /app/server/common/db.js:27:19
2021-08-30T15:04:54.877133+00:00 app[web.1]:     at processTicksAndRejections (internal/process/task_queues.js:95:5)

The query is very simple and the database barely has any data in it; the query runs against a table with fewer than 20 rows.

Here is what I've tried:

  1. Connect my app running on my local machine to ClearDb's production instance. Works fine.

  2. Connect to ClearDb's production instance via MySql Workbench. Works fine.
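A hedged thing to check rather than a definitive fix: when local connections and Workbench work but the dyno times out, the pool is often not being built from the credentials Heroku injects for the add-on. A minimal sketch of building the mysql2 pool from the CLEARDB_DATABASE_URL config var (the pool size and timeout values are illustrative):

const mysql = require('mysql2');

// e.g. mysql://user:pass@host.cleardb.com/heroku_dbname?reconnect=true
const dbUrl = new URL(process.env.CLEARDB_DATABASE_URL);

const pool = mysql.createPool({
  host: dbUrl.hostname,
  user: dbUrl.username,
  password: dbUrl.password,
  database: dbUrl.pathname.slice(1),   // strip the leading '/'
  connectionLimit: 5,                  // ClearDB's lower tiers allow very few connections
  connectTimeout: 30000
});

module.exports = pool.promise();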

how to disable click method on nested div element

Posted: 30 Aug 2021 08:17 AM PDT

1 <div class="param_content" (click)="onParamClick()">
2   <div class="heading-warning">
3     <div >{{ _paDetail.name }}</div>
4      <div>
5        <img src="assets/att-im.svg" (click)="showInfo()">
6      </div>
7   </div>
8 </div>

In the code above, when I click on the image (line 5) I expect only showInfo() to be invoked, but onParamClick() executes as well. How can I prevent onParamClick() from running when the image is clicked?
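A minimal sketch of the usual Angular fix: stop the click event from bubbling up from the image to the outer div, either inline in the template or inside showInfo() itself.

<!-- template: pass the event so the handler can stop it from reaching the parent -->
<img src="assets/att-im.svg" (click)="showInfo($event)">

// component: stop propagation before the outer (click) handler can see the event
showInfo(event: MouseEvent): void {
  event.stopPropagation();
  // ... existing logic ...
}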

Assign value based on lookup dictionary in a multilevel column Pandas

Posted: 30 Aug 2021 08:17 AM PDT

The objective is to assign a value to the column Group based on comparing the value in column my_ch with the look-up dict dict_map.

dict_map is defined as

dict_map=dict(group_one=['B','D','GG','G'],group_two=['A','C','E','F'])  

Whereas the df is as below

first  my_ch       bar            ...       foo       qux
second             one       two  ...       two       one       two
0          A  0.037718  0.089609  ...  0.202885  0.706059 -2.280754
1          B  0.578452  0.039445  ... -0.153135  0.178715 -0.040345
2          C  2.139270  1.104547  ...  0.989953 -0.280724 -0.739488
3          D  0.733355  0.227912  ... -1.359441  0.761619 -1.119464
4          G -1.565185 -1.070280  ...  0.458847  1.072471  1.724417

This comparison should produce the output as below

first  Group      my_ch       bar            ...       foo       qux
second                        one       two  ...       two       one       two
0      group_two  A      0.037718  0.089609  ...  0.202885  0.706059 -2.280754
1      group_one  B      0.578452  0.039445  ... -0.153135  0.178715 -0.040345
2      group_two  C      2.139270  1.104547  ...  0.989953 -0.280724 -0.739488
3      group_one  D      0.733355  0.227912  ... -1.359441  0.761619 -1.119464
4      group_one  G     -1.565185 -1.070280  ...  0.458847  1.072471  1.724417

My impression is that this can be achieved simply via the line

df[('Group', slice ( None ))]=df.loc [:, ('my_ch', slice ( None ))].apply(lambda x: dict_map.get(x))  

However, this returns an error of

TypeError: unhashable type: 'Series'  

I'm thinking of converting the Series into a DataFrame to bypass this issue, but I wonder whether there is a more reasonable way of solving it.

The full code to reproduce the above error is

import pandas as pd
import numpy as np

dict_map = dict(group_one=['B','D','GG','G'], group_two=['A','C','E','F'])
arrays = [["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
          ["one", "two", "one", "two", "one", "two", "one", "two"]]
tuples = list(zip(*arrays))

index = pd.MultiIndex.from_tuples(tuples, names=["first", "second"])
df = pd.DataFrame(np.random.randn(5, 8), index=["A", "B", "C", "D", 'G'], columns=index)
df = df.rename_axis(index=['my_ch']).reset_index()
df[('Group', slice(None))] = df.loc[:, ('my_ch', slice(None))].apply(lambda x: dict_map.get(x))

Edit:

df['Group']=df['my_ch'].apply(lambda x: dict_map.get(x))  

produces a Group column full of None:

first  my_ch       bar                 baz  ...       foo       qux           Group
second             one       two       one  ...       two       one       two
0          A  1.220946  0.714748  0.053371  ... -1.743287  0.400862 -1.066441  None
1          B  0.606736  0.844995  0.579328  ... -0.472185  1.102245  0.454315  None
2          C  1.666148 -0.333102  1.950425  ... -0.021484  3.178110 -0.176937  None
3          D -0.673474  2.263407 -0.074996  ... -0.605594  1.410987 -1.253847  None
4          G  0.652557  2.271662 -0.569529  ... -0.549246 -0.021359 -0.532386  None
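A minimal sketch of one way to do this, assuming the goal is simply to label each row by which list in dict_map its my_ch value belongs to: invert the dict so each letter maps to its group name, then map the my_ch column (squeezed to a Series, since under the MultiIndex columns df['my_ch'] is a one-column DataFrame).

# invert {group: [letters]} into {letter: group}
inv_map = {ch: grp for grp, chans in dict_map.items() for ch in chans}

# squeeze() turns the single ('my_ch', '') column into a Series before mapping
df[('Group', '')] = df['my_ch'].squeeze().map(inv_map)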

Django on Heroku not uploading media to S3

Posted: 30 Aug 2021 08:17 AM PDT

I'm working on a project using Django 3 in which I want to upload static files to an S3 bucket using boto3.

Here's settings.py:

if USE_S3:
    # aws settings
    AWS_ACCESS_KEY_ID = os.getenv('AWS_ACCESS_KEY_ID')
    AWS_SECRET_ACCESS_KEY = os.getenv('AWS_SECRET_ACCESS_KEY')
    AWS_STORAGE_BUCKET_NAME = os.getenv('AWS_STORAGE_BUCKET_NAME')
    AWS_DEFAULT_ACL = 'public-read'
    AWS_S3_CUSTOM_DOMAIN = f'{AWS_STORAGE_BUCKET_NAME}.s3.amazonaws.com'
    AWS_S3_OBJECT_PARAMETERS = {'CacheControl': 'max-age=86400'}
    # s3 static settings
    AWS_LOCATION = 'static'
    STATIC_URL = f'https://{AWS_S3_CUSTOM_DOMAIN}/{AWS_LOCATION}/'
    STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
    # STATICFILES_STORAGE = 'django.contrib.staticfiles.storage.StaticFilesStorage'
else:
    STATIC_URL = '/assets/'
    STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')

MEDIA_URL = 'mediafiles/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'mediafiles')

django_on_heroku.settings(locals())
options = DATABASES['default'].get('OPTIONS', {})
options.pop('sslmode', None)
STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'

The site loads correctly, and during deployment collectstatic copied all files to the S3 bucket. But when I upload a file from the Django admin, it is not uploaded to S3. There was no mediafiles folder in the bucket, so I created that directory manually, but files are still not uploaded there.
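A minimal sketch of the usual django-storages setup for this (the module path and class name below are illustrative, not from the question): admin uploads go through DEFAULT_FILE_STORAGE, which is not set here, so they land on the dyno's ephemeral filesystem instead of S3.

# storage_backends.py (illustrative module)
from storages.backends.s3boto3 import S3Boto3Storage

class PublicMediaStorage(S3Boto3Storage):
    location = 'mediafiles'      # keeps uploads under the mediafiles/ prefix in the bucket
    file_overwrite = False

# settings.py
MEDIA_URL = f'https://{AWS_S3_CUSTOM_DOMAIN}/mediafiles/'
DEFAULT_FILE_STORAGE = 'myproject.storage_backends.PublicMediaStorage'  # adjust the dotted path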

Create a cron job for a python file to run on second day each month [closed]

Posted: 30 Aug 2021 08:16 AM PDT

Hello, I need to create a cron job for my Python script to run on the second day of each month. What crontab line do I need in order to do it?
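A minimal sketch of the crontab entry (the interpreter and script paths are illustrative): the five fields are minute, hour, day of month, month, and day of week, so a day-of-month value of 2 fires on the 2nd of every month. Add it with crontab -e.

# m h dom mon dow  command
0 0 2 * * /usr/bin/python3 /path/to/script.py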

how to index a column in MySQL within "CREATE TABLE" command

Posted: 30 Aug 2021 08:17 AM PDT

Is there a way to index the non-unique email column below within the CREATE TABLE command?

Instead of this:

CREATE TABLE addresses (
    phone_number VARCHAR(12) PRIMARY KEY,
    email TINYTEXT
);
ALTER TABLE addresses ADD INDEX (email);

something like this:

CREATE TABLE addresses (
    phone_number VARCHAR(12) PRIMARY KEY,
    email TINYTEXT INDEX
);
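For reference, MySQL does accept a secondary index inside CREATE TABLE; it is written as its own clause rather than as a column attribute, and a TEXT-family column such as TINYTEXT needs a prefix length in the key (the index name and prefix length below are illustrative):

CREATE TABLE addresses (
    phone_number VARCHAR(12) PRIMARY KEY,
    email TINYTEXT,
    INDEX idx_email (email(100))
);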

How to load xls file from an Amazon S3 bucket and convert to xlsx and save to Amazon S3

Posted: 30 Aug 2021 08:17 AM PDT

I am trying to load an xls file from an Amazon S3 bucket and convert it to xlsx. I tried:

import xlrd
from openpyxl.workbook import Workbook
from openpyxl.reader.excel import load_workbook, InvalidFileException

s3_client = boto3.client('s3', region_name='us-east-1')
obj = s3_client.get_object(Bucket=s3_bucket, Key=s3_key)
binary_data = obj['Body'].read()
workbook = xlrd.open_workbook(s3_client)

but got an error

expected str, bytes or os.PathLike object, not S3
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/xlrd/__init__.py", line 110, in open_workbook
    filename = os.path.expanduser(filename)
  File "/tmp/1630334916208-0/lib64/python3.6/posixpath.py", line 235, in expanduser
    path = os.fspath(path)
TypeError: expected str, bytes or os.PathLike object, not S3

I am on a Linux server, so I can't use win32com.client to do the conversion (if I'm right that it can only be installed on Windows).

I would really appreciate it if someone knows how to:

  1. read xls file from s3
  2. convert xls to xlsx and save in s3.

thank you!!
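A minimal sketch of one possible in-memory approach, assuming the .xls only needs its cell values carried over: xlrd can read bytes via its file_contents argument (the error above comes from passing the client object instead of the file data), openpyxl can write the new workbook into a BytesIO buffer, and put_object uploads it back to S3 without touching the local disk.

import io
import boto3
import xlrd
from openpyxl import Workbook

s3_client = boto3.client('s3', region_name='us-east-1')
obj = s3_client.get_object(Bucket=s3_bucket, Key=s3_key)

# read the .xls from bytes rather than from a path
book = xlrd.open_workbook(file_contents=obj['Body'].read())
sheet = book.sheet_by_index(0)

# copy the first sheet's values into a new .xlsx workbook
wb = Workbook()
ws = wb.active
for r in range(sheet.nrows):
    ws.append(sheet.row_values(r))

# save to an in-memory buffer and upload next to the original key
buf = io.BytesIO()
wb.save(buf)
buf.seek(0)
new_key = s3_key.rsplit('.', 1)[0] + '.xlsx'
s3_client.put_object(Bucket=s3_bucket, Key=new_key, Body=buf.getvalue())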

Can not run projects grey run button - Intellij

Posted: 30 Aug 2021 08:16 AM PDT

[IntelliJ screenshot]

I can't run anything; how do I fix the grey run button?

I've tried to add a new configuration but I can't add a main class.

Where does LocalStrategy's callback get its params, and how can a variable from an async function be made available in another function?

Posted: 30 Aug 2021 08:18 AM PDT

My code:

const LocalStrategy = require('passport-local').Strategy;
//called some hash modules

function initialize(passport, getUserByEmail, getUserById) {
  var user_copy;
  const authenticateUser = async(email, password, done) => {
    const userr = await getUserByEmail(email).then(function(userr) {
      return userr
    })

    user_copy = userr; //I can't reach userr, so I create a copy of it in an accessible variable.

    if (userr == null) {
      return done(null, false, {
        message: "No user with this email..."
      });
    }

    try {
      //hashing part
    } catch (e) {
      console.log(e);
      return done(e);
    }
  }

  passport.use(new LocalStrategy({
    usernameField: 'email',
    passwordField: 'password'
  }, authenticateUser));

  /////////////////////Problem/////////////////////
  passport.serializeUser((user_copy, done) => done(null, user_copy[0]._id)); //Here I get an error (on the line below too). It sees user_copy as undefined.
  passport.deserializeUser((id, done) => {
    return done(null, getUserById(user_copy[0]._id));
  })
}
/////////////////////////////////////////////////

module.exports = initialize

Actually, if I were able to call getUserByEmail outside of that scope there would be no problem, but I don't know where authenticateUser gets its email param from, so I tried to finish the job inside authenticateUser and couldn't find a solution.

Where I called initialize:

initializePassport(
    passport,
    async function email(email) {
        await client.connect();
        const db = client.db(dbName);
        const collection = db.collection(collectionName);

        found = await collection.find({ email: email }).toArray().then(user => { return user });

        client.close();

        return found;
    },
    async function id() {
        await client.connect();
        const db = client.db(dbName);
        const collection = db.collection(collectionName);

        found = await collection.find({ id: user.id }).toArray().then(user => { return user });

        client.close();

        return found;
    }
);
As you can see there is an email param, but where does it come from?

I just want to be able to access userr or any copy of it. I have tried many ways but couldn't find one that works. There is no problem with anything else; I've checked everything.
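A minimal sketch of the usual pattern, assuming getUserByEmail resolves to an array of matching users: the email argument comes from req.body[usernameField] on the login POST, and whatever you pass as the second argument to done() in authenticateUser is exactly the object passport later hands to serializeUser, so no outer user_copy variable is needed.

const authenticateUser = async (email, password, done) => {
  const users = await getUserByEmail(email);   // email = req.body.email from the login form
  if (!users || users.length === 0) {
    return done(null, false, { message: 'No user with this email...' });
  }
  // ... compare the password hash here ...
  return done(null, users[0]);                 // this object is what serializeUser receives
};

passport.serializeUser((user, done) => done(null, user._id));
passport.deserializeUser(async (id, done) => {
  const user = await getUserById(id);          // id is the value stored by serializeUser
  done(null, user);
});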

How to do an action when button is pressed

Posted: 30 Aug 2021 08:17 AM PDT

I am creating a login screen for my app. I created the buttons, but I don't know how to make my program do something when one of these buttons is pressed. Here is my code (I don't think it will be very useful):

LRESULT CALLBACK WndProc(HWND hwnd, UINT Message, WPARAM wParam, LPARAM lParam) {
    switch(Message) {

        case WM_CREATE:
            CreateWindow(TEXT("button"), TEXT("CONFIRM"), WS_VISIBLE | WS_CHILD, 355, 400, 95, 35, hwnd, NULL, NULL, NULL);
            CreateWindow(TEXT("button"), TEXT("SIGN UP"), WS_VISIBLE | WS_CHILD, 50, 400, 95, 35, hwnd, NULL, NULL, NULL);
            CreateWindow(TEXT("static"), TEXT("USERNAME:"), WS_VISIBLE | WS_CHILD, 50, 40, 80, 15, hwnd, NULL, NULL, NULL);
            CreateWindow(TEXT("static"), TEXT("EMAIL:"), WS_VISIBLE | WS_CHILD, 85, 100, 45, 15, hwnd, NULL, NULL, NULL);
            CreateWindow(TEXT("static"), TEXT("PASSWORD:"), WS_VISIBLE | WS_CHILD, 50, 160, 83, 15, hwnd, NULL, NULL, NULL);
            CreateWindow(TEXT("edit"), TEXT(""), WS_VISIBLE | WS_CHILD, 135, 40, 200, 17, hwnd, NULL, NULL, NULL);
            CreateWindow(TEXT("edit"), TEXT(""), WS_VISIBLE | WS_CHILD, 135, 100, 200, 17, hwnd, NULL, NULL, NULL);
            CreateWindow(TEXT("edit"), TEXT(""), WS_VISIBLE | WS_CHILD, 135, 160, 200, 17, hwnd, NULL, NULL, NULL);
            break;

        /* Upon destruction, tell the main thread to stop */
        case WM_DESTROY: {
            PostQuitMessage(0);
            break;
        }

        /* All other messages (a lot of them) are processed using default procedures */
        default:
            return DefWindowProc(hwnd, Message, wParam, lParam);
    }
    return 0;
}

/* The 'main' function of Win32 GUI programs: this is where execution starts */
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow) {
    WNDCLASSEX wc; /* A properties struct of our window */
    HWND hwnd; /* A 'HANDLE', hence the H, or a pointer to our window */
    MSG msg; /* A temporary location for all messages */

    /* zero out the struct and set the stuff we want to modify */
    memset(&wc, 0, sizeof(wc));
    wc.cbSize        = sizeof(WNDCLASSEX);
    wc.lpfnWndProc   = WndProc; /* This is where we will send messages to */
    wc.hInstance     = hInstance;
    wc.hCursor       = LoadCursor(NULL, IDC_ARROW);

    /* White, COLOR_WINDOW is just a #define for a system color, try Ctrl+Clicking it */
    wc.hbrBackground = (HBRUSH)(COLOR_WINDOW+1);
    wc.lpszClassName = "WindowClass";
    wc.hIcon         = LoadIcon(NULL, "XBM LOGO.png"); /* Load a standard icon */
    wc.hIconSm       = LoadIcon(NULL, "A"); /* use the name "A" to use the project icon */

    if(!RegisterClassEx(&wc)) {
        MessageBox(NULL, "Window Registration Failed!", "Error!", MB_ICONEXCLAMATION | MB_OK);
        return 0;
    }

    hwnd = CreateWindowEx(WS_EX_CLIENTEDGE, "WindowClass", "Verification", WS_VISIBLE | WS_OVERLAPPEDWINDOW,
        400, /* x */
        75,  /* y */
        500, /* width */
        600, /* height */
        NULL, NULL, hInstance, NULL);

    if(hwnd == NULL) {
        MessageBox(NULL, "Window Creation Failed!", "Error!", MB_ICONEXCLAMATION | MB_OK);
        return 0;
    }

    /*
        This is the heart of our program where all input is processed and
        sent to WndProc. Note that GetMessage blocks code flow until it receives something, so
        this loop will not produce unreasonably high CPU usage
    */
    while(GetMessage(&msg, NULL, 0, 0) > 0) { /* If no error is received... */
        TranslateMessage(&msg); /* Translate key codes to chars if present */
        DispatchMessage(&msg); /* Send it to WndProc */
    }
    return msg.wParam;
}

NOTE: windows.h and iostream are included, but no other libraries. Thank you :)
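A minimal sketch of the standard Win32 approach (the control ID values are arbitrary ones made up here): give each button an ID through the HMENU parameter of CreateWindow, then handle WM_COMMAND in WndProc and check which ID sent the click notification.

#define IDC_BTN_CONFIRM 101
#define IDC_BTN_SIGNUP  102

/* in WM_CREATE: pass the ID where NULL is currently passed for the hMenu argument */
CreateWindow(TEXT("button"), TEXT("CONFIRM"), WS_VISIBLE | WS_CHILD,
             355, 400, 95, 35, hwnd, (HMENU)IDC_BTN_CONFIRM, NULL, NULL);

/* in WndProc's switch: */
case WM_COMMAND:
    if (LOWORD(wParam) == IDC_BTN_CONFIRM && HIWORD(wParam) == BN_CLICKED) {
        MessageBox(hwnd, "Confirm pressed!", "Info", MB_OK);   /* do the real work here */
    }
    break;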

How to describe a model with a dynamic number of repeated objects inside

Posted: 30 Aug 2021 08:18 AM PDT

I need to serialize and send several identical objects, but with key names that depend on how many of them there are:

{    "object1": {      "name": "random",      "pass": true    },    "object2": {      "name": "random",      "pass": false    },    "object3": {      "name": "random",      "pass": true    }  }  

I use Lombok @Builder plus Jackson @JsonProperty to describe the body models being sent, but I have no idea how to handle the case where the same object may be added several times, with a numerically increasing key name, without duplicating code as in the example below.

@Builder
public class Sample {

    @JsonProperty("object1")
    private RandomObject object1;
    @JsonProperty("object2")
    private RandomObject object2;
    @JsonProperty("object3")
    private RandomObject object3;

}
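A minimal sketch of one way to avoid the duplicated fields, assuming the "objectN" keys simply follow insertion order: keep the objects in a list and expose them through Jackson's @JsonAnyGetter, which serializes a map's entries as top-level JSON properties. Building the payload then looks like Sample.builder().object(a).object(b).object(c).build().

import com.fasterxml.jackson.annotation.JsonAnyGetter;
import lombok.Builder;
import lombok.Singular;

import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

@Builder
public class Sample {

    @Singular("object")
    private List<RandomObject> objects;

    @JsonAnyGetter
    public Map<String, RandomObject> asProperties() {
        Map<String, RandomObject> out = new LinkedHashMap<>();
        for (int i = 0; i < objects.size(); i++) {
            out.put("object" + (i + 1), objects.get(i));   // object1, object2, ...
        }
        return out;
    }
}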

Solr Underlying core creation failed while creating collection

Posted: 30 Aug 2021 08:17 AM PDT

I get "Underlying core creation failed while creating collection: workbrainJob" when trying to create a collection on Solr 8.9.0 with Java 11:

Collection: workbrainJob operation: create failed:org.apache.solr.common.SolrException: Underlying core creation failed while creating collection: workbrainJob
  at org.apache.solr.cloud.api.collections.CreateCollectionCmd.call(CreateCollectionCmd.java:371)
  at org.apache.solr.cloud.api.collections.OverseerCollectionMessageHandler.processMessage(OverseerCollectionMessageHandler.java:270)
  at org.apache.solr.cloud.OverseerTaskProcessor$Runner.run(OverseerTaskProcessor.java:524)
  at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:218)
  at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
  at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
  at java.base/java.lang.Thread.run(Thread.java:834)

Edited:

org.apache.solr.common.SolrException: Error CREATEing SolrCore 'workbrainJob_shard1_replica_n1': Unable to create core [workbrainJob_shard1_replica_n1]
Caused by: The configset for this collection was uploaded without any authentication in place, and use of <lib> is not available for collections with untrusted configsets. To use this component, re-upload the configset after enabling authentication and authorization.
  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1358)
  at
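A hedged note rather than a confirmed fix: the "Caused by" line says the configset is treated as untrusted. One path, if the <lib> directives aren't actually needed, is to remove them from solrconfig.xml and re-upload the configset (the configset name, path and ZooKeeper address below are illustrative); otherwise enable authentication first and re-upload, as the message itself suggests.

# re-upload the edited configset to ZooKeeper, then retry creating the collection
bin/solr zk upconfig -n workbrainJob_config -d /path/to/configset -z localhost:9983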

What does !! operator mean in R

Posted: 30 Aug 2021 08:17 AM PDT

Can anybody please explain what the !!, !!! and {{ }} operators from rlang are for? I tried to learn more about quasiquotation but did not get anywhere.

I've read several Stack Overflow posts on the curly-curly operator and understood that we use {{ when passing a dataframe's variables (or other sub-objects of our objects) into a function. But after reading about quote/unquote I was completely confused about all of these operators and their usage.

Why do we need them, why do some functions not read arguments without them, and how do they actually work?

I would appreciate an answer put in the simplest possible way (maybe with examples?).
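A small illustration of the idea, using dplyr since that is where these operators usually show up: dplyr verbs quote their arguments, so a bare column name that arrives through a function argument has to be unquoted again inside the verb, and {{ }} (or !! together with sym()) is how that is done; !!! splices a whole list of such arguments at once.

library(dplyr)
library(rlang)

# {{ }}: pass a bare column name straight through to a quoting verb
my_mean <- function(data, col) {
  data %>% summarise(avg = mean({{ col }}))
}
my_mean(mtcars, mpg)

# !! + sym(): build a symbol from a string, then unquote it
col_name <- "mpg"
mtcars %>% summarise(avg = mean(!!sym(col_name)))

# !!!: splice several arguments at once
groups <- syms(c("cyl", "gear"))
mtcars %>% group_by(!!!groups) %>% summarise(n = n())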

Is there a shorthand for selecting only one dataframe's columns after a join?

Posted: 30 Aug 2021 08:17 AM PDT

I'm working in scala with a dataframe, but the dataframe has ~60 columns.

In a Databricks pipeline, we've split a few columns out along with an identity column to validate some data, resulting in a 'reference' dataframe. I'd like to join it back to the main, large dataframe, and insert the validated data into the original column.

To keep things simple, I'd like the resultant dataframe to match the schema of the original, so none of the reference columns.

On a small scale, this isn't too hard:

myDF = myDF
  .join(refDF,
        myDF("Identity") === refDF("RefIdentity"),
        "inner")
  .withColumn("Foo", $"refFoo")
  .select("Identity", "Foo", "Column2", "Column3"...)

This turns into a huge pain with large numbers of columns. Is there a quicker way to select only the columns from myDF after the withColumn operation?
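A minimal sketch, assuming myDF's column names stay unambiguous after the join: DataFrame.columns gives all of the original names, so the select list can be rebuilt from it instead of being typed out for ~60 columns.

import org.apache.spark.sql.functions.col

val result = myDF
  .join(refDF, myDF("Identity") === refDF("RefIdentity"), "inner")
  .withColumn("Foo", $"refFoo")               // overwrite Foo with the validated values
  .select(myDF.columns.map(col): _*)          // keep only the original schema's columns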

How can I capture the output of Perl string eval?

Posted: 30 Aug 2021 08:16 AM PDT

I have a piece of code that is built up through string processing, and I am trying to run eval on it and capture its output into another string. The code is assembled in several steps, which means it is not present in the body of the Perl program in one piece; this is why I am using eval.

I tried Capture::Tiny with capture, capture_merged, and capture_stdout, and also Safe with Safe->new->reval().

use strict;
use warnings;
use Capture::Tiny 'capture';
use Capture::Tiny 'capture_stdout';
use Capture::Tiny 'capture_merged';

# $num_loops is defined inside main code body.
my $num_loops = 10;

# In reality, new code is formed through a series of steps.
my $new_code = "foreach \$idx (0..\$num_loops-1) {print \"I am at iteration number \$idx\n\";}";

my $out = capture_merged {eval $new_code;};
print $out;

I'm using Perl 5.30.

The code above isn't printing anything out, nor does it print any error in the log. I have added intermediate prints in the original code to make sure the string passed to eval contains real code, so that isn't the issue.

What is the issue in the code above?
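A hedged guess at the cause, based only on the snippet above: under use strict, the generated code never declares $idx, so the string eval dies at compile time and the failure goes unnoticed because $@ is never checked. A minimal sketch with both points addressed:

# build the code with a declared loop variable; single quotes keep $num_loops
# from being interpolated while the string is assembled
my $new_code = 'foreach my $idx (0..$num_loops-1) { print "I am at iteration number $idx\n"; }';

my $out = capture_merged {
    eval $new_code;
    warn "eval failed: $@" if $@;   # capture_merged also collects this STDERR line
};
print $out;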

Why do I get the error "Expected float32, got 'auto' of type 'str' instead" when evaluating my model with two losses and two targets?

Posted: 30 Aug 2021 08:17 AM PDT

I have the below custom loss function:

def custom_loss(q_k):
    def loss(y_true, y_pred):
        return y_true * tf.math.log(y_pred + q_k)
    return loss

q_k and y_true are tensors with the shape TensorShape([17105, 19]).

My Model is as below:

from tensorflow import keras

model = Sequential()
inputs = keras.Input(shape=(19,))
layer1 = Dense(38, activation='relu')(inputs)
layer2 = Dense(64, activation='relu')(layer1)
layer3 = Dense(38, activation='relu')(layer2)

outputs1 = keras.layers.Dense(19, activation='softmax', name='loss1')(layer3)
model = keras.Model(inputs=[inputs], outputs=outputs1, name='taget_distr')
model.compile(loss=[custom_loss(q_k), tf.keras.losses.MeanSquaredError],
              metrics=['accuracy'], optimizer='adam')

Then I fit the model:

model.fit(X, [y1, y2], epochs=50, batch_size=len(X_train))

X is my training data and y1 and y2 are the targets; both are tensors.

What I want is one output that is evaluated against two targets with two losses, but I get the error below:

/usr/local/lib/python3.7/dist-packages/keras/engine/training.py:853 train_function  *
    return step_function(self, iterator)
/usr/local/lib/python3.7/dist-packages/keras/engine/training.py:842 step_function  **
    outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:1286 run
    return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:2849 call_for_each_replica
    return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:3632 _call_for_each_replica
    return fn(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/keras/engine/training.py:835 run_step  **
    outputs = model.train_step(data)
/usr/local/lib/python3.7/dist-packages/keras/engine/training.py:789 train_step
    y, y_pred, sample_weight, regularization_losses=self.losses)
/usr/local/lib/python3.7/dist-packages/keras/engine/compile_utils.py:201 __call__
    loss_value = loss_obj(y_t, y_p, sample_weight=sw)
/usr/local/lib/python3.7/dist-packages/keras/losses.py:141 __call__
    losses = call_fn(y_true, y_pred)
/usr/local/lib/python3.7/dist-packages/keras/losses.py:245 call  **
    return ag_fn(y_true, y_pred, **self._fn_kwargs)
/usr/local/lib/python3.7/dist-packages/keras/losses.py:310 __init__  **
    super().__init__(mean_squared_error, name=name, reduction=reduction)
/usr/local/lib/python3.7/dist-packages/keras/losses.py:227 __init__
    super().__init__(reduction=reduction, name=name)
/usr/local/lib/python3.7/dist-packages/keras/losses.py:88 __init__
    losses_utils.ReductionV2.validate(reduction)
/usr/local/lib/python3.7/dist-packages/keras/utils/losses_utils.py:82 validate
    if key not in cls.all():
/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:206 wrapper
    return target(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/math_ops.py:1935 tensor_equals
    self, other = maybe_promote_tensors(self, other)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/math_ops.py:1335 maybe_promote_tensors
    ops.convert_to_tensor(tensor, dtype, name="x"))
/usr/local/lib/python3.7/dist-packages/tensorflow/python/profiler/trace.py:163 wrapped
    return func(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py:1566 convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/constant_op.py:346 _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/constant_op.py:272 constant
    allow_broadcast=True)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/constant_op.py:290 _constant_impl
    allow_broadcast=allow_broadcast))
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/tensor_util.py:457 make_tensor_proto
    _AssertCompatible(values, dtype)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/tensor_util.py:337 _AssertCompatible
    (dtype.name, repr(mismatch), type(mismatch).__name__))

TypeError: Expected float32, got 'auto' of type 'str' instead.
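A hedged reading of the traceback: the list passed to loss= contains the MeanSquaredError class itself rather than an instance, so Keras ends up calling the class with (y_true, y_pred), and the default reduction string 'auto' gets compared against a tensor. A minimal sketch of the compile call with an instantiated loss; note also that a list of losses is matched one-to-one with model outputs, so a single-output model normally needs either a second output or one combined loss.

model.compile(
    loss=[custom_loss(q_k), tf.keras.losses.MeanSquaredError()],   # note the ()
    metrics=['accuracy'],
    optimizer='adam',
)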

RxJava 2.x Observe BehaviorProcessor as Publisher

Posted: 30 Aug 2021 08:18 AM PDT

Good day. I have a question about how to observe a hot publisher with an observer that runs long processing steps, using RxJava 2.x. My telemetry source is an MQTT client; it has a complex structure with two Flowables that are combined with combineLatest and end in a BehaviorProcessor. Now I need to observe this BehaviorProcessor on a separate thread, without any way of interfering with it. I need to implement some backpressure control that logs skipped elements and does the long work only on the elements the new flow is able to process. Can someone advise me how this can be achieved?
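A minimal sketch of one way to wire this up (the processor name, logging and the long-running work are placeholders): onBackpressureDrop gives a callback for every element the slow consumer misses, and observeOn with a small buffer moves the heavy processing onto its own thread without touching the upstream BehaviorProcessor.

import io.reactivex.schedulers.Schedulers;

telemetryProcessor                                        // the existing BehaviorProcessor
    .onBackpressureDrop(dropped ->
        System.out.println("skipped element: " + dropped))   // log what could not be processed
    .observeOn(Schedulers.single(), false, 1)             // separate thread, buffer of 1
    .subscribe(
        item -> doLongRunningWork(item),                  // placeholder for the slow processing
        Throwable::printStackTrace);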

Combine two columns data into one column leaving null values

Posted: 30 Aug 2021 08:17 AM PDT

I have a table with four columns, like the one below.

The table subscription has data like this:

part_id     subscription   policylicense    enterpriselic
part1       sub1           null             null
part2       sub1           pl1              null
part3       sub1           null             enterpr1

I would like to get the data like this below

part_id     subscription   license
part2       sub1           pl1
part3       sub1           enterpr1

How do I get the combined license data into one column, leaving out the rows where both values are null, from the same table? I am using SQL Server.

Could anyone please help with this? I would be very grateful. Many thanks in advance.
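A minimal sketch in T-SQL, using the table and column names from the question: COALESCE returns the first non-null of its arguments, and the WHERE clause drops the rows where both license columns are null.

SELECT part_id,
       subscription,
       COALESCE(policylicense, enterpriselic) AS license
FROM   subscription
WHERE  COALESCE(policylicense, enterpriselic) IS NOT NULL;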
