Friday, April 29, 2022

Recent Questions - Stack Overflow



Vaex expression to select all rows

Posted: 29 Apr 2022 12:26 PM PDT

In Vaex, what expression can be used as a filter to select all rows? I wish to create a filter as a variable and pass that to a function.

filter = True
if x > 5:
    filter = y > 20
df['new_col'] = filter & z < 10

My intent is that if x <= 5 the y condition is skipped (hence trying to use True as the value). Doing it this way gives the error: 'bitwise_and' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''. What expression will select all rows?
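One thing worth checking regardless of Vaex: in Python, `&` binds more tightly than comparison operators, so `filter & z < 10` parses as `(filter & z) < 10`, which is one way to end up with the `bitwise_and` complaint about mixed types. A minimal NumPy sketch of the intended logic, with the comparisons parenthesized (the scalar and arrays here are made-up stand-ins for the Vaex columns):

```python
import numpy as np

# Hypothetical stand-ins for the Vaex columns in the question
x = 7
y = np.array([10, 25, 30])
z = np.array([5, 8, 12])

base = z < 10               # the always-applied condition
if x > 5:
    mask = (y > 20) & base  # parentheses matter: & binds tighter than >
else:
    mask = base             # "all rows" with respect to the y condition
```

Building the mask conditionally in Python, rather than folding a literal `True` into the expression, sidesteps the need for an "all rows" expression altogether.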

Import data everytime button is pressed

Posted: 29 Apr 2022 12:26 PM PDT

I have one sheet called "Calculator" and another called "Data". I need a way (a button, I suppose) to export the values (numbers) from four cells (A1, B1, C1, D1) of the "Calculator" sheet into the "Data" sheet, filling A1:D1 and then the rows below it each time the button is pressed.

Can someone please help?

Import issues and conflicts with create-react-app + eslint + typescript

Posted: 29 Apr 2022 12:26 PM PDT

I'm getting a lot of configuration side effects in ESLint, in a React project started with create-react-app and the TypeScript template.

What is the CORRECT way to fix these problems? I have the same configuration in another project, with an earlier version of create-react-app, and it doesn't have these problems.

Errors:

C:\src\App.test.tsx
  2:17  error  "src/App" is not found  node/no-missing-import

C:\src\aee\index.tsx
  0:0  error  Parsing error: "parserOptions.project" has been set for @typescript-eslint/parser.
  The file does not match your project config: src\aee\index.tsx.
  The file must be included in at least one of the projects provided

C:\src\index.tsx
  0:0  error  Parsing error: "parserOptions.project" has been set for @typescript-eslint/parser.
  The file does not match your project config: src\index.tsx.
  The file must be included in at least one of the projects provided

TSCONFIG.JSON

{
    "root": true,
    "env": {
        "browser": true,
        "es2021": true,
        "node": true,
        "jest": true
    },
    "settings": {
        "react": { "version": "detect" }
    },
    "overrides": [
        {
            "files": ["src/**/*.ts", "src/**/*.tsx"],
            "parser": "@typescript-eslint/parser",
            "parserOptions": {
                "project": "./tsconfig.json",
                "ecmaVersion": "latest",
                "sourceType": "module",
                "ecmaFeatures": { "jsx": true }
            },
            "plugins": ["react", "react-hooks", "prettier", "jest", "jest-dom", "jsx-a11y", "import", "testing-library", "@typescript-eslint"],
            "extends": [
                "airbnb-typescript",
                "eslint:recommended",
                "stylelint",
                "plugin:react/recommended",
                "plugin:react-hooks/recommended",
                "plugin:@typescript-eslint/recommended",
                "plugin:@typescript-eslint/eslint-recommended",
                "plugin:@typescript-eslint/recommended-requiring-type-checking",
                "plugin:jest/recommended",
                "plugin:jest-dom/recommended",
                "plugin:testing-library/react",
                "plugin:jsx-a11y/recommended",
                "plugin:prettier/recommended"
            ],
            "rules": {
                "react/react-in-jsx-scope": "off",
                "react/jsx-filename-extension": ["error", { "extensions": [".ts", ".tsx"] }],
                "react/jsx-props-no-spreading": "off",
                "react/jsx-uses-react": "off",
                "prettier/prettier": [
                    "error",
                    {},
                    {
                        "usePrettierrc": true,
                        "fileInfoOptions": { "withNodeModules": true }
                    }
                ]
            }
        }
    ]
}

ESLINTRC

{
    "extends": [
        "airbnb",
        "airbnb-typescript",
        "stylelint",
        "eslint:recommended",
        "eslint-config-prettier",
        "plugin:react/recommended",
        "plugin:react-hooks/recommended",
        "plugin:@typescript-eslint/eslint-recommended",
        "plugin:@typescript-eslint/recommended",
        "plugin:@typescript-eslint/recommended-requiring-type-checking",
        "plugin:prettier/recommended",
        "plugin:markdown/recommended"
    ],
    "plugins": ["react", "react-hooks", "eslint-plugin-prettier", "@typescript-eslint"],
    "parser": "@typescript-eslint/parser",
    "parserOptions": {
        "project": "./tsconfig.json",
        "ecmaVersion": 13,
        "sourceType": "module",
        "ecmaFeatures": { "jsx": true }
    },
    "env": {
        "browser": true,
        "es2021": true,
        "node": true,
        "jest": true
    },
    "settings": {
        "react": { "version": "detect" },
        "jest": { "version": "detect" }
    },
    "rules": {
        "react/jsx-filename-extension": ["error", { "extensions": [".ts", ".tsx"] }],
        "react/react-in-jsx-scope": "off",
        "react/jsx-props-no-spreading": "off",
        "prettier/prettier": [
            "error",
            {},
            {
                "usePrettierrc": true,
                "fileInfoOptions": { "withNodeModules": true }
            }
        ]
    }
}
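The "must be included in at least one of the projects provided" errors typically mean the linted file is outside the `include` set of the tsconfig.json named in `parserOptions.project`. A common remedy (sketched here; the paths are assumptions about this project's layout) is a dedicated `tsconfig.eslint.json` that widens `include`, with `parserOptions.project` pointed at it:

```json
{
    "extends": "./tsconfig.json",
    "include": ["src/**/*.ts", "src/**/*.tsx"]
}
```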

Is there a way to find exactly where the error is taking place?

Posted: 29 Apr 2022 12:25 PM PDT


Is there a way to deduce where (which line of which component) this error is taking place, from the data given?

ServerSelectionError when connecting to localhost in Mongoose?

Posted: 29 Apr 2022 12:25 PM PDT

Hi, I tried everything, like mongoose.connect('mongodb://localhost/blog'), but I am not able to connect Mongoose to Node. Here is my code:

const express = require('express')
const mongoose = require('mongoose')
const articleRouter = require('./routes/articles')
const app = express()

mongoose.connect('mongodb://localhost:27017/blog')

app.set('view engine', 'ejs')

Here is the error:

/home/vishwajeet/webdeve/MARKDOWN-BLOG/node_modules/mongoose/lib/connection.js:807
      const serverSelectionError = new ServerSelectionError();
      ^

MongooseServerSelectionError: connect ECONNREFUSED 127.0.0.1:27017
    at NativeConnection.Connection.openUri (/home/vishwajeet/webdeve/MARKDOWN-BLOG/node_modules/mongoose/lib/connection.js:807:32)
    at /home/vishwajeet/webdeve/MARKDOWN-BLOG/node_modules/mongoose/lib/index.js:342:10
    at /home/vishwajeet/webdeve/MARKDOWN-BLOG/node_modules/mongoose/lib/helpers/promiseOrCallback.js:32:5
    at new Promise ()
    at promiseOrCallback (/home/vishwajeet/webdeve/MARKDOWN-BLOG/node_modules/mongoose/lib/helpers/promiseOrCallback.js:31:10)
    at Mongoose._promiseOrCallback (/home/vishwajeet/webdeve/MARKDOWN-BLOG/node_modules/mongoose/lib/index.js:1181:10)
    at Mongoose.connect (/home/vishwajeet/webdeve/MARKDOWN-BLOG/node_modules/mongoose/lib/index.js:341:20)
    at Object. (/home/vishwajeet/webdeve/MARKDOWN-BLOG/server.js:8:10)
    at Module._compile (node:internal/modules/cjs/loader:1103:14)
    at Object.Module._extensions..js (node:internal/modules/cjs/loader:1157:10) {
  reason: TopologyDescription {
    type: 'Unknown',
    servers: Map(1) {
      'localhost:27017' => ServerDescription {
        _hostAddress: HostAddress { isIPv6: false, host: 'localhost', port: 27017 },
        address: 'localhost:27017',
        type: 'Unknown',
        hosts: [],
        passives: [],
        arbiters: [],
        tags: {},
        minWireVersion: 0,
        maxWireVersion: 0,
        roundTripTime: -1,
        lastUpdateTime: 774709,
        lastWriteDate: 0,
        error: MongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017
            at connectionFailureError (/home/vishwajeet/webdeve/MARKDOWN-BLOG/node_modules/mongodb/lib/cmap/connect.js:375:20)
            at Socket. (/home/vishwajeet/webdeve/MARKDOWN-BLOG/node_modules/mongodb/lib/cmap/connect.js:295:22)
            at Object.onceWrapper (node:events:646:26)
            at Socket.emit (node:events:526:28)
            at emitErrorNT (node:internal/streams/destroy:157:8)
            at emitErrorCloseNT (node:internal/streams/destroy:122:3)
            at processTicksAndRejections (node:internal/process/task_queues:83:21) {
          [Symbol(errorLabels)]: Set(0) {}
        }
      }
    },
    stale: false,
    compatible: true,
    heartbeatFrequencyMS: 10000,
    localThresholdMS: 15,
    logicalSessionTimeoutMinutes: undefined
  },
  code: undefined
}

How to route two machines in different local networks?

Posted: 29 Apr 2022 12:25 PM PDT

I cannot connect from my Ubuntu server (10.10.57.5) back to my personal PC (10.15.0.4) which is on a different local network.

The server has the following ifconfig:

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9001
        inet 10.10.57.5  netmask 255.255.255.0  broadcast 10.10.57.255
        inet6 fe80::9b:3eff:feb0:d6a2  prefixlen 64  scopeid 0x20<link>
        ether 02:9b:3e:b0:d6:a2  txqueuelen 1000  (Ethernet)
        RX packets 10377  bytes 1620625 (1.6 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 10810  bytes 1148431 (1.1 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9001
        inet 10.10.20.5  netmask 255.255.255.0  broadcast 10.10.20.255
        inet6 fe80::21:19ff:fe16:76c0  prefixlen 64  scopeid 0x20<link>
        ether 02:21:19:16:76:c0  txqueuelen 1000  (Ethernet)
        RX packets 3319  bytes 181799 (181.7 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3332  bytes 220914 (220.9 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 213405  bytes 116842426 (116.8 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 213405  bytes 116842426 (116.8 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

I connect to it through 10.10.57.5, and 10.10.20.0/24 is another network that cannot be reached externally.

This is the route info:

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         _gateway        0.0.0.0         UG    100    0        0 eth0
10.10.20.0      0.0.0.0         255.255.255.0   U     0      0        0 eth1
10.10.57.0      0.0.0.0         255.255.255.0   U     0      0        0 eth0
_gateway        0.0.0.0         255.255.255.255 UH    100    0        0 eth0

If I try to ping my local machine from the server, it predictably gets no replies, as the address is outside the server's networks:

$ ping -c3 10.15.0.4
PING 10.15.0.4 (10.15.0.4) 56(84) bytes of data.

--- 10.15.0.4 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2048ms

Then, I tried to configure the network like this:

ifconfig lo:100 10.8.0.1 netmask 255.255.255.0 up  

And suddenly, the ping command works:

$ ping 10.15.0.4
PING 10.15.0.4 (10.15.0.4) 56(84) bytes of data.
64 bytes from 10.15.0.4: icmp_seq=1 ttl=64 time=0.018 ms
64 bytes from 10.15.0.4: icmp_seq=2 ttl=64 time=0.036 ms
64 bytes from 10.15.0.4: icmp_seq=3 ttl=64 time=0.035 ms
64 bytes from 10.15.0.4: icmp_seq=4 ttl=64 time=0.034 ms
^C
--- 10.15.0.4 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3079ms

However, I cannot see anything in tcpdump on my local machine:

root@zldes:/tmp# tcpdump -i tun0 icmp
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on tun0, link-type RAW (Raw IP), snapshot length 262144 bytes

Finally, if I host a server on my local machine:

root@zldes:~/tmp# python -m http.server 9090
Serving HTTP on 0.0.0.0 port 9090 (http://0.0.0.0:9090/) ...

It cannot be reached from the server:

$ wget http://10.15.0.4:9090/users.txt
--2022-04-22 10:23:02--  http://10.15.0.4:9090/test.txt
Connecting to 10.15.0.4:9090... failed: Connection refused.

I also tried to add a new route, without success:

$ route add -net 10.15.0.0 netmask 255.255.255.0 gw 10.15.0.254
SIOCADDRT: Network is unreachable

What am I missing?

In Python, if there is a duplicate, use the date column to choose which duplicate to use

Posted: 29 Apr 2022 12:25 PM PDT

I have code that runs 16 test cases against a CSV, checking for anomalies from poor data entry. A new column, 'Test case failed,' is created. A number corresponding to which test it failed is added to this column when a row fails a test. These failed rows are separated from the passed rows; then, they are sent back to be corrected before they are uploaded into a database.

There are duplicates in my data, and I would like to add code to check for duplicates, then decide what field to use based on the date, selecting the most updated fields.

Here is my data with two duplicate IDs, with the first row having the most recent Address while the second row has the most recent name.

ID   MnLast  MnFist  MnDead?  MnInactive?  SpLast  SpFirst  SPInactive?  SpDead  Addee         Sal       Address    NameChanged  AddrChange
123  Doe     John    No       No           Doe     Jane     No           No      Mr. John Doe  Mr. John  123 place  05/01/2022   11/22/2022
123  Doe     Dan     No       No           Doe     Jane     No           No      Mr. John Doe  Mr. John  789 road   11/01/2022   05/06/2022

Here is a snippet of my code showing the 5th testcase, which checks for the following: Record has Name information, Spouse has name information, no one is marked deceased, but Addressee or salutation doesn't have "&" or "AND." Addressee or salutation needs to be corrected; this record is married.

import pandas as pd
import numpy as np

data = pd.read_csv("C:/Users/file.csv", encoding='latin-1')

# Create array to store which test number the row failed
data['Test Case Failed'] = ''
data = data.replace(np.nan, '', regex=True)
data.insert(0, 'ID', range(0, len(data)))

# There are several test cases, but they function primarily the same
# Testcase 1
# Testcase 2
# Testcase 3
# Testcase 4

# Testcase 5 - comparing strings in columns
df = data[((data['FirstName'] != '') & (data['LastName'] != '')) &
          ((data['SRFirstName'] != '') & (data['SRLastName'] != '') &
           (data['SRDeceased'].str.contains('Yes') == False) &
           (data['Deceased'].str.contains('Yes') == False))]
df1 = df[df['PrimAddText'].str.contains("AND|&") == False]
data_5 = df1[df1['PrimSalText'].str.contains("AND|&") == False]
ids = data_5.index.tolist()

# Assign 5 for each failed
for i in ids:
    data.at[i, 'Test Case Failed'] += ', 5'

# Failed if column 'Test Case Failed' is not empty, Passed if empty
failed = data[(data['Test Case Failed'] != '')]
passed = data[(data['Test Case Failed'] == '')]

failed['Test Case Failed'] = failed['Test Case Failed'].str[1:]
failed = failed[(failed['Test Case Failed'] != '')]

# Clean up
del failed["ID"]
del passed["ID"]

failed['Test Case Failed'].value_counts()

# Print to console
print("There was a total of", data.shape[0], "rows.",
      "There was", data.shape[0] - failed.shape[0], "rows passed and",
      failed.shape[0], "rows failed at least one test case")

# Output two files
failed.to_csv("C:/Users/Failed.csv", index=False)
passed.to_csv("C:/Users/Passed.csv", index=False)

What is the best approach to check for duplicates, choose the most updated fields, drop the outdated fields/row, and perform my test?
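One possible approach (a sketch, not the only way): parse the two date columns, then for each duplicated ID take the name fields from the row with the latest NameChanged and the address fields from the row with the latest AddrChange. The mini-frame below is a hypothetical reduction of the question's columns:

```python
import pandas as pd

df = pd.DataFrame({
    "ID": [123, 123],
    "MnFist": ["John", "Dan"],
    "Address": ["123 place", "789 road"],
    "NameChanged": ["05/01/2022", "11/01/2022"],
    "AddrChange": ["11/22/2022", "05/06/2022"],
})
for col in ("NameChanged", "AddrChange"):
    df[col] = pd.to_datetime(df[col])

def most_recent(group):
    # Start from the row with the newest name change, then overwrite
    # the address fields from the row with the newest address change.
    merged = group.loc[group["NameChanged"].idxmax()].copy()
    addr_row = group.loc[group["AddrChange"].idxmax()]
    merged[["Address", "AddrChange"]] = addr_row[["Address", "AddrChange"]]
    return merged

deduped = pd.DataFrame([most_recent(g) for _, g in df.groupby("ID")])
deduped = deduped.reset_index(drop=True)
```

IDs that appear only once pass through unchanged (idxmax of a one-row group is that row), so this could run before the test cases so they only see reconciled records.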

Python kivy RecursionError: maximum recursion depth exceeded in comparison

Posted: 29 Apr 2022 12:24 PM PDT

I am trying to create a simple app with Kivy in Python, but when I run this code I get the following error: RecursionError: maximum recursion depth exceeded in comparison

import wikipedia
from kivy.app import App
from kivy.uix.popup import Popup
from kivy.uix.label import Label
from kivy.uix.gridlayout import GridLayout
from kivy.uix.textinput import TextInput
from kivy.uix.button import Button


class GridLayout(GridLayout):

    def __init__(self, **kwargs):
        super(GridLayout, self).__init__()

        # Number of columns
        self.cols = 1

        # Second grid Layout
        self.second_layout = GridLayout()
        self.second_layout.cols = 2

        # Creating a text field to show the result of entered query
        self.query_result = TextInput(text='', size_hint_y=0.8)
        self.second_layout.add_widget(self.query_result)  # Adding query result on the screen

        # Creating a text input field to get the query from user
        self.query = TextInput(text='', multiline=False, hint_text="Enter your Query", size_hint_y=0.1, font_size=20)
        self.second_layout.add_widget(self.query)

        # Adding Second layout on the screen
        self.add_widget(second_layout)

        # Creating a submit button
        self.submit_button = Button(text="Submit", size_hint_y=0.1, font_size=40, on_press=self.submit)
        self.add_widget(self.submit_button)

    def submit(self, instance):
        try:
            query_result_from_wikipedia = wikipedia.page(self.query.text).summary
            self.query_result.text = query_result_from_wikipedia
        except:
            popup = Popup(title='Query Not Found',
                          content=Label(text='Try to Search Anything else'),
                          size_hint=(None, None), size=(400, 400))
            popup.open()


class MyApp(App):
    def build(self):
        return GridLayout()


if __name__ == '__main__':
    MyApp().run()

But when I remove the second GridLayout, it runs without errors.
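The likely cause is the class name: `class GridLayout(GridLayout)` rebinds the name GridLayout to the subclass, so `self.second_layout = GridLayout()` constructs the subclass again inside its own `__init__`, forever. A Kivy-free sketch of the same shadowing pattern (hypothetical class names) that reproduces the error; renaming the subclass (e.g. `class MainLayout(GridLayout)`) avoids it:

```python
class Base:
    """Stand-in for Kivy's GridLayout."""

class Layout(Base):  # in the question this is `class GridLayout(GridLayout)`
    def __init__(self):
        super().__init__()
        # Like `self.second_layout = GridLayout()` in the question:
        # this re-enters Layout.__init__, not Base.__init__.
        self.child = Layout()

try:
    Layout()
    hit_recursion = False
except RecursionError:
    hit_recursion = True
```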

New column to calculate percentage

Posted: 29 Apr 2022 12:24 PM PDT

I need to add a column with the percentage for each "adh_Classi" by "Stop_code", e.g.:

Stop_code  Count  adh_Classi
10013      32     Early
10013      101    Late
10013      317    On-Time

Total for 10013 = 450

Early -> 7.11% (32/450)
Late  -> 22.44% (101/450)

I do not have much Access experience.
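In Access this is typically done with a DSum over Stop_code or a join onto a totals query; the arithmetic itself, shown here in pandas purely as an illustration (not Access syntax):

```python
import pandas as pd

df = pd.DataFrame({
    "Stop_code": [10013, 10013, 10013],
    "Count": [32, 101, 317],
    "adh_Classi": ["Early", "Late", "On-Time"],
})

# Each row's Count as a percentage of its Stop_code total
df["Pct"] = (df["Count"]
             / df.groupby("Stop_code")["Count"].transform("sum")
             * 100).round(2)
```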

Can two Pagerduty services use the same email address?

Posted: 29 Apr 2022 12:24 PM PDT

I am setting up technical services in Pagerduty for a team and want all of them to use the same email integration, specifically, the same email address.

Is this allowed? If so, will there be any drawbacks?

How to execute python pandas data frame through string?

Posted: 29 Apr 2022 12:24 PM PDT

I am getting one string from another function as below

I am referring to the string below as the "result" that I receive from another function. It is a custom string built in another environment and sent to my code, to be executed through pandas.

df.groupby(['DeviceName']).agg({'DaysRemaining': ['max']})  

Now, I want to process this string using "exec", but it returns nothing.

If I run the expression directly, it works completely fine; but when "result" is the string above and I execute it as in the code below, I get None.

test = exec(result)
print(test)

None
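When the string is a single pandas expression (as here) rather than a sequence of statements, `eval` is the more natural tool: `exec` compiles and runs statements and always returns None, while `eval` returns the expression's value. A sketch with a made-up frame:

```python
import pandas as pd

df = pd.DataFrame({"DeviceName": ["a", "a", "b"],
                   "DaysRemaining": [1, 5, 3]})

result = "df.groupby(['DeviceName']).agg({'DaysRemaining': ['max']})"

out = eval(result)  # eval returns the aggregated DataFrame; exec returns None
```

(The usual caveat applies: eval/exec run arbitrary code, so the incoming string must come from a trusted source.)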

How to batch update a column based on date condition from another column?

Posted: 29 Apr 2022 12:24 PM PDT

Here is my initial test table

IdRecord FechaRegistro IdDimFecCorte
1 2022-04-25 23:45:00.000 20220430
2 2022-04-24 18:07:00.000 20220430
3 2022-03-10 19:04:00.000 20220331
4 2022-03-22 16:55:00.000 20220331
5 2022-02-10 22:06:00.000 20220331
6 2022-02-14 02:06:00.000 20220331
7 2022-01-30 21:55:00.000 20220331

I need to run a batch update on that table so that column IdDimFecCorte shows the date (as an integer) of the last day of the month, based on the date in column FechaRegistro. As you can see, records 1, 2, 3, 4 already satisfy this requirement, but I need to run it retrospectively (for example, for records 5, 6, 7).

My desired output should be

IdRecord FechaRegistro IdDimFecCorte
1 2022-04-25 23:45:00.000 20220430
2 2022-04-24 18:07:00.000 20220430
3 2022-03-10 19:04:00.000 20220331
4 2022-03-22 16:55:00.000 20220331
5 2022-02-10 22:06:00.000 20220228
6 2022-02-14 02:06:00.000 20220228
7 2022-01-30 21:55:00.000 20220131

db<>fiddle
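On SQL Server this is usually `EOMONTH(FechaRegistro)` formatted as yyyyMMdd inside the UPDATE; the last-day-of-month arithmetic itself can be sketched in Python to sanity-check the expected values:

```python
import calendar
from datetime import date

def id_dim_fec_corte(d: date) -> int:
    """Integer yyyymmdd for the last day of d's month."""
    last_day = calendar.monthrange(d.year, d.month)[1]
    return d.year * 10000 + d.month * 100 + last_day
```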

C: How to properly reallocate an array of strings

Posted: 29 Apr 2022 12:26 PM PDT

This seems like a simple question with probably a simple answer, but I am trying to read words from a text file and store each word in a dynamically allocated array of strings:

char** words = calloc(8, sizeof(char*));  

(It must be allocated this way.)

And then I must resize the array as needed. My problem comes when I try to use realloc() for my array. I do it like so:

if(index == MAX-1){ // reallocate if needed
    words = (char**) realloc(words, sizeof(*words)*2);
    MAX *= 2;
    printf("Re-allocated %lu character pointers.\n", MAX);
}

Where MAX is the max number of elements that can be stored in the array.

My array is populated with correct values, but when realloc is called some strings appear to be missing! Several indexes are no longer populated, and I get a memory error when trying to print the array, as the strings are missing somehow.

Here is how I allocate the strings and store them at the index:

words[index] = malloc(strlen(temp)+1);
words[index] = strdup(temp); // copy the word over using strdup

What's going wrong?

When I try to install nodemon i get this error message

Posted: 29 Apr 2022 12:26 PM PDT

npm ERR! code EACCES
npm ERR! syscall mkdir
npm ERR! path /usr/local/lib/node_modules/nodemon
npm ERR! errno -13
npm ERR! Error: EACCES: permission denied, mkdir '/usr/local/lib/node_modules/nodemon'
npm ERR!  [Error: EACCES: permission denied, mkdir '/usr/local/lib/node_modules/nodemon'] {
npm ERR!   errno: -13,
npm ERR!   code: 'EACCES',
npm ERR!   syscall: 'mkdir',
npm ERR!   path: '/usr/local/lib/node_modules/nodemon'
npm ERR! }
npm ERR!
npm ERR! The operation was rejected by your operating system.
npm ERR! It is likely you do not have the permissions to access this file as the current user
npm ERR!
npm ERR! If you believe this might be a permissions issue, please double-check the
npm ERR! permissions of the file and its containing directories, or try running
npm ERR! the command again as root/Administrator.

npm ERR! A complete log of this run can be found in:

It looks like the problem is with the package generator-karma, but I am not sure.

Can anyone show me what I need to do to get this installing correctly?

Thanks Sanil

C#: getting a Vector3 from a list of objects doesn't work

Posted: 29 Apr 2022 12:25 PM PDT

Hello, I have a 2D game simulation with a ship and asteroids, and I would like to shoot the asteroids. If I take their Position, the ship doesn't shoot and the game runs very slowly, but if I use Vector3.Forward it runs well, though it only shoots upward. I have a list of objects, some of which are Asteroids; each has its own Position, LinearVelocity and radius. The problem isn't in the shooting script, only in taking the vectors from the asteroid, so if someone could help I would appreciate it.

Where I take the position:

private Vector3 asteroidPos;

public Vector3 AsteroidPosition
{
    get
    {
        return asteroidPos;
    }
    set
    {
        foreach (WorldObject Asteroid in m_objects.OfType<Asteroid>())
        {
            asteroidPos = Asteroid.Position;
        }
    }
}

Where I use the position:

public Ship Ship { get; set; }
public World World = new World();

/// <summary>
/// Called when ship is being updated, Ship property is never null when OnUpdate is called.
/// </summary>
public void OnUpdate()
{
    Vector3 asteroidPos = World.AsteroidPosition;
    Ship.Shoot(asteroidPos);
}

Import 10 csv files and export as 10 worksheets of 1 xlsx

Posted: 29 Apr 2022 12:24 PM PDT

I have 10 csv files and want to save all the files as 10 worksheets of 1 xlsx file.

data1.csv,data2.csv,.......,data10.csv.

Attempt

import glob
import numpy as np
import pandas as pd

all_datasets = pd.DataFrame()
for x in glob.glob("*.csv"):
    df = pd.read_csv(x)

# I want to export the corresponding csv files as 10 worksheets of 1 xlsx

# Initialize the excel writer
writer = pd.ExcelWriter('all_datasets_combinedworksheets.xlsx', engine='xlsxwriter')

frames = {'sheetName_1': df1, 'sheetName_2': df2,
          'sheetName_3': df3, 'sheetName_4': df4}

for sheet, frame in frames.iteritems():  # use .items for Python 3.x
    frame.to_excel(writer, sheet_name=sheet)

# Critical last step
writer.save()

I'm open to other approaches; please share your code. Thanks in advance!
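A possible restructuring (a sketch; the output file name is taken from the question, and the Excel engine is left to whatever pandas finds installed): build each sheet name from the file name instead of hard-coding df1…df4, and let the ExcelWriter context manager handle saving. The block creates two tiny sample CSVs so it can run standalone:

```python
import glob
import pandas as pd

# Two tiny sample files standing in for data1.csv ... data10.csv
pd.DataFrame({"a": [1, 2]}).to_csv("data1.csv", index=False)
pd.DataFrame({"a": [3, 4]}).to_csv("data2.csv", index=False)

# One workbook, one worksheet per CSV, each sheet named after its file
with pd.ExcelWriter("all_datasets_combinedworksheets.xlsx") as writer:
    for path in sorted(glob.glob("data*.csv")):
        sheet = path.rsplit(".", 1)[0]  # "data1.csv" -> "data1"
        pd.read_csv(path).to_excel(writer, sheet_name=sheet, index=False)
```

Exiting the `with` block saves the workbook, so no explicit `writer.save()` is needed.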

How to use Azure Key Vault in Apache Spark Connector for SQL Server

Posted: 29 Apr 2022 12:25 PM PDT

The example from the Azure team on using the Apache Spark connector for SQL Server uses a hard-coded user name and password, but I store the password in Azure Key Vault for security reasons.

Question: In the following example code, instead of using hard-coded password, how can we use a secret (password) stored in an Azure Key Vault?

server_name = "jdbc:sqlserver://{SERVER_ADDR}"
database_name = "database_name"
url = server_name + ";" + "databaseName=" + database_name + ";"

table_name = "table_name"
username = "username"
password = "password123!#" # Please specify password here

try:
    df.write \
        .format("com.microsoft.sqlserver.jdbc.spark") \
        .mode("overwrite") \
        .option("url", url) \
        .option("dbtable", table_name) \
        .option("user", username) \
        .option("password", password) \
        .save()
except ValueError as error:
    print("Connector write failed", error)
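On Azure Databricks the usual pattern is a Key Vault-backed secret scope read with `dbutils.secrets.get(scope=..., key=...)` (the scope and key names below are assumptions). Keeping the lookup behind a callable also keeps the connector options free of hard-coded credentials and easy to test:

```python
def jdbc_write_options(url, table_name, username, get_password):
    """Build the options dict for df.write; get_password is called lazily."""
    return {
        "url": url,
        "dbtable": table_name,
        "user": username,
        "password": get_password(),
    }

# On Databricks one would pass (hypothetical scope/key names):
#   lambda: dbutils.secrets.get(scope="kv-scope", key="sql-password")
```

The resulting dict can then be applied in one go with `df.write.format("com.microsoft.sqlserver.jdbc.spark").mode("overwrite").options(**opts).save()`.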

Creating new columns that contain the value of a specific index

Posted: 29 Apr 2022 12:25 PM PDT

I have tried multiple methods that get me close to, but not exactly at, the final output I want. I am trying to first create a few columns that contain a specific value from the raw dataframe based on its position; afterwards, I am trying to make a particular row the header row and skip all the rows above it.

Raw input:

   |           NA            | NA.1 | NA.2 | NA.3 |
0  | 12-Month Percent Change | NaN  | NaN  | NaN  |
1  | Series Id: CUUR0000SAF1 | NaN  | NaN  | NaN  |
2  | Item: Food              | NaN  | NaN  | NaN  |
3  | Year | Jan  Feb  Mar  Apr  May  Jun  Jul
4  | 2010 | -0.4 -0.2 0.2  0.5  0.7  0.7  0.9
5  | 2011 | 1.8  2.3  2.9  3.2  3.5  3.7  4.2

Code used:

df1['View Description'] = df1.iat[0, 0]
df1['Series ID'] = df1.iat[1, 1]
df1['Series Name'] = df1.iat[2, 1]
df1

Which resulted in:

    NA                       NA.1          NA.2  NA.3  NA.4  NA.5  NA.6  NA.7  View Description         Series ID     Series Name
0   12-Month Percent Change  NaN           NaN   NaN   NaN   NaN   NaN   NaN   12-Month Percent Change  CUUR0000SAF1  Food
1   Series Id:               CUUR0000SAF1  NaN   NaN   NaN   NaN   NaN   NaN   12-Month Percent Change  CUUR0000SAF1  Food
2   Item:                    Food          NaN   NaN   NaN   NaN   NaN   NaN   12-Month Percent Change  CUUR0000SAF1  Food
3   Year                     Jan           Feb   Mar   Apr   May   Jun   Jul   12-Month Percent Change  CUUR0000SAF1  Food
4   2010                     -0.4          -0.2  0.2   0.5   0.7   0.7   0.9   12-Month Percent Change  CUUR0000SAF1  Food
5   2011                     1.8           2.3   2.9   3.2   3.5   3.7   4.2   12-Month Percent Change  CUUR0000SAF1  Food
6   2012                     4.4           3.9   3.3   3.1   2.8   2.7   2.3   12-Month Percent Change  CUUR0000SAF1  Food
7   2013                     1.6           1.6   1.5   1.5   1.4   1.4   1.4   12-Month Percent Change  CUUR0000SAF1  Food

The last thing I want is to make row 3 the header and remove all the rows above it, BUT still keep the three columns added at the end: View Description, Series ID, Series Name.

Any suggestions for an efficient way to do this? Next I want to scale it up with a for loop or similar that would apply this process to ten files.

Thanks in advance!
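One way to finish the last step (a sketch over a hypothetical miniature of the raw input): slice off everything above the header row, then rebuild the column labels from that row, keeping the appended metadata column's own name:

```python
import pandas as pd

# Hypothetical miniature of the raw input (one metadata column for brevity)
df = pd.DataFrame([
    ["12-Month Percent Change", None, None],
    ["Series Id:", "CUUR0000SAF1", None],
    ["Item:", "Food", None],
    ["Year", "Jan", "Feb"],
    [2010, -0.4, -0.2],
    [2011, 1.8, 2.3],
])
df["Series ID"] = df.iat[1, 1]  # as in the question's code

header_row = 3
new_cols = df.iloc[header_row].tolist()
new_cols[-1] = "Series ID"      # appended column keeps its own name
out = df.iloc[header_row + 1:].reset_index(drop=True)
out.columns = new_cols
```

Wrapped in a function, the same steps apply per file when looping over the ten inputs.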

Python Boto3 - print ec2 instance with specific tag Key

Posted: 29 Apr 2022 12:24 PM PDT

I have 350 EC2 instances that I need to get some information from. I am using this code:

# Instance id, Instance type, Instance State, Instance Name
import boto3

client = boto3.client('ec2')
Myec2 = client.describe_instances()
for pythonins in Myec2['Reservations']:
    for printout in pythonins['Instances']:
        for printname in printout['Tags']:
            print(printout['InstanceId'], printout['InstanceType'], printname['Value'])

The problem is that the instances are tagged with 3 key/value pairs, and the inner for loop prints a line for each tag, so the output repeats three times per instance with the different tags.

I only want to print the result for the tag with Key = Name.

the output right now is

i-0e8d25ed03569252a t3a.medium DB002-old
i-0e8d25ed03569252a t3a.medium NW02
i-0e8d25ed03569252a t3a.medium daily

i-0738894210d94f6d0 t3a.2xlarge DB110-new
i-0738894210d94f6d0 t3a.2xlarge daily
i-0738894210d94f6d0 t3a.2xlarge NW02

desired output

i-0e8d25ed03569252a t3a.medium DB002-old
i-0738894210d94f6d0 t3a.2xlarge DB110-new
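A sketch of one fix: instead of looping over all tags, pull out only the tag whose Key is "Name". The tag-picking helper is pure and shown with sample data; the paginated describe_instances loop is left as an uncalled function since it needs AWS credentials:

```python
def name_tag(tags):
    """Value of the tag with Key == 'Name', or '' if absent."""
    return next((t["Value"] for t in (tags or []) if t["Key"] == "Name"), "")

def print_instance_names():
    # Untested sketch: assumes configured AWS credentials/region.
    import boto3
    client = boto3.client("ec2")
    for page in client.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                print(instance["InstanceId"],
                      instance["InstanceType"],
                      name_tag(instance.get("Tags")))

# Sample of the Tags structure describe_instances returns
sample_tags = [{"Key": "env", "Value": "daily"},
               {"Key": "Name", "Value": "DB002-old"}]
```

Using the paginator (rather than one describe_instances call) also keeps the loop correct if the fleet ever grows past one page of results.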

Error trying to set up db-less kong using docker-compose

Posted: 29 Apr 2022 12:26 PM PDT

I am trying to set up Kong DB-less. I have created a Dockerfile as below:

FROM kong
USER 0
RUN mkdir -p /kong/declarative/
COPY kong.yml /usr/local/etc/kong/kong.yml
USER kong

and a docker-compose file

version: "3.8"

networks:
  kong-net:

services:
  kong:
    container_name: kong-dbless
    build:
      context: .
      dockerfile: Dockerfile
    restart: unless-stopped
    networks:
      - kong-net
    environment:
      - KONG_DATABASE=off
      - KONG_ADMIN_LISTEN=0.0.0.0:8001, 0.0.0.0:8444 ssl
      - KONG_PROXY_ACCESS_LOG=/dev/stdout
      - KONG_ADMIN_ACCESS_LOG=/dev/stdout
      - KONG_PROXY_ERROR_LOG=/dev/stderr
      - KONG_ADMIN_ERROR_LOG=/dev/stderr
      - KONG_DECLARATIVE_CONFIG=/usr/local/etc/kong/kong.yml
    ports:
      - "8001:8001"
      - "8444:8444"
      - "80:8000"
      - "443:8443"

and kong.yml is as below:

_format_version: "1.1"
_transform: true

services:
- host: mockbin.org
  name: example_service
  port: 80
  protocol: http
  routes:
  - name: example_route
    paths:
    - /mock
    strip_path: true

I run docker-compose up, but I get errors in the log:

[+] Running 1/0
 - Container kong-dbless  Created  0.0s
Attaching to kong-dbless
kong-dbless | 2022/04/29 01:31:52 [warn] 1#0: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /usr/local/kong/nginx.conf:6
kong-dbless | nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /usr/local/kong/nginx.conf:6
kong-dbless | 2022/04/29 01:31:52 [error] 1#0: init_by_lua error: /usr/local/share/lua/5.1/kong/init.lua:553: error parsing declarative config file /kong/declarative/kong.yml:
kong-dbless | /kong/declarative/kong.yml: No such file or directory
kong-dbless | stack traceback:
kong-dbless | [C]: in function 'error'
kong-dbless | /usr/local/share/lua/5.1/kong/init.lua:553: in function 'init'
kong-dbless | nginx: [error] init_by_lua error: /usr/local/share/lua/5.1/kong/init.lua:553: error parsing declarative config file /kong/declarative/kong.yml:
kong-dbless | [C]: in function 'error'
kong-dbless | /usr/local/share/lua/5.1/kong/init.lua:553: in function 'init'
kong-dbless | init_by_lua:3: in main chunk

Does anybody know what the problem is and how I should fix it?


I also tried this, but it did not work:

Dockerfile

FROM kong
COPY kong.yml /
RUN cp /etc/kong/kong.conf.default /etc/kong/kong.conf

docker-compose

version: "3.8"

networks:
  kong-net:

services:
  kong:
    container_name: kong-dbless
    build:
      context: .
      dockerfile: Dockerfile
#    restart: unless-stopped
    networks:
      - kong-net
    healthcheck:
      test: [ "CMD", "curl", "-f", "http://kong:8000" ]
      interval: 5s
      timeout: 2s
      retries: 15
    environment:
      - KONG_DATABASE=off
      - KONG_ADMIN_LISTEN=0.0.0.0:8001, 0.0.0.0:8444 ssl
      - KONG_PROXY_ACCESS_LOG=/dev/stdout
      - KONG_ADMIN_ACCESS_LOG=/dev/stdout
      - KONG_PROXY_ERROR_LOG=/dev/stderr
      - KONG_ADMIN_ERROR_LOG=/dev/stderr
      - KONG_DECLARATIVE_CONFIG=kong.yml
    ports:
      - "8001:8001"
      - "8444:8444"
      - "80:8000"
      - "443:8443"
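For context: the error complains about /kong/declarative/kong.yml, while KONG_DECLARATIVE_CONFIG points somewhere else and the Dockerfile copies the file to /, so the container apparently never sees the file at the path Kong reads. One sketch of making the paths agree, using a bind mount instead of a custom image (this is an illustrative fragment, not a verified fix for this exact setup):

```yaml
# Sketch: mount kong.yml into the container and point
# KONG_DECLARATIVE_CONFIG at that exact in-container path.
services:
  kong:
    image: kong   # stock image assumed; a custom build works the same way
    volumes:
      - ./kong.yml:/usr/local/etc/kong/kong.yml:ro
    environment:
      - KONG_DATABASE=off
      - KONG_DECLARATIVE_CONFIG=/usr/local/etc/kong/kong.yml
```

The key point is only that the file's mounted (or copied) location and the value of KONG_DECLARATIVE_CONFIG must be the same absolute path inside the container.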

In which aspects is runBlocking worse than suspend?

Posted: 29 Apr 2022 12:24 PM PDT

It's quite clearly stated in the official documentation that runBlocking "should not be used from a coroutine". I roughly get the idea, but I'm trying to find an example where using runBlocking instead of suspend functions negatively impacts performance.

So I created an example like this:

import java.time.Instant
import java.time.format.DateTimeFormatter
import kotlinx.coroutines.async
import kotlinx.coroutines.awaitAll
import kotlinx.coroutines.delay
import kotlinx.coroutines.runBlocking
import kotlin.time.Duration.Companion.seconds

private val time = 1.seconds

private suspend fun getResource(name: String): String {
    log("Starting getting ${name} for ${time}...")
    delay(time)
    log("Finished getting ${name}!")
    return "Resource ${name}"
}

fun main(args: Array<String>) = runBlocking {
    val resources = listOf("A", "B")
        .map { async { getResource(it) } }
        .awaitAll()
    log(resources)
}

fun log(msg: Any) {
    val now = DateTimeFormatter.ISO_INSTANT.format(Instant.now())
    println("$now ${Thread.currentThread()}: $msg")
}

This gives the expected output of:

2022-04-29T15:52:35.943156Z Thread[main,5,main]: Starting getting A for 1s...
2022-04-29T15:52:35.945570Z Thread[main,5,main]: Starting getting B for 1s...
2022-04-29T15:52:36.947539Z Thread[main,5,main]: Finished getting A!
2022-04-29T15:52:36.948334Z Thread[main,5,main]: Finished getting B!
2022-04-29T15:52:36.949233Z Thread[main,5,main]: [Resource A, Resource B]

From my understanding, getResource(A) was started and, the moment it reached delay, it gave control back, so getResource(B) was started. Then they both waited on a single thread and, when the time had passed, both were resumed; everything took one second, as expected.

So now I wanted to "break" it a little and replaced getResource with:

private fun getResourceBlocking(name: String): String = runBlocking {
    log("Starting getting ${name} for ${time}...")
    delay(time)
    log("Finished getting ${name}!")
    "Resource ${name}"
}

and called it from the main method in place of getResource.

and then again I got:

2022-04-29T15:58:41.908015Z Thread[main,5,main]: Starting getting A for 1s...
2022-04-29T15:58:41.910532Z Thread[main,5,main]: Starting getting B for 1s...
2022-04-29T15:58:42.911661Z Thread[main,5,main]: Finished getting A!
2022-04-29T15:58:42.912126Z Thread[main,5,main]: Finished getting B!
2022-04-29T15:58:42.912876Z Thread[main,5,main]: [Resource A, Resource B]

So it still took only 1 second to run, and B was started before A finished. At the same time, there don't seem to be any additional threads spawned (everything is in Thread[main,5,main]). So how does this work? How does calling blocking functions in async make them execute "concurrently" on a single thread anyway?

Is there a way to add range slicer to Excel worksheet for data filtering?

Posted: 29 Apr 2022 12:26 PM PDT

I'd like to add range slicers to an Excel worksheet for quick data filtering. However, I could not find such a control in Excel, and based on some quick Googling, it seems to be included in Power BI. I assume you can either get it from some (free) add-in or create it yourself.

Many thanks for anyone who can help on this!

Emacs Org-mode: org-agenda-custom-commands and hiding future scheduled tasks

Posted: 29 Apr 2022 12:24 PM PDT

I have set my org-agenda-custom-commands to (among others) this:

(setq org-agenda-custom-commands
      `(
        ("x"
         "Scheduled tasks with Prio"
         ((tags-todo "+PRIORITY={A}"
                     ((org-agenda-overriding-header "Scheduled Prio-A TODOs")
                      (org-agenda-skip-function
                       '(org-agenda-skip-entry-if 'unscheduled))))
          (tags-todo "+PRIORITY={B}"
                     ((org-agenda-overriding-header "Scheduled Prio-B TODOs")
                      (org-agenda-skip-function
                       '(org-agenda-skip-entry-if 'unscheduled))))
          (tags-todo "+PRIORITY={C}"
                     ((org-agenda-overriding-header "Scheduled Prio-C TODOs")
                      (org-agenda-skip-function
                       '(org-agenda-skip-entry-if 'unscheduled))))
          (tags-todo "+PRIORITY={D}"
                     ((org-agenda-overriding-header "Scheduled Prio-D TODOs")
                      (org-agenda-skip-function
                       '(org-agenda-skip-entry-if 'unscheduled))))
          (agenda)))
        ;; snip

Now I would like to hide all tasks scheduled in the future. I can do this via

(progn
  (setq org-agenda-todo-ignore-scheduled 'future)
  (setq org-agenda-tags-todo-honor-ignore-options t))

But this affects all of my other org-agenda-custom-commands. I would like to limit it to just one custom command.

So how can I modify my custom command so that it hides the future tasks?
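For reference, org-agenda-custom-commands entries accept a per-command settings list after the block list, the same mechanism already used above for org-agenda-overriding-header, but applied to the whole command. A sketch of the "x" command with the two variables bound only for that command (B/C/D blocks elided here for brevity):

```elisp
(setq org-agenda-custom-commands
      `(("x"
         "Scheduled tasks with Prio"
         ((tags-todo "+PRIORITY={A}"
                     ((org-agenda-overriding-header "Scheduled Prio-A TODOs")
                      (org-agenda-skip-function
                       '(org-agenda-skip-entry-if 'unscheduled))))
          ;; ... Prio-B/C/D blocks as before ...
          (agenda))
         ;; settings local to this custom command only:
         ((org-agenda-todo-ignore-scheduled 'future)
          (org-agenda-tags-todo-honor-ignore-options t)))))
```

With this shape the two options are in effect only while the "x" agenda is built, leaving the other custom commands untouched.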

Flutter formkey exception displayed _CastError (Null check operator used on a null value)

Posted: 29 Apr 2022 12:24 PM PDT

I have created a form by wrapping a column widget in Form() as so

Form(
  key: formKey,
  child: Column(
    children: [

But when I call final isValid = formKey.currentState!.validate(); to check it, an exception occurs: _CastError (Null check operator used on a null value). Any advice? Example Pic

Accessing Walmart API and setting up product mapping

Posted: 29 Apr 2022 12:26 PM PDT

To start: I'm completely new to working with APIs, so please bear with me.

My first question is related to getting access to the Walmart API. I see the example code to generate time stamp and signature. How do I run this file? I've looked at YouTube videos, the Walmart tutorial, and other posts in this forum and am still a little stuck.

Second, I'm guessing this file needs to be included in the actual application to continue to be able to access the products?

Third, my goal is to map only a subset of the product catalog for users of the app to view. Let's use 'soda' as an example. Is it the Taxonomy API I need to use? And how do I limit the available products a user can search?

Note: This will be implemented in a Flutter application, if it makes any difference.

set.seed in for loop

Posted: 29 Apr 2022 12:26 PM PDT

I'm doing some analysis and I had to impute some values. To do so, I wrote this chunk of code:

A)

set.seed(1)
for (i in 2:length(Dataset[-c(8,11)])) {
      Dataset[,i] <- impute(Dataset[,i], "random")
}

[[The -c(8,11) is there to exclude two character columns]]

This does not give me any error, so that's not what I'm asking about, but: is it correct to put set.seed(1) outside the for loop? Because the second time I ran this code, the results (at the end of the analysis) were different. So I put set.seed(1) inside the for loop, like this:

B)

for (i in 2:length(Dataset[-c(8,11)])) {
      set.seed(1)
      Dataset[,i] <- impute(Dataset[,i], "random")
}

This gave me a reproducible result, but if I move set.seed back outside the for loop, the result now stays the same as in B (when it was inside the for loop).

So I'm quite confused: why does this happen? What is wrong with the syntax? How can I effectively write a for loop with a set.seed to impute some values in the data set?
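To illustrate how seed placement affects the draws, here is a sketch using Python's random module purely as a stand-in for R's RNG (set.seed behaves analogously): seeding once before the loop fixes the whole sequence of draws, while re-seeding inside the loop restarts the generator on every iteration.

```python
import random

# Seeding once BEFORE the loop fixes the whole sequence of draws:
# each iteration consumes the next values of one reproducible stream.
random.seed(1)
run1 = [random.random() for _ in range(3)]
random.seed(1)
run2 = [random.random() for _ in range(3)]
assert run1 == run2           # same seed, same order => same draws

# Seeding INSIDE the loop restarts the generator every iteration,
# so every iteration draws from the very beginning of the stream.
inside = []
for _ in range(3):
    random.seed(1)
    inside.append(random.random())
assert len(set(inside)) == 1  # all iterations got the identical value
```

So variant A is the conventional one: a single seed before the loop makes the entire run reproducible, provided the whole chunk (seed plus loop) is re-run from the top each time.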

How to insert git info in environment variables using Vite?

Posted: 29 Apr 2022 12:26 PM PDT

How to get information from git about current branch, commit date and other when using Vite bundler?

PyCharm doesn't recognize Python 3.10, how do I configure it?

Posted: 29 Apr 2022 12:24 PM PDT

When I use Python version 3.10, it is recognized as Python 3.1 in PyCharm, and that is a deprecated version. My OS is Windows 10.

I'd like to know how to fix it; I got no answer on the PyCharm issue tracker.

How does NaiveBayes in Weka perform smoothing? [closed]

Posted: 29 Apr 2022 12:25 PM PDT

I would like to get some help regarding the Naive Bayes implementation in Weka. Firstly, I am interested in why a Numeric to Nominal filter is required to run the classifier, and how it works. I am also interested in how Naive Bayes in Weka deals with missing values (the smoothing procedure).

AWS Error Message: A conflicting conditional operation is currently in progress against this resource

Posted: 29 Apr 2022 12:25 PM PDT

I'm getting this error intermittently.

I have a program that uses the Java AWS SDK and uploads tens of thousands of small files to S3; the error appears intermittently.

I could not find any helpful answers after a quick search on the internet.

Note that the calling program is single-threaded, though the underlying AWS Java SDK does seem to use worker threads.

Status Code: 409, AWS Service: Amazon S3, AWS Request ID: 75E16E8DE2193CA6, AWS Error Code: OperationAborted, AWS Error Message: A conflicting conditional operation is currently in progress against this resource. Please try again., S3 Extended Request ID: 0uquw2YEoFamLldm+c/p412Lzd8jHJGFBDz3h7wN+/4I0f6hnGLkPMe+5LZazKnZ
    at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:552)
    at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:289)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:170)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:2648)
    at com.amazonaws.services.s3.AmazonS3Client.createBucket(AmazonS3Client.java:578)
    at com.amazonaws.services.s3.AmazonS3Client.createBucket(AmazonS3Client.java:503)
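Since the error message itself says "Please try again", one generic mitigation is to wrap the failing call in a retry with jittered exponential backoff. The sketch below is deliberately not tied to the AWS SDK; the function name and the choice of which exception types count as retryable are assumptions to be adapted to the real client:

```python
import random
import time

def retry_with_backoff(fn, retries=5, base_delay=0.5, retryable=(RuntimeError,)):
    """Call fn(); on a retryable exception, sleep with jittered exponential
    backoff and try again, re-raising once the attempts are exhausted."""
    for attempt in range(retries):
        try:
            return fn()
        except retryable:
            if attempt == retries - 1:
                raise
            # Full jitter: sleep between 0 and base_delay * 2^attempt seconds.
            time.sleep(random.uniform(0, base_delay * 2 ** attempt))
```

The stack trace also shows the error coming from createBucket, so another angle worth checking is whether the bucket is being (re-)created repeatedly rather than created once up front.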
