Saturday, September 18, 2021

Recent Questions - Stack Overflow



Filter out a nested key value

Posted: 18 Sep 2021 08:39 AM PDT

I have an object that looks like this:

const data = {
  "supportPaneltext": [
    {
      "brand": "BMW",
      "paragraphList": [{
        "rowList": ["Europe, Middle East, Africa, Asia, Pacific"]
      }]
    },
    {
      "brand": "Mercedez",
      "paragraphList": [{
        "rowList": ["Europe, Middle East, Africa, Asia, Pacific"]
      }]
    }
  ]
}

I want to extract the "brand" values into an array. The result would look like: ["BMW", "Mercedez"]

I have tried to filter out by:

Object.values(data).filter((item) => item.brand)   

Without luck. What am I missing?

Can somebody help me create a REST service with the RESTEasy framework

Posted: 18 Sep 2021 08:39 AM PDT

Task:

Create a Java web application that contains a REST resource for adding numbers. We can send any number of numbers to the application. The numbers are summed and returned.

REST should be accessible as: /api/addUpNumbers

Numbers are sent to REST resource in JSON format: [1,2,3,4,5,6,7,8]

The result is also returned in JSON format:

{ result: 36 }

Execution:

The end result should be a small Maven web project using the embedded version of Jetty. The RESTEasy framework should be used to implement the REST resource.

To implement the summation, a SummaryService class should be created. This class is injected into the REST resource using Java CDI (SE) for the summary calculation. So, the result of the task should demonstrate the use of CDI in the Java SE environment. Use OpenJDK 14.

The final testing of the task should be done e.g. with curl, for example: curl -X POST http://localhost:8080/api/addUpNumbers ...

Client-Server - How do I filter only dedicated software sending data?

Posted: 18 Sep 2021 08:39 AM PDT

I am writing server and client code where I want to identify, on the server side, official client software sending data.

The protocol is text based, so anyone can write their own client - and I want to keep it that way, but I want to group the devices as "Official" / "Non-Official" based on the 'fingerprint' of my official client application.

Question:

How can I distinguish my clients from other clones sending data?

I am thinking of sending an encrypted message from my client to the server using a shared key, so once it is decoded, if it's valid, then I know it's my app code sending - but before I go this way, are there any better ideas?

Reference variable aliases in C++

Posted: 18 Sep 2021 08:38 AM PDT

I am new to coding. Please explain this code; I don't get the logic.

#include <iostream>
using namespace std;

int main() {
    int a = 32, *p = &a;
    char c = 'A', &ch = c;   // ch is a reference (alias) to c
    ch += a;                 // c becomes 'A' + 32, i.e. 'a'
    *p += c;                 // a becomes 32 + 'a' (97) = 129
    cout << "\n" << a << " " << c << endl;
    return 0;
}

Is there a formula for the sum?

Posted: 18 Sep 2021 08:38 AM PDT

Is there a formula for the sum from n=1 to x of cos(pi/n) that removes the sigma? I searched on Wolfram Alpha and could not find a simplification. I know a little calculus, but it would help if someone could explain how they got their final answer.
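Not part of the original question, but a quick numerical sketch (Python, used here only for illustration): there is no elementary closed form, yet since cos(pi/n) approaches 1, the partial sum behaves like x minus a constant, which can be checked numerically:

```python
import math

def partial_sum(x):
    # sum of cos(pi/n) for n = 1..x
    return sum(math.cos(math.pi / n) for n in range(1, x + 1))

# cos(pi/n) = 1 - (1 - cos(pi/n)), and the correction terms 1 - cos(pi/n)
# shrink like pi^2 / (2 n^2), a convergent series, so partial_sum(x) - x
# settles toward a constant as x grows.
for x in (1000, 5000, 10000):
    print(x, partial_sum(x) - x)
```

So the sum grows linearly in x, offset by a constant that (as far as I know) has no simple closed form.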

Getting two different results by one SQL query

Posted: 18 Sep 2021 08:38 AM PDT

I have created a simple social networking web app so users can post and follow others to see new posts from them.

At the home page a user first sees posts from all users he is following.

But what I want is to make the user see some other random popular posts, ordered by likes.

Here is what I have done to get posts from users I follow:

SELECT * FROM posts
WHERE author_id IN (SELECT followedID FROM follows WHERE followerID = :myID)
ORDER BY id DESC LIMIT 10

Now let's say you are following only one person, and that person has only one post. Then you will see no more than one post!

That's why I want to show more posts when a user has already seen all of them.

I want some easy way to get other posts after the above query has returned its specific posts.

This is the next query i'd like to execute.

SELECT * FROM posts ORDER BY post_likes DESC, id DESC LIMIT 10
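Not part of the question: one hedged way to combine the two result sets in a single query is a UNION that falls back to popular posts. Sketched below against a minimal SQLite schema run from Python; table and column names follow the question, and the sample rows and the single-query shape are my assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, post_likes INTEGER);
CREATE TABLE follows (followerID INTEGER, followedID INTEGER);
INSERT INTO posts VALUES (1, 10, 5), (2, 20, 99), (3, 30, 50);
INSERT INTO follows VALUES (1, 10);   -- user 1 follows only author 10
""")

# Followed authors' posts first, then popular posts from everyone else.
query = """
SELECT * FROM (
    SELECT id, author_id, post_likes, 0 AS fallback FROM posts
    WHERE author_id IN (SELECT followedID FROM follows WHERE followerID = :myID)
    UNION
    SELECT id, author_id, post_likes, 1 AS fallback FROM posts
    WHERE author_id NOT IN (SELECT followedID FROM follows WHERE followerID = :myID)
)
ORDER BY fallback, post_likes DESC, id DESC
LIMIT 10
"""
rows = cur.execute(query, {"myID": 1}).fetchall()
print(rows)
```

The `fallback` flag keeps the followed-authors posts ahead of the like-ordered extras without running two separate queries.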

Why is the target destination of this je instruction as such?

Posted: 18 Sep 2021 08:38 AM PDT

I'm reading the textbook Computer Systems: A Programmer's Perspective, 3rd ed., by Randal E. Bryant and David R. O'Hallaron (Pearson, 2016).

I came across this question and I am not sure how the authors obtained the answer.

In the following excerpts from a disassembled binary, some of the information has been replaced by Xs. Answer the following questions about these instructions. (You do not need to know anything about the callq instruction here.)

What is the target of the je instruction below?

  40042f: 74 F4    je  XXXXXX
  400431: 5D       pop %rbp

The answer given is as follows (the textbook's answer is shown as an image).

Could someone explain why the answer is as such? I am unsure how they obtained the -12 and 0xf4 values, and why they are needed to calculate the target of the je instruction here.

All help is appreciated, thank you!!
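For reference (not part of the question), the arithmetic behind those two values can be reproduced directly: 0xf4 is the one-byte displacement of the `je` encoding `74 f4`, interpreted as a signed (two's-complement) byte, and it is added to the address of the *next* instruction:

```python
# 0xf4 is the rel8 displacement byte of "74 f4" (je rel8)
disp = 0xf4
if disp >= 0x80:          # interpret as a signed 8-bit value (two's complement)
    disp -= 0x100
print(disp)               # -12

next_insn = 0x400431      # address of the instruction following the je
target = next_insn + disp
print(hex(target))        # 0x400425
```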

Some clarification about this string conversion (double hexadecimal to string conversion)

Posted: 18 Sep 2021 08:38 AM PDT

I have the following doubt about the meaning of this URL decode: https://www.convertstring.com/EncodeDecode/UrlDecode

Basically considered that I have this string CG7drXn9fvA%253D (this string is the ID of an object received calling an API).

I first decode CG7drXn9fvA%253D, which is transformed into CG7drXn9fvA%3D (basically it seems to me that the %25 is translated into %).

Then I also decode CG7drXn9fvA%3D, obtaining CG7drXn9fvA= (it seems to me that now the %3D was translated into the = character).

Following my doubts:

  1. Taking this table as reference: https://ascii.cl/ it seems to me that it is a conversion from hexadecimal format to the related symbol. Is that correct?

  2. Can I say that the original string CG7drXn9fvA%253D and the final string CG7drXn9fvA= have the same "meaning"? (Does a one-to-one conversion exist between the original format and the final format?)
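A small check of the two-step decode described above (not part of the question; `urllib.parse.unquote` is Python's standard-library URL decoder):

```python
from urllib.parse import unquote

s = "CG7drXn9fvA%253D"
once = unquote(s)       # %25 is the hex ASCII code of '%'
twice = unquote(once)   # %3D is the hex ASCII code of '='
print(once)             # CG7drXn9fvA%3D
print(twice)            # CG7drXn9fvA=
```

So yes: each %XX escape is the hexadecimal ASCII code of one character, and because the string was percent-encoded twice, decoding twice reverses it deterministically.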

Many to Many with Variable Foreign Key

Posted: 18 Sep 2021 08:37 AM PDT

I am drawing an ERD diagram for the following scenario:

A user can have one or many subscriptions. A TV show can be on one or many subscriptions. A movie can also be on one or many subscriptions. The confusion comes from the foreign keys. How would I map it out? What would the foreign key name be, etc.? A subscription can be linked to a movie OR a TV show, but cannot have both. Is it better to just create a bridge table for each entity type, e.g. a TVSubscription table for TV shows and a MovieSubscription table for movies?

I'm confused about how a subscription can be linked to multiple entities.

Is there a way for google colab to take in a .vimrc file?

Posted: 18 Sep 2021 08:37 AM PDT

Under the Vim editing mode of Colab,
I am too used to typing j twice to exit insert mode in regular Vim. All I need is that one simple binding. It would be nice if Colab could "take in" a one-liner .vimrc file containing:

imap jj <Esc>  

One way or another, is a jj to <Esc> binding possible in Colab?

Function masks itself causing a conflict in my shiny app

Posted: 18 Sep 2021 08:37 AM PDT

I have created a function like:

# Wed Sep  1 07:27:54 2021 ------------------------------
# function designed to read data from a specific table from the aws database
#' Read GA Query 2 data from aws
#' @description Read GA Query data from aws. Simple version, without any data reshaping, nor columns subsetting
#' @export
read_aws_step2_sden1428_ga_q2 <- function() {
  x_dt <- sw.uxdashboard::aws_read_table(table_name = sw.config$step2_sden1428_ga_q2)
  # x_dt is a local variable within this function, so I use a name as simple as possible
}

and when I'm trying to add it to the NAMESPACE file with devtools::document()

I get this conflict.

Then I try to run my shiny app and get:

Warning: Error in : 'read_aws_ux_metrics_step2_sden1428_ga_q2' is not an exported object from 'namespace:sw.uxdashboard'
  47: server [C:/Users/User/Documents/SDEN1428/R/app_server.R#30]
Error : 'read_aws_ux_metrics_step2_sden1428_ga_q2' is not an exported object from 'namespace:sw.uxdashboard'

But the function is exported in NAMESPACE; I can see it there. Any ideas about this masking issue?


Generic methods tutorial

Posted: 18 Sep 2021 08:37 AM PDT

In https://docs.oracle.com/javase/tutorial/java/generics/upperBounded.html, it is suggested to implement a method

public static double sumOfList(List<? extends Number> list) {
    double s = 0.0;
    for (Number n : list)
        s += n.doubleValue();
    return s;
}

and then to invoke it with

List<Integer> li = Arrays.asList(1, 2, 3);
System.out.println("sum = " + sumOfList(li));
  1. Why is it illegal to put public static <T> double sumOfList(List<T extends Number> list) ?
  2. Is public static <T extends Number> double sumOfList(List<T> list) equivalent? If so, is it of less good style?
  3. When using the aforementioned code, why is it illegal to write System.out.println("sum = " + sumOfList<Integer>(li));?

How to select two parents and one child from 2 different tables

Posted: 18 Sep 2021 08:38 AM PDT

CREATE TABLE PEOPLE (
  "ID" NUMBER(*,0) NOT NULL ENABLE,
  "NAME" VARCHAR2(30 BYTE),
  "GENDER" VARCHAR2(3 BYTE),
  PRIMARY KEY ("ID")
)

CREATE TABLE RELATIONS (
  "C_ID" NUMBER(*,0) NOT NULL ENABLE,
  "P_ID" NUMBER(*,0) NOT NULL ENABLE
)

insert into PEOPLE (ID, NAME, GENDER) values (107, 'DAYS', 'F');
insert into PEOPLE (ID, NAME, GENDER) values (145, 'HB', 'M');
insert into PEOPLE (ID, NAME, GENDER) values (155, 'HANSEL', 'F');
insert into PEOPLE (ID, NAME, GENDER) values (202, 'BLACKSTON', 'M');
insert into PEOPLE (ID, NAME, GENDER) values (227, 'CRISS', 'F');
insert into PEOPLE (ID, NAME, GENDER) values (278, 'KEFFER', 'M');
insert into PEOPLE (ID, NAME, GENDER) values (305, 'CANTY', 'F');
insert into PEOPLE (ID, NAME, GENDER) values (329, 'MOZINGO', 'F');
insert into PEOPLE (ID, NAME, GENDER) values (425, 'NOLF', 'F');
insert into PEOPLE (ID, NAME, GENDER) values (534, 'WAUGH', 'F');
insert into PEOPLE (ID, NAME, GENDER) values (586, 'TONG', 'F');
insert into PEOPLE (ID, NAME, GENDER) values (618, 'dimmi', 'F');
insert into PEOPLE (ID, NAME, GENDER) values (747, 'BEANE', 'F');
insert into PEOPLE (ID, NAME, GENDER) values (878, 'CHATMON', 'F');
insert into PEOPLE (ID, NAME, GENDER) values (904, 'HANSARD', 'F');

insert into RELATIONS (C_ID, P_ID) values (145, 202);
insert into RELATIONS (C_ID, P_ID) values (145, 107);
insert into RELATIONS (C_ID, P_ID) values (278, 305);
insert into RELATIONS (C_ID, P_ID) values (278, 155);
insert into RELATIONS (C_ID, P_ID) values (329, 227);
insert into RELATIONS (C_ID, P_ID) values (534, 586);
insert into RELATIONS (C_ID, P_ID) values (534, 878);
insert into RELATIONS (C_ID, P_ID) values (618, 747);
insert into RELATIONS (C_ID, P_ID) values (618, 904);
insert into RELATIONS (C_ID, P_ID) values (329, 425);

Expected output (NOTE: child, father and mother are aliases)

child    father     mother
--------------------------
dimmi    beane      hansard
HB       blackston  days
keffer   canty      hansel
mozingo  nolf       criss
waugh    tong       chatmon
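Not part of the question: one hedged sketch (Python + SQLite, simplified column types) of how two parents can be pivoted onto one row per child with conditional aggregation. It assumes GENDER distinguishes father ('M') from mother ('F'), which does not hold for every sample row above (e.g. both of KEFFER's listed parents are 'F'), so the expected output would need an extra tie-break rule; only a gender-consistent subset is loaded here:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE PEOPLE (ID INTEGER PRIMARY KEY, NAME TEXT, GENDER TEXT);
CREATE TABLE RELATIONS (C_ID INTEGER, P_ID INTEGER);
INSERT INTO PEOPLE VALUES (107,'DAYS','F'), (145,'HB','M'), (202,'BLACKSTON','M');
INSERT INTO RELATIONS VALUES (145, 202), (145, 107);
""")

# Join PEOPLE twice: once for the child, once for each parent row,
# then pivot the two parent rows into father/mother columns.
rows = cur.execute("""
SELECT c.NAME AS child,
       MAX(CASE WHEN p.GENDER = 'M' THEN p.NAME END) AS father,
       MAX(CASE WHEN p.GENDER = 'F' THEN p.NAME END) AS mother
FROM RELATIONS r
JOIN PEOPLE c ON c.ID = r.C_ID
JOIN PEOPLE p ON p.ID = r.P_ID
GROUP BY c.ID, c.NAME
""").fetchall()
print(rows)   # [('HB', 'BLACKSTON', 'DAYS')]
```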

Cross-entropy vs Kullback-Leibler divergence: the theory behind them

Posted: 18 Sep 2021 08:37 AM PDT

This is not a code question but rather a theoretical one regarding the application of two different loss functions: cross-entropy and Kullback-Leibler divergence.

I've read this article, which focuses on a form of label relaxation: in classification problems where the outcome is a probability distribution, instead of having only one right label (probability 1, and 0 for the remaining ones), the right label gets most of the probability mass and the rest share some part of it. With this, the "not so wrong labels" remove a little of the model's confidence and reduce bias.

The thing is: initially, for the first approach, they use cross-entropy. Then, in the second approach, they start using Kullback-Leibler divergence.

I want to use this approach so that items that aren't completely wrong are closer in space to the right article.

Example of real labels before: [1 0 0 0 0]
Example of real labels now (for new loss): [0.8 0.1 0.1 0 0]

My questions (as a computer scientist, not an electrical engineer, so concepts like entropy are unfamiliar to me) are:

  • What is the cross-entropy loss?
  • What is the Kullback-Leibler divergence loss?
  • What is the difference between cross-entropy and Kullback-Leibler divergence?
  • What is the difference between using a one-hot encoding (the first approach) and using a vector with the probability mass spread across labels?

Thanks!
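For reference (not part of the question), the relationship between the two losses can be checked numerically: cross-entropy satisfies H(p, q) = H(p) + KL(p || q), so with one-hot targets (where H(p) = 0) the two losses coincide, and with soft labels they differ only by the constant entropy of the labels. A NumPy sketch, where the model output `q` is an assumed example:

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    nz = p > 0                      # 0 * log 0 is taken as 0
    return -np.sum(p[nz] * np.log(p[nz]))

def cross_entropy(p, q):
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log(q[nz]))

def kl_divergence(p, q):
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    nz = p > 0
    return np.sum(p[nz] * np.log(p[nz] / q[nz]))

q = np.array([0.6, 0.2, 0.1, 0.05, 0.05])     # hypothetical model output
p_hot = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # real labels before
p_soft = np.array([0.8, 0.1, 0.1, 0.0, 0.0])  # relaxed labels now

# One-hot labels: H(p) = 0, so cross-entropy and KL coincide.
print(cross_entropy(p_hot, q), kl_divergence(p_hot, q))

# Soft labels: H(p, q) = H(p) + KL(p || q).
print(cross_entropy(p_soft, q))
print(entropy(p_soft) + kl_divergence(p_soft, q))
```

This is why the switch between the two losses only matters for optimization when the targets are soft: the gradients with respect to q are the same, since H(p) does not depend on the model.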

How to use "webrtc.lib" static library in VS 2019 or CLion Project?

Posted: 18 Sep 2021 08:38 AM PDT

I have been working with WebRTC development for the Windows platform. I want to develop a WebRTC-based desktop application. I am doing it from scratch for learning and better understanding.

The normal process of WebRTC library compilation:

I initially started with (Getting Started with WinRTC). I followed the normal compilation process. After that, I tried multiple ways to generate project files for webrtc, such as:

1. Default

gn gen --ide=vs2019 out/Default  

2. Custom flags

gn gen --ide=vs2019 out/Default --args="use_rtti=true is_clang=false rtc_build_tools=false rtc_include_tests=false rtc_build_examples=false"  

3. Custom flags

gn gen --ide=vs2019 out\Default --filters=//:webrtc "--args=is_debug=true use_lld=false is_clang=false rtc_include_tests=true rtc_build_tools=true rtc_win_video_capture_winrt=true target_cpu=\"x86\" target_os=\"win\" rtc_build_examples=true rtc_win_use_mf_h264=true enable_libaom=true rtc_enable_protobuf=true"  

For the building process, I followed these methods:

With command line:

Run the following command to build the patched WebRTC from the command line.

ninja -C out\Default\x64  

With Visual Studio 2019:

Open the generated Visual Studio solution with the following command:

devenv out\Default\x64\all.sln  

I have tried almost all available combinations to generate build files and to build the webrtc.lib static library. I have successfully managed to compile the static webrtc library webrtc.lib for both architectures:

  1. x64 (Default Arch) (For Debug as well as release)
  2. x86 (Custom Arch) (For Debug as well as release)

WebRtc Static Library Description

IMPORTANT:

I have successfully managed to run peerconnection_server.exe and peerconnection_client.exe on Windows. These examples run successfully on localhost.


Using VS2019:

After that, I created a new console-based project using VS2019 to consume the generated binaries, and followed these steps:

  1. Add include folders

Configuration Properties → C/C++ → General → Additional Include Directories and add the following paths:

c:\webrtc\src
c:\webrtc\src\out\Default\$(Configuration)\$(PlatformTarget)\gen
c:\webrtc\src\third_party\abseil-cpp
c:\webrtc\src\third_party\libyuv\include
  2. Preprocessor macros:

Click on Preprocessor → Preprocessor Definitions and add the following definitions:

USE_AURA=1;_HAS_EXCEPTIONS=0;__STD_C;_CRT_RAND_S;_CRT_SECURE_NO_DEPRECATE;_SCL_SECURE_NO_DEPRECATE;_ATL_NO_OPENGL;_WINDOWS;CERT_CHAIN_PARA_HAS_EXTRA_FIELDS;PSAPI_VERSION=2;WIN32;_SECURE_ATL;WINUWP;__WRL_NO_DEFAULT_LIB__;WINAPI_FAMILY=WINAPI_FAMILY_PC_APP;WIN10=_WIN32_WINNT_WIN10;WIN32_LEAN_AND_MEAN;NOMINMAX;_UNICODE;UNICODE;NTDDI_VERSION=NTDDI_WIN10_RS2;_WIN32_WINNT=0x0A00;WINVER=0x0A00;NDEBUG;NVALGRIND;DYNAMIC_ANNOTATIONS_ENABLED=0;WEBRTC_ENABLE_PROTOBUF=0;WEBRTC_INCLUDE_INTERNAL_AUDIO_DEVICE;RTC_ENABLE_VP9;HAVE_SCTP;WEBRTC_LIBRARY_IMPL;WEBRTC_NON_STATIC_TRACE_EVENT_HANDLERS=0;WEBRTC_WIN;ABSL_ALLOCATOR_NOTHROW=1;HAVE_SCTP;WEBRTC_VIDEO_CAPTURE_WINRT  
  3. Linker additional library path:

Click on Linker → General → Additional Library Directories and add the following path:

c:\webrtc\src\out\Default\$(Configuration)\$(PlatformTarget)\obj  
  4. WebRTC library name:

Click on Input → Additional Dependencies and add the following filename:

webrtc.lib  

Now, when I simply use this basic implementation, such as:

#include <iostream>

#include "rtc_base/thread.h"
#include "rtc_base/logging.h"
#include "rtc_base/ssl_adapter.h"
#include "rtc_base/arraysize.h"
#include "rtc_base/net_helpers.h"
#include "rtc_base/string_utils.h"
#include "rtc_base/signal_thread.h"

int main(int argc, char** argv) {
    rtc::InitializeSSL();
    return 0;
}

The program is flooded with two types of errors:

1. LNK2038  mismatch detected for 'RuntimeLibrary': value 'MTd_StaticDebug' doesn't match value 'MDd_DynamicDebug'  

and another one is

2. LNK2038  mismatch detected for '_ITERATOR_DEBUG_LEVEL': value '0' doesn't match value '2'  

You can also see as given; Here I have used webrtc.lib with Configuration (Release) & Platform (x64).

Release Configuration


Using Clion-2021.2.1 and CMAKE:

Here I have used webrtc.lib with Configuration (Release) & Platform (x86).

CMakeLists.txt is given as;

cmake_minimum_required(VERSION 3.20)
project(NewRtc)

set(CMAKE_CXX_STANDARD 14)

#set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} /MT")
#set(CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} /MTd")

include_directories(
        "c:/webrtc/src"
        "C:/webrtc/src/out/Default/x86/obj"
        "c:/webrtc/src/third_party/abseil-cpp"
        "c:/webrtc/src/third_party/libyuv/include"
)

# error LNK2038: mismatch detected for '_ITERATOR_DEBUG_LEVEL': value '0' doesn't match value '2' in main.obj
# Solution:
# 1. _ITERATOR_DEBUG_LEVEL = 0 // disabled (for release builds)
# 2. _ITERATOR_DEBUG_LEVEL = 1 // enabled (if _SECURE_SCL is defined)
# 3. _ITERATOR_DEBUG_LEVEL = 2 // enabled (for debug builds)

add_definitions(
        -D_ITERATOR_DEBUG_LEVEL=0
        -DUSE_AURA=1
        -D_HAS_EXCEPTIONS=0
        -D__STD_C
        -D_CRT_RAND_S
        -D_CRT_SECURE_NO_DEPRECATE
        -D_SCL_SECURE_NO_DEPRECATE
        -D_ATL_NO_OPENGL
        -D_WINDOWS
        -DCERT_CHAIN_PARA_HAS_EXTRA_FIELDS
        -DPSAPI_VERSION=2
        -DWIN32
        -D_SECURE_ATL
        -DWINUWP
        -D__WRL_NO_DEFAULT_LIB__
        -DWINAPI_FAMILY=WINAPI_FAMILY_PC_APP
        -DWIN10=_WIN32_WINNT_WIN10
        -DWIN32_LEAN_AND_MEAN
        -DNOMINMAX
        -D_UNICODE
        -DUNICODE
        -DNTDDI_VERSION=NTDDI_WIN10_RS2
        -D_WIN32_WINNT=0x0A00
        -DWINVER=0x0A00
        -DNDEBUG
        -DNVALGRIND
        -DDYNAMIC_ANNOTATIONS_ENABLED=0
        -DWEBRTC_ENABLE_PROTOBUF=0
        -DWEBRTC_INCLUDE_INTERNAL_AUDIO_DEVICE
        -DRTC_ENABLE_VP9
        -DHAVE_SCTP
        -DWEBRTC_LIBRARY_IMPL
        -DWEBRTC_NON_STATIC_TRACE_EVENT_HANDLERS=0
        -DWEBRTC_WIN
        -DABSL_ALLOCATOR_NOTHROW=1
        -DHAVE_SCTP
        -DWEBRTC_VIDEO_CAPTURE_WINRT)

#set(CMAKE_CXX_FLAGS_RELEASE "/MT")
#set(CMAKE_CXX_FLAGS_DEBUG "/MTd")

set(-Dwebrtc.lib)
add_executable(NewRtc main.cpp)
set_property(TARGET NewRtc PROPERTY
        MSVC_RUNTIME_LIBRARY "MultiThreaded$<$<CONFIG:Debug>:Debug>")
target_link_libraries(NewRtc
        PRIVATE "C:/webrtc/src/out/Default/x86/obj/webrtc.lib")

But when I simply build the project, this error comes up for every implementation of WebRtc. Here you can see:

Clion Error Description

Please advise how I can simply use the webrtc library in any project on Windows using VS2019 or CLion. I am trying to solve these problems; I have tried multiple solutions from Stack Overflow and other communities, using cmake or adding flags inside the project properties.

I have tried my best to explain the complete setup and the associated problem so that someone might help me accordingly.

Seaborn count plot: each line to represent total count and non-zero count

Posted: 18 Sep 2021 08:39 AM PDT

I would like to plot a seaborn count plot as below:

df3 = pd.DataFrame({'Class': [1, 1, 2, 2, 2, 3, 3, 3],
                    'check': [0, 1, 0, 1, 0, 1, 0, 1]})
df3

   Class  check
0      1      0
1      1      1
2      2      0
3      2      1
4      2      0
5      3      1
6      3      0
7      3      1

sns.countplot(data=df3, y='Class', hue='check', orient='v')

I would like to get a result like this, but:

  • the blue line to represent all counts, not 0s only, so the first blue line would have a count of 2, the 2nd blue line a count of 3...
  • or, even more ideal, instead of 2 lines per row have only 1 line, with the total value (count of 0s and 1s) and the count of 1s on it.

From this: [current plot image] to this: [desired plot image]

I want to call a page in my appdrawer programmatically

Posted: 18 Sep 2021 08:39 AM PDT

I want to call a page in my app drawer on a button click from another page, without using the drawer. That is, when there is a submit, I want to redirect to another page in my app drawer.

Could not find com.github.linisme:Cipher.so:8581777457 [closed]

Posted: 18 Sep 2021 08:37 AM PDT

When I am building my project I am getting this error:

A problem occurred configuring root project 'mob-mola-android'.

Could not resolve all artifacts for configuration ':classpath'.
Could not find com.github.linisme:Cipher.so:8581777457.
Searched in the following locations:
  - https://repo.maven.apache.org/maven2/com/github/linisme/Cipher.so/8581777457/Cipher.so-8581777457.pom
  - https://jcenter.bintray.com/com/github/linisme/Cipher.so/8581777457/Cipher.so-8581777457.pom
  - https://jitpack.io/com/github/linisme/Cipher.so/8581777457/Cipher.so-8581777457.pom
Required by: project :

Possible solution:

And I guess this repository is no longer available, so is there any alternate solution? I have used this library in many applications.

How to replace lines in a text file stored in internal storage (Android Studio)

Posted: 18 Sep 2021 08:38 AM PDT

Hey, I am using this code but there is a problem: my file is on my Android device.

Full Path = "Android/data/File.txt";

The problem is that whenever I try to run this code it gives me an error:

"E/Parcel: Reading a NULL string not supported here."

"E/libEGL: Invalid file path for libcolorx-loader.so"

What should I do?

Thanks in advance, guys.

public class MainActivityFile extends AppCompatActivity {
    public static final String PATH = "Android/data/File.txt";

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main_file);

        replaceSelected("Do the dishes", "1");
    }

    public static void replaceSelected(String replaceWith, String type) {
        try {
            File infile = new File(Environment.getExternalStorageDirectory(), PATH);
            BufferedReader file = new BufferedReader(new FileReader(infile));
            StringBuffer inputBuffer = new StringBuffer();
            String line;

            while ((line = file.readLine()) != null) {
                inputBuffer.append(line);
                inputBuffer.append('\n');
            }
            file.close();
            String inputStr = inputBuffer.toString();

            System.out.println(inputStr); // display the original file for debugging

            // logic to replace lines in the string (could use regex here to be generic)
            if (type.equals("0")) {
                inputStr = inputStr.replace(replaceWith + "1", replaceWith + "0");
            } else if (type.equals("1")) {
                inputStr = inputStr.replace(replaceWith + "0", replaceWith + "1");
            }

            // display the new file for debugging
            System.out.println("----------------------------------\n" + inputStr);

            // write the new string with the replaced line OVER the same file
            // (writing to infile, not the bare PATH string, so the same absolute path is used)
            FileOutputStream fileOut = new FileOutputStream(infile);
            fileOut.write(inputStr.getBytes());
            fileOut.close();
        } catch (Exception e) {
            System.out.println("Problem reading file.");
        }
    }
}

How to count the unique duplicate values in each column

Posted: 18 Sep 2021 08:37 AM PDT

We have the following dataframe,

df = pd.DataFrame(data = {'A': [1,2,3,3,2,4,5,3],
                          'B': [9,6,7,9,2,5,3,3],
                          'C': [4,4,4,5,9,3,2,1]})
df

I want to create a new dataframe that, for every column name, shows the number of duplicated values.

E.g. 'B' has two values that are duplicated (9 and 3), so I want to print 2, etc.
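Not part of the question: one hedged sketch of that count, using `value_counts` per column and counting how many distinct values occur more than once:

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3, 3, 2, 4, 5, 3],
                   'B': [9, 6, 7, 9, 2, 5, 3, 3],
                   'C': [4, 4, 4, 5, 9, 3, 2, 1]})

# For each column: how many distinct values appear more than once?
dup_counts = df.apply(lambda s: int((s.value_counts() > 1).sum()))
print(dup_counts)
# A: 2 and 3 repeat -> 2;  B: 9 and 3 repeat -> 2;  C: only 4 repeats -> 1
```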

Connecting two ImageViews with each other with a line correctly on different screen-sizes and screen-orientations

Posted: 18 Sep 2021 08:39 AM PDT

I am looking for a way to connect two ImageViews with a line. The position of the ImageViews on the screen always differs. I want to draw these lines as reactions to a click of the user, so I need a programmatic solution. It also needs to work with different sizes of screens (mobile, tablet) and screen orientations (portrait, landscape).

My attempt so far has been to create a custom view and use Canvas to connect these two points:

private Paint paint = new Paint();
private Point pointA, pointB;

@Override
protected void onDraw(Canvas canvas) {
    paint.setColor(Color.BLACK);
    paint.setStrokeWidth(6);
    canvas.drawLine(pointA.x, pointA.y, pointB.x, pointB.y, paint);
    super.onDraw(canvas);
}

public void draw() {
    invalidate();
    requestLayout();
}

I get the position of the ImageViews with viewA.getLocationOnScreen(positionA) and then adjust this to select the center of the ImageView by adding (viewA.getWidth() / 2) and (viewA.getHeight() / 2). Unfortunately, with different screen sizes and screen orientations, it happens again and again that the points are not connected correctly, i.e. there is an offset. I have tried in various ways to correct the offset (e.g. including the status bar), unfortunately it never works for all screen sizes and orientations. I've also tried to react specifically for different screen sizes and orientations, but then it doesn't work even for almost the same screen sizes (e.g. it works for a Samsung Galaxy S10, but not for a Nokia 7.2). The following is just a sample code, I have tried it in very many different ways and did not get the desired result:

private int topOffset() {
    View globalView = findViewById(R.id.constraintLayoutStartBildschirm);
    DisplayMetrics dm = new DisplayMetrics();
    this.getWindowManager().getDefaultDisplay().getMetrics(dm);
    topOff = dm.heightPixels - globalView.getMeasuredHeight();
    return topOff;
}

private int leftOffset() {
    View globalView = findViewById(R.id.constraintLayoutStartBildschirm);
    DisplayMetrics dm = new DisplayMetrics();
    this.getWindowManager().getDefaultDisplay().getMetrics(dm);
    leftOff = dm.widthPixels - globalView.getMeasuredWidth();
    return leftOff;
}

private void zeichneStriche(ImageView viewA, ImageView viewB, Linien linie) {
    int[] positionA = new int[2];
    int[] positionB = new int[2];
    viewA.getLocationOnScreen(positionA);
    viewB.getLocationOnScreen(positionB);
    int xCenterA = positionA[0] + (int) (viewA.getWidth() / 1.5) - leftOffset();
    int xCenterB = positionB[0] + (int) (viewA.getWidth() / 1.5) - leftOffset();
    int yBottemA = positionA[1] + (int) (viewA.getWidth() / 1.8) - topOffset();
    int yBottemB = positionB[1] + (int) (viewB.getWidth() / 1.8) - topOffset();

    if (getResources().getConfiguration().orientation == Configuration.ORIENTATION_LANDSCAPE) {
        xCenterA = positionA[0] + (int) (viewA.getWidth() / 2) - leftOffset() - 20;
        xCenterB = positionB[0] + (int) (viewB.getWidth() / 2) - leftOffset() - 20;
        yBottemA = positionA[1] + viewA.getHeight() / 2 - topOffset();
        yBottemB = positionB[1] + viewB.getHeight() / 2 - topOffset();
    }

    DisplayMetrics metrics = new DisplayMetrics();
    getWindowManager().getDefaultDisplay().getMetrics(metrics);

    float yInches = metrics.heightPixels / metrics.ydpi;
    float xInches = metrics.widthPixels / metrics.xdpi;
    double diagonalInches = Math.sqrt(xInches * xInches + yInches * yInches);

    if ((getResources().getConfiguration().orientation == Configuration.ORIENTATION_LANDSCAPE) && diagonalInches >= 7.5) {
        xCenterA = positionA[0] + (int) (viewA.getWidth() / 2) - leftOffset() + 50;
        xCenterB = positionB[0] + (int) (viewB.getWidth() / 2) - leftOffset() + 50;
        yBottemA = positionA[1] + viewA.getHeight() / 2 - topOffset() + 5;
        yBottemB = positionB[1] + viewB.getHeight() / 2 - topOffset() + 5;
    }
    Point anfang = new Point(xCenterA, yBottemA);
    Point ende = new Point(xCenterB, yBottemB);
    linie.setPointA(anfang);
    linie.setPointB(ende);
    linie.draw();
}

Is there a solution where I can be sure that it will at least work for most screen sizes and orientations? Thanks in advance

Unable to implement any logic to scrape content from innermost pages using puppeteer

Posted: 18 Sep 2021 08:37 AM PDT

I've created a script using puppeteer to scrape the links of different authors from a webpage, traversing multiple pages by triggering a click on the next-page button. The script appears to be working correctly.

Although the content of this site is static, I intentionally used puppeteer within the following script only to learn as to how I can parse content from inner pages.

Given that I wish to go one layer deep to scrape description from such pages. How can I achieve that?

const puppeteer = require('puppeteer');

function run(pagesToScrape) {
    return new Promise(async (resolve, reject) => {
        try {
            if (!pagesToScrape) {
                pagesToScrape = 1;
            }
            const browser = await puppeteer.launch({ headless: false });
            const [page] = await browser.pages();
            await page.goto("https://quotes.toscrape.com/");
            let currentPage = 1;
            let urls = [];
            while (currentPage <= pagesToScrape) {
                let newUrls = await page.evaluate(() => {
                    let results = [];
                    let items = document.querySelectorAll('[class="quote"]');
                    items.forEach((item) => {
                        results.push({
                            authorUrl: 'https://quotes.toscrape.com' + item.querySelector("small.author + a").getAttribute('href'),
                            title: item.querySelector("span.text").innerText
                        });
                    });
                    return results;
                });
                urls = urls.concat(newUrls);
                if (currentPage < pagesToScrape) {
                    await Promise.all([
                        await page.waitForSelector('li.next > a'),
                        await page.click('li.next > a'),
                        await page.waitForSelector('[class="quote"]')
                    ]);
                }
                currentPage++;
            }
            browser.close();
            return resolve(urls);
        } catch (e) {
            return reject(e);
        }
    });
}

run(3).then(console.log).catch(console.error);

I can't show my database tables through the migration command in a PowerShell terminal (Windows 10)

Posted: 18 Sep 2021 08:37 AM PDT

I tried to run: php artisan migrate

I also tried to connect to MySQL using the terminal on Windows.

I got this error:

SQLSTATE[HY000] [2002] No connection could be made because the target machine actively refused it (SQL: select * from information_schema.tables where table_schema = firstproject and table_name = migrations and table_type = 'BASE TABLE')

at C:\Users\ekind\Desktop\LaravelTest\FirstProject\vendor\laravel\framework\src\Illuminate\Database\Connection.php:692
    688▕         // If an exception occurs when attempting to run a query, we'll format the error
    689▕         // message to include the bindings with SQL, which will make this exception a
    690▕         // lot more helpful to the developer instead of just the database's errors.
    691▕         catch (Exception $e) {
  ➜ 692▕             throw new QueryException(
    693▕                 $query, $this->prepareBindings($bindings), $e
    694▕             );
    695▕         }
    696▕     }

  1   C:\Users\ekind\Desktop\LaravelTest\FirstProject\vendor\laravel\framework\src\Illuminate\Database\Connectors\Connector.php:70
      PDOException::("SQLSTATE[HY000] [2002] No connection could be made because the target machine actively refused it")

  2   C:\Users\ekind\Desktop\LaravelTest\FirstProject\vendor\laravel\framework\src\Illuminate\Database\Connectors\Connector.php:70
      PDO::__construct()

.env file:

DB_CONNECTION=mysql
DB_HOST=localhost
DB_PORT=3306
DB_DATABASE=firstprojectdb
DB_USERNAME=root
DB_PASSWORD=Ekin1234

I tried to change my DB_HOST to "localhost" but it didn't work.

When I show the databases, I get these:

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| firstproject       |
| firstprojectdb     |
| information_schema |
| mysql              |
| performance_schema |
| sakila             |
| sys                |
| world              |
+--------------------+

But even when I create tables like this, they don't show up:

public function up()
{
    Schema::create('posts', function (Blueprint $table) {
        $table->increments('id');   // Identity(1,1) id int
        $table->string('title');    // title (creates a new string column on the table)
        $table->mediumText('body'); // body (creates a new medium text column on the table)
        $table->timestamps();
    });
}

As you see;

mysql> show tables;
Empty set (0.00 sec)

I even removed the "extension=pdo_mysql" line, but that didn't work for me either.

Please help me, guys.
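For reference, the "[2002] ... target machine actively refused it" error means that nothing accepted a TCP connection at the host/port PHP resolved — for example, MySQL listening only on IPv4 127.0.0.1 while "localhost" resolves to ::1, or MySQL bound to a different port. A minimal, framework-agnostic way to check what is actually reachable (a Python sketch, not specific to Laravel; the demo listener at the bottom just illustrates the accepted-vs-refused difference):

```python
import socket

def can_connect(host, port, timeout=2.0):
    """Return True if something accepts TCP connections at host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo: a port with a live listener accepts; the same port with no
# listener is "actively refused" -- exactly the condition in the error.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # OS picks a free port
listener.listen(1)
open_port = listener.getsockname()[1]

print(can_connect("127.0.0.1", open_port))   # True: something is listening
listener.close()
print(can_connect("127.0.0.1", open_port))   # False: connection refused
```

If `can_connect("127.0.0.1", 3306)` and `can_connect("::1", 3306)` disagree, switching `DB_HOST` to the address that works (e.g. `127.0.0.1`) is the usual fix.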

How to get the count of columns in each column family in HBASE

Posted: 18 Sep 2021 08:39 AM PDT

I have a table T with column families 'f' and 'm', each with 10 columns. How do I count the number of columns in each column family?

How to plot TFRecords labels in a histogram

Posted: 18 Sep 2021 08:39 AM PDT

Hello, I have many TFRecords files. I use Python TensorFlow and want to plot all labels in one histogram. A TFRecord is a pair of (image, label), so how can I extract all the labels? I have tried to extract labels and successfully plotted several batches:

all_label = []
for image, label in ds_train.take(10):
    all_label.append(label)
sns.distplot(all_label)

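The usual pattern is to iterate the whole dataset (not just `.take(10)`) and to flatten each label batch into the list (extend rather than append). A sketch of that accumulation, using plain-Python stand-ins for the (image, label) pairs — with real TFRecords, `label.numpy()` would give the values to extend with:

```python
from collections import Counter

# Stand-in for ds_train: an iterable of (image_batch, label_batch) pairs.
fake_ds = [
    (None, [0, 1, 1]),
    (None, [2, 1, 0]),
    (None, [2, 2, 2]),
]

all_labels = []
for _image, label_batch in fake_ds:   # iterate the whole dataset, not .take(10)
    all_labels.extend(label_batch)    # extend (flatten), don't append whole batches

print(Counter(all_labels))            # -> Counter({2: 4, 1: 3, 0: 2})
```

With seaborn, `sns.histplot(all_labels, discrete=True)` then plots the full histogram (`distplot` is deprecated in recent seaborn versions).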

RecognitionService: call for recognition service without RECORD_AUDIO permissions; extending RecognitionService

Posted: 18 Sep 2021 08:38 AM PDT

I am trying to extend RecognitionService to try out speech-to-text services other than the one provided by Google. To check whether SpeechRecognizer initializes correctly, dummy implementations are given for now. I get "RecognitionService: call for recognition service without RECORD_AUDIO permissions" when the check below is done inside RecognitionService#checkPermissions():

if (PermissionChecker.checkCallingPermissionForDataDelivery(this,
        android.Manifest.permission.RECORD_AUDIO, packageName, featureId,
        null /*message*/)
        == PermissionChecker.PERMISSION_GRANTED) {
    return true;
}

Note that I checked a similar reported issue and verified that, inside my extended service, this permission exists when checked with the following:

if (ContextCompat.checkSelfPermission(this, Manifest.permission.RECORD_AUDIO) != PackageManager.PERMISSION_GRANTED)   

Android manifest file:

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.texttospeech">

    <uses-permission android:name="android.permission.RECORD_AUDIO"/>
    <uses-permission android:name="android.permission.INTERNET"/>
    <uses-permission android:name="android.permission.FOREGROUND_SERVICE"/>

    <queries>
        <package android:name="com.google.android.googlequicksearchbox"/>
    </queries>

    <application
        android:name=".App"
        android:allowBackup="true"
        android:icon="@mipmap/ic_launcher"
        android:label="@string/app_name"
        android:roundIcon="@mipmap/ic_launcher_round"
        android:supportsRtl="true"
        android:theme="@style/AppTheme">
        <activity android:name=".MainActivity">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />

                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
        <service android:name=".SampleSpeechRecognizerService"
            android:exported="true"
            android:foregroundServiceType="microphone"
            android:permission="android.permission.RECORD_AUDIO">
            <intent-filter>
                <action android:name="android.speech.RecognitionService" />
                <category android:name="android.intent.category.DEFAULT" />
            </intent-filter>
        </service>
    </application>

</manifest>

MainActivity

package com.example.texttospeech;

import android.Manifest;
import android.content.ComponentName;
import android.content.Intent;
import android.content.pm.PackageManager;
import android.content.pm.ResolveInfo;
import android.os.Build;
import android.os.Bundle;
import android.speech.RecognitionListener;
import android.speech.RecognitionService;
import android.speech.RecognizerIntent;
import android.speech.SpeechRecognizer;
import android.util.Log;
import android.view.MotionEvent;
import android.view.View;
import android.widget.EditText;
import android.widget.ImageView;
import android.widget.Toast;

import androidx.appcompat.app.AppCompatActivity;
import androidx.core.app.ActivityCompat;
import androidx.core.content.ContextCompat;

import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

public class MainActivity extends AppCompatActivity {
    private static final String TAG = AppCompatActivity.class.getSimpleName();
    private Intent speechRecognizerIntent;
    public static final int PERMISSION_REQUEST_RECORD_AUDIO = 1;
    private SpeechRecognizer speechRecognizer;
    private EditText editText;
    private ImageView micButton;

    @Override
    protected void onCreate(final Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        editText = findViewById(R.id.text);
        micButton = findViewById(R.id.button);

        if (ContextCompat.checkSelfPermission(this, Manifest.permission.RECORD_AUDIO) != PackageManager.PERMISSION_GRANTED) {
            checkPermission();
        } else {
            configureSpeechListener();
        }

        boolean isSupported = SpeechRecognizer.isRecognitionAvailable(this);

        if (!isSupported) {
            Log.i(TAG, "Device has no Speech support");
        }

        micButton.setOnTouchListener(new View.OnTouchListener() {
            @Override
            public boolean onTouch(View view, MotionEvent motionEvent) {
                if (motionEvent.getAction() == MotionEvent.ACTION_UP) {
                    speechRecognizer.stopListening();
                }
                if (motionEvent.getAction() == MotionEvent.ACTION_DOWN) {
                    micButton.setImageResource(R.drawable.ic_mic_black_24dp);
                    speechRecognizer.startListening(speechRecognizerIntent);
                }
                return false;
            }
        });
    }

    private void configureSpeechListener() {
        //speechRecognizer = SpeechRecognizer.createSpeechRecognizer(this);

        ComponentName currentRecognitionCmp = null;

        List<ResolveInfo> list = getPackageManager().queryIntentServices(
                new Intent(RecognitionService.SERVICE_INTERFACE), 0);
        for (ResolveInfo info : list) {
            currentRecognitionCmp = new ComponentName(info.serviceInfo.packageName, info.serviceInfo.name);
        }
        speechRecognizer = SpeechRecognizer.createSpeechRecognizer(this, currentRecognitionCmp);

        speechRecognizerIntent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        speechRecognizerIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        speechRecognizerIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, Locale.getDefault());

        speechRecognizer.setRecognitionListener(new SampleSpeechRecognitionListener());
    }

    @Override
    protected void onDestroy() {
        super.onDestroy();
        speechRecognizer.destroy();
    }

    private void checkPermission() {
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
            ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.RECORD_AUDIO}, PERMISSION_REQUEST_RECORD_AUDIO);
        }
    }

    @Override
    public void onRequestPermissionsResult(int requestCode, String[] permissions, int[] grantResults) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults);
        switch (requestCode) {
            case PERMISSION_REQUEST_RECORD_AUDIO:
                // If request is cancelled, the result arrays are empty.
                if (grantResults.length > 0 &&
                        grantResults[0] == PackageManager.PERMISSION_GRANTED) {
                    configureSpeechListener();
                } else {
                    Toast.makeText(this, "Microphone permission required to proceed", Toast.LENGTH_SHORT).show();
                }
                return;
        }
    }

    private class SampleSpeechRecognitionListener implements RecognitionListener {
        @Override
        public void onReadyForSpeech(Bundle params) {
            Log.i("Sample", "ReadyForSpeech");
        }

        @Override
        public void onBeginningOfSpeech() {
            editText.setText("");
            editText.setHint("Listening...");
            Log.i("Sample", "onBeginningOfSpeech");
        }

        @Override
        public void onRmsChanged(float rmsdB) {
        }

        @Override
        public void onBufferReceived(byte[] buffer) {
        }

        @Override
        public void onEndOfSpeech() {
            Log.i("Sample", "onEndOfSpeech");
        }

        @Override
        public void onError(int error) {
            Log.e("Sample", "Error occured.." + error);
        }

        @Override
        public void onResults(Bundle bundle) {
            Log.i("Sample", "onResults");
            micButton.setImageResource(R.drawable.ic_mic_black_off);
            ArrayList<String> data = bundle.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
            editText.setText(data.get(0));
            Log.i("Sample", data.get(0));
        }

        @Override
        public void onPartialResults(Bundle partialResults) {
            Log.i("Sample", "onPartialResults");
        }

        @Override
        public void onEvent(int eventType, Bundle params) {
            Log.i("Sample", "onEvent");
        }
    }
}

SampleSpeechRecognizerService

package com.example.texttospeech;

import static com.example.texttospeech.App.CHANNEL_ID;

import android.app.Notification;
import android.content.Intent;
import android.os.Bundle;
import android.os.RemoteException;
import android.speech.RecognitionService;
import android.speech.SpeechRecognizer;
import android.util.Log;

import java.util.ArrayList;

public class SampleSpeechRecognizerService extends RecognitionService {

    private RecognitionService.Callback mListener;
    private Bundle mExtras;

    @Override
    public void onCreate() {
        super.onCreate();
        Log.i("Sample", "Service started");
        startForeground(new Intent(), 1, 1);
    }

    private int startForeground(Intent intent, int flags, int startId) {
        Notification notification = new Notification.Builder(this, CHANNEL_ID)
                .setContentTitle("Speech Service")
                .setContentText("Speech to Text conversion is ongoing")
                .setSmallIcon(R.drawable.ic_android)
                .build();
        startForeground(1, notification);

        return START_NOT_STICKY;
    }

    @Override
    public void onDestroy() {
        super.onDestroy();
        Log.i("Sample", "Service stopped");
    }

    @Override
    protected void onStartListening(Intent recognizerIntent, Callback listener) {
        mListener = listener;
        Log.i("Sample", "onStartListening");
        mExtras = recognizerIntent.getExtras();
        if (mExtras == null) {
            mExtras = new Bundle();
        }
        onReadyForSpeech(new Bundle());
        onBeginningOfSpeech();
    }

    @Override
    protected void onCancel(Callback listener) {
        Log.i("Sample", "onCancel");
        onResults(new Bundle());
    }

    @Override
    protected void onStopListening(Callback listener) {
        Log.i("Sample", "onStopListening");
        onEndOfSpeech();
    }

    protected void onReadyForSpeech(Bundle bundle) {
        try {
            mListener.readyForSpeech(bundle);
        } catch (RemoteException e) {
            // Ignored
        }
    }

    protected void afterRecording(ArrayList<String> results) {
        Log.i("Sample", "afterRecording");
        for (String item : results) {
            Log.i("RESULT", item);
        }
    }

    protected void onRmsChanged(float rms) {
        try {
            mListener.rmsChanged(rms);
        } catch (RemoteException e) {
            // Ignored
        }
    }

    protected void onResults(Bundle bundle) {
        try {
            mListener.results(bundle);
        } catch (RemoteException e) {
            // Ignored
        }
    }

    protected void onPartialResults(Bundle bundle) {
        try {
            mListener.partialResults(bundle);
        } catch (RemoteException e) {
            // Ignored
        }
    }

    protected void onBeginningOfSpeech() {
        try {
            mListener.beginningOfSpeech();
        } catch (RemoteException e) {
            // Ignored
        }
    }

    protected void onEndOfSpeech() {
        try {
            mListener.endOfSpeech();
        } catch (RemoteException e) {
            // Ignored
        }

        ArrayList<String> results = new ArrayList<>();
        results.add("1");
        results.add("2");
        results.add("3");

        Bundle bundle = new Bundle();
        bundle.putStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION, results);

        afterRecording(results);
    }

    protected void onBufferReceived(byte[] buffer) {
        try {
            mListener.bufferReceived(buffer);
        } catch (RemoteException e) {
            // Ignored
        }
    }
}

I am running on Android 11 on a Google Pixel 4 XL. As there are privacy restrictions for microphone access in Android 11, I ran the extended service as a foreground service as well, but I still get the same error. Has anyone faced this issue on Android 11? Thanks in advance.

telegram bot webhook self-signed certificate problem

Posted: 18 Sep 2021 08:39 AM PDT

I have a static IP address and I want to use it as a Telegram bot webhook. In other words, my bot application runs on my local system, and I configured my modem to forward requests from that IP address to my local server:port. This method works for other applications running on my local system, but I have a problem with SSL.

To set the webhook, I first generate a self-signed certificate like this:

openssl req -newkey rsa:2048 -sha256 -nodes -keyout PRIVATE.key -x509 -days 365 -out PUBLIC.pem -subj "/C=NG/ST=Lagos/L=Lagos/O=YOUR_NAME_OR_COMPANY_NAME/CN=<MY_IP:PORT> OR <MY_IP>"

This generates the PUBLIC.pem file, which I send to the setWebhook API. The result is ok, but I always get the result below from the getWebhookInfo method:

{
   "ok": true,
   "result": {
      "url": ".../bot/receive",
      "has_custom_certificate": true,
      "pending_update_count": 15,
      "last_error_date": 1609911454,
      "last_error_message": "SSL error {error:14095044:SSL routines:ssl3_read_n:internal error}",
      "max_connections": 40,
      "ip_address": "..."
   }
}

Also, in my application I have enabled SSL support with the .p12 equivalent of the .pem certificate, but it is not working. Is there any way to do this? Thanks in advance.
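One common cause of that handshake error is a certificate whose CN does not exactly match what Telegram connects to: the CN should be the bare IP from the webhook URL (never `IP:PORT`), and modern TLS stacks also expect the IP in subjectAltName. A hedged sketch of generating such a certificate (203.0.113.10 is a placeholder IP to substitute; `-addext` needs OpenSSL 1.1.1 or newer):

```shell
# CN must be the bare IP used in the webhook URL -- no port -- and the
# same IP should also appear in subjectAltName:
IP=203.0.113.10
openssl req -newkey rsa:2048 -sha256 -nodes \
  -keyout PRIVATE.key -x509 -days 365 -out PUBLIC.pem \
  -subj "/C=NG/ST=Lagos/L=Lagos/O=MyCompany/CN=${IP}" \
  -addext "subjectAltName=IP:${IP}"

# Verify what the certificate actually claims before uploading it:
openssl x509 -in PUBLIC.pem -noout -subject
```

If the subject printed here differs from the IP in the webhook URL, Telegram's TLS client will reject the connection even though setWebhook itself reports ok.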

Angular Schematics - Applying thousands of changes on a tree causes the error "Maximum call stack size exceeded"

Posted: 18 Sep 2021 08:38 AM PDT

I am trying to migrate a big AngularJS project to Angular. I found Angular Schematics to be a good way to automate some tasks.

My first task is to create components for each folder and I have about 1200 components to create.

My function look like this:

return (tree: Tree, _context: SchematicContext) => {
    const directory = tree.getDir('my_path/pages');

    const rules: Rule[] = [];
    directory.visit((filePath) => {
        if (!filePath.endsWith('config.ts')) {
            return;
        }

        const parsedPath = parseName(dirPath, basename(filePath).split('.')[0]);
        options = {
            path: parsedPath.path,
            name: parsedPath.name
        };

        const templateSource = apply(
            url('./files'),
            [
                applyTemplates({
                    ...strings,
                    ...options
                }),
                move(parsedPath.path)
            ]
        );

        const rule = mergeWith(templateSource);
        rules.push(rule);
    });

    return chain(rules);
};

So I end up chaining 1200 rules on my tree, which causes the error "Maximum call stack size exceeded". How can I effectively apply model-based component creation?

*The code works for a smaller project, e.g. around 200 component creations.

I created an issue on the Github repo of Angular-Cli
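For context on why the rule count matters: `chain` composes every rule into one deeply nested call chain, so stack depth grows with the number of rules, which is the likely source of the error. A plain-JavaScript sketch of the difference (pure functions stand in for schematic `Rule`s here; batching the chained rules, or writing files directly with `tree.create` instead of one `mergeWith` per component, are the workarounds this shape suggests):

```javascript
// Pure-function stand-ins for schematic Rules (illustration only).
const rules = Array.from({ length: 1200 }, (_, i) => (tree) => {
  tree.files.push(`component-${i}.ts`);
  return tree;
});

// Deep composition -- roughly what chaining amounts to -- builds one
// nested closure per rule, so stack depth grows with the rule count:
const composed = rules.reduceRight(
  (next, rule) => (tree) => next(rule(tree)),
  (tree) => tree
);
console.log(composed({ files: [] }).files.length); // 1200 (depth grows with rules)

// Iterative application touches one rule per loop step, with constant
// stack depth no matter how many rules there are:
let tree = { files: [] };
for (const rule of rules) {
  tree = rule(tree);
}
console.log(tree.files.length); // 1200
```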

How to zip more than 4 publishers

Posted: 18 Sep 2021 08:37 AM PDT

I'm using Swift Combine for my API requests. Now I'm facing a situation where I want to zip together more than 4 parallel requests. Before, I had exactly 4 requests that I zipped together using the Zip4 operator. I can imagine doing the zipping in multiple steps, but I don't know how to write the receiveValue for it.

Here's a simplification of my current code with 4 parallel requests:

Publishers.Zip4(request1, request2, request3, request4)
    .sink(receiveCompletion: { completion in
        // completion code if all 4 requests completed
    }, receiveValue: { request1Response, request2Response, request3Response, request4Response in
        // do something with request1Response
        // do something with request2Response
        // do something with request3Response
        // do something with request4Response
    })
    .store(in: &state.subscriptions)
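A zip of zips is one way to do the multi-step zipping: Zip's output is itself a tuple, so the Zip4 of the first four requests can be zipped with a Zip of the rest, and receiveValue then unpacks the nested tuples. A sketch continuing the simplified code above (request5 and request6 are hypothetical additional publishers of the same kind):

```swift
Publishers.Zip(
    Publishers.Zip4(request1, request2, request3, request4),
    Publishers.Zip(request5, request6)
)
.sink(receiveCompletion: { completion in
    // completion code if all 6 requests completed
}, receiveValue: { firstFour, lastTwo in
    let (request1Response, request2Response, request3Response, request4Response) = firstFour
    let (request5Response, request6Response) = lastTwo
    // do something with each response
})
.store(in: &state.subscriptions)
```

The same nesting extends to any count, at the cost of one extra tuple destructuring per level.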

Taking screenshot on Emulator from Android Studio

Posted: 18 Sep 2021 08:39 AM PDT

I know this is probably the silliest question, but still: I don't know how to take a screenshot of the emulator via Android Studio. I recently switched from Eclipse to Android Studio and could not find it anywhere; I tried to search the web too, but no help.
