Thursday, June 3, 2021

Recent Questions - Stack Overflow

Assuming a Linux box has a huge disk cache, is it significantly beneficial to performance for applications to buffer writes?

Posted: 03 Jun 2021 08:26 AM PDT

$ free -wh
               total        used        free      shared     buffers       cache   available
Mem:           125G         30G         59G        1.3G        837M         35G         92G
Swap:          7.6G        700K        7.6G

I understand that applications can buffer disk writes for various reasons, not necessarily for performance, but assume one is going to write complete "records" about 100 times per second, each perhaps 1 KB in size, in one call to the OS-level write function (this one):

#include <unistd.h>

ssize_t write(int fd, const void *buf, size_t count);

Should these be accumulated into near 4k blocks before writing or is there little benefit in doing that?

I'm talking about significant benefits, worth the extra effort, risk and potential data loss of buffering multiple records internally.

Does the Linux disk cache "abstract away" the need for apps to do their own buffering?

I know there are always exceptions to blanket rules, and your answers can bring those up, but I'm mainly asking about applications that need to write sequential, log-type files at perhaps 100 records per second.
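
For scale, here is a minimal sketch (in Python, with a hypothetical record source) of the "accumulate into roughly 4 KiB blocks, then write once" approach the question is weighing against writing each record individually:

    import os

    BLOCK_SIZE = 4096  # accumulate roughly one page before writing

    def write_records(fd, records):
        # records: an iterable of bytes objects, ~1 KB each (hypothetical source)
        buf = bytearray()
        for rec in records:
            buf += rec
            if len(buf) >= BLOCK_SIZE:
                os.write(fd, buf)   # one syscall per ~4 KiB instead of one per record
                buf.clear()
        if buf:
            os.write(fd, buf)       # flush the tail

The only difference between the two strategies is the number of write() syscalls issued; whether that matters against a large page cache is exactly what is being asked.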

How to calculate inputted values using a while loop c++?

Posted: 03 Jun 2021 08:25 AM PDT

How do you use a while loop to add up multiple values, exit the loop at a given point, and display the tallied amount?

Note the following example. Test your program by entering 7 for the number of items and the following values for the calories: 7 - 120 60 150 600 1200 300 200

If your logic is correct, the following will be displayed: Total calories eaten today = 2631

Below is what I have written, what I require is understanding the calculation for the total calories.

#include <iostream>

using namespace std;

int main()
{
    int numberOfItems;
    int count = 1; //loop counter for the loop
    int caloriesForItem;
    int totalCalories;
    cout << "How many items did you eat today? ";
    cin >> numberOfItems;
    cout << "Enter the number of calories in each of the "
         << numberOfItems << " items eaten:  " << endl;

    while (count <= numberOfItems) // count cannot be more than the number of items
    {
        cout << "Enter calorie: ";
        cin >> caloriesForItem;
        totalCalories = ; //?
        ++count;
    }
    cout << "Total calories  eaten today  = " << totalCalories;

    return 0;
}

How do I store a value, then keep adding onto that value, repeatedly, until the program reaches the point where it exits according to the count value?
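
For reference, the usual pattern is an accumulator: initialize the total to zero before the loop and add each entered value onto it. A minimal, self-contained C++ sketch of just that pattern (variable names borrowed from the code above):

    #include <iostream>
    using namespace std;

    int main()
    {
        int numberOfItems = 0, caloriesForItem = 0;
        int totalCalories = 0;                    // start the running total at zero
        int count = 1;
        cout << "How many items did you eat today? ";
        cin >> numberOfItems;
        while (count <= numberOfItems)
        {
            cout << "Enter calorie: ";
            cin >> caloriesForItem;
            totalCalories += caloriesForItem;     // add this item onto the stored total
            ++count;
        }
        cout << "Total calories eaten today = " << totalCalories << endl;
        return 0;
    }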

Blender mesh shows black lines after extruding

Posted: 03 Jun 2021 08:25 AM PDT

I am trying to extrude a simple 2D mesh, but the extruded region shows black lines after extruding. I have no hidden or overlapping meshes as far as I know. Is there a way to fix this?

(Screenshot of the Blender issue)

Thanks for your help!

How can I exclude values that already exist in the DB (Django)?

Posted: 03 Jun 2021 08:25 AM PDT

I am trying to show only those saloons that are not yet linked to any user, but my queryset returns the saloons that are already linked to a user.

Model.py

class SaloonRegister(models.Model):
    saloon_name = models.CharField(max_length=50)
    owner_name = models.CharField(max_length=30)
    address = models.CharField(max_length=30)
    contact_no = models.BigIntegerField()
    is_active = models.BooleanField(default=False)


class SignUp(models.Model):
    user = models.ForeignKey(User, on_delete=models.CASCADE)
    saloon = models.ForeignKey(SaloonRegister, on_delete=models.CASCADE)
    contact_no = models.BigIntegerField()

view.py

class AllSaloon(TemplateView):
    template_name = 'allSaloon.html'

    def get(self, request, *args, **kwargs):
        saloon = SignUp.objects.filter(saloon_id__isnull=False)
        return render(request, self.template_name, {'saloon': saloon})
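
For what it's worth, one way to express "saloons not yet linked to any user" is to query SaloonRegister itself and filter on the reverse relation from SignUp. A minimal sketch, assuming the default reverse name signup (no related_name is set on the FK):

    from django.shortcuts import render
    from django.views.generic import TemplateView

    from .models import SaloonRegister


    class AllSaloon(TemplateView):
        template_name = 'allSaloon.html'

        def get(self, request, *args, **kwargs):
            # Keep only saloons that have no SignUp row pointing at them.
            free_saloons = SaloonRegister.objects.filter(signup__isnull=True)
            return render(request, self.template_name, {'saloon': free_saloons})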

Exclude folders from solrfal (TYPO3): make it configurable

Posted: 03 Jun 2021 08:25 AM PDT

I am using TYPO3 7 with Solr and solrfal for indexing files. Excluding folders only works if the TypoScript is given inside the TypoScript file. Can't we make this setting configurable, e.g. directly in the TypoScript object browser, as in the image shown?

(Screenshot of the TypoScript object browser)

When the configuration is given like this, excludeFolders is not taken into account; the value is only considered when it is set in the TypoScript in the file.

Can we make it configurable?

Thank you

Around annotation executed twice using WebFlux

Posted: 03 Jun 2021 08:25 AM PDT

I'm facing a weird behaviour while using AOP with AspectJ.

Basically, the @Around method is called either once or twice, and while debugging I can't find the reason why it is executed twice (I mean, what triggers the second execution of the method).

Here is some code:

@Aspect
@Slf4j
public class ReactiveRedisCacheAspect {

    @Pointcut("@annotation(com.myluxottica.sdk.cache.aop.annotations.ReactiveRedisCacheable)")
    public void cacheablePointCut() {}

    @Around("cacheablePointCut()")
    public Object cacheableAround(final ProceedingJoinPoint proceedingJoinPoint) {
        // DO SOME BUSINESS LOGIC TO RETRIEVE OR SAVE ON REDIS
    }
}

@Target({ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
@Documented
public @interface ReactiveRedisCacheable {
    String key();
    String cacheName();
    long duration() default 1L;
}

So far I would have expected cacheableAround to be executed only once, but what happens is a bit weird: if the object is present on Redis the method is executed only once, but if it is not present the method is executed twice, which doesn't make sense; moreover, it should be the business logic that manages what to do inside the method.

Thanks in advance!

Reset/clear viewmodel or livedata

Posted: 03 Jun 2021 08:25 AM PDT

I am following the single-activity app pattern advised by Google, so if I want to share data between Fragments I have to share a ViewModel whose owner must be the parent Activity. The problem arises because I want to share data between only two Fragments, independently of the others.

Imagine I have MainFragment, CreateItemFragment and ScanDetailFragment. From the first one I navigate to CreateItemFragment, in which, whenever I press a button, I navigate to ScanDetailFragment in order to scan a barcode; in consequence, through a LiveData object inside the ViewModel, I can get the scanned value back into CreateItemFragment once ScanDetailFragment finishes. The problem comes when I decide to cancel the creation of the item: I go back to the MainFragment, and because the ViewModel's owner was the Activity's lifecycle, once I go again into CreateItemFragment, the previously scanned value is still there.

Any idea to reset that ViewModel?

GroupBy (SQL vs MySQL)

Posted: 03 Jun 2021 08:25 AM PDT

I have a similar issue as this person: GROUP BY (MySQL vs SQL server)

SELECT
    SubscriberKey,
    EmailAddress,
    MIN(CreatedDate) AS CreateDate,
FROM [_ListSubscribers]
WHERE EmailAddress = 'email@address.com'
AND STATUS = 'active'
GROUP BY EmailAddress

How do I see what the SubscriberKey is that is tied to the email address with the oldest record? The SubscriberKey is a TEXT field. I tried using MIN(SubscriberKey) but it doesn't return the correct value. And adding it in the GroupBy Clause also fails to return the correct value.

NLTK: Cooccurence Matrix function

Posted: 03 Jun 2021 08:25 AM PDT

Brain fog

The dataframe I'm working with has been preprocessed and now contains a series of reviews with tokenized data:

tokenized tagged lower_tagged

I have created two lists with the top 1,000 most frequent occurrences.

lists:

cent - (aka centre) Nouns

cont - (aka context) verbs or adjectives

What I want to do with these vocabulary lists is create a co-occurrence matrix where, for each cent word, I keep track of how many of the cont words co-occur with it.

For example, if I choose the cent word 'library':

'a *large* -library- with *plentiful*  *resources*  

That being: large, plentiful and resources.

I'm trying to wrap this in a function, get_coocurrences(df, centre, context), which:

- reads in df

- reads my two lists, cent and cont

- returns a dictionary of dictionaries in the form of the example above

- decides how to deal with exceptional cases, i.e. centre and context words both being part of the vocabulary, since a word may be frequent both as a noun and as a verb

A sketch of such a function follows this list.
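
Here is a minimal Python sketch, assuming each row of the dataframe holds one review's tokens in a column named tokenized (the column name, and counting co-occurrence at review level, are assumptions about the preprocessing):

    from collections import defaultdict

    def get_coocurrences(df, centre, context, token_col="tokenized"):
        # Count, for each centre word, how often each context word
        # appears in the same review.
        centre_set, context_set = set(centre), set(context)
        counts = defaultdict(lambda: defaultdict(int))
        for tokens in df[token_col]:
            tokens = [t.lower() for t in tokens]
            present_context = [t for t in tokens if t in context_set]
            for t in tokens:
                if t in centre_set:
                    for c in present_context:
                        counts[t][c] += 1
        return {k: dict(v) for k, v in counts.items()}

Called on the example above, counts['library'] would include entries for 'large', 'plentiful' and 'resources'.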

Return list as concatenated string

Posted: 03 Jun 2021 08:25 AM PDT

I have input data as test="122 drshshs 000 dkkdkdk 200"

<#list test?split(" ") as curr>
${curr}
</#list>

In the output I am getting:

122
drshshs
000
dkkdkdk
200

Is there any FreeMarker shorthand function that can directly give the output below, instead of looping and appending each string to a variable?

122,drshshs,000,dkkdkdk,200  

I am having an issue with my pandas to_csv function

Posted: 03 Jun 2021 08:25 AM PDT


I am using Python 3.8 and having an issue with pandas. I am trying to split a string; in the interpreter it works fine, but in my CSV file the data written by pandas is wrong.
My code is:

import pandas as pd

l = []
t = "Mahindra KUV100 NXT"
m = t.split("KUV100")[0].strip()
l += m
dict1 = {'Name': l}
df = pd.DataFrame(dict1)
df.to_csv('file.csv')

Output in interpreter:

'Mahindra'

Output in CSV file:

      Name
0     KUV100
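
As an aside, note that += on a Python list extends it with the individual characters of a string, while append adds the string as a single element; a minimal sketch of the difference:

    l = []
    l += "Mahindra"        # extends the list with individual characters
    print(l)               # ['M', 'a', 'h', 'i', 'n', 'd', 'r', 'a']

    l2 = []
    l2.append("Mahindra")  # adds the whole string as one element
    print(l2)              # ['Mahindra']

That difference is the most likely reason the CSV contents do not match what the interpreter shows for m.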

powershell - how to find and rename a specific field in json file

Posted: 03 Jun 2021 08:26 AM PDT

I have the following json file

{
  "Encryptme": false,
  "Values": {
    "widget_STORAGE": "somevalue",
  },
  "otherdata": {}
}

I need to search for the field whose name contains _STORAGE and rename it. The final JSON should look like this:

{
  "Encryptme": false,
  "Values": {
    "NewWidgetName": "somevalue",
  },
  "otherdata": {}
}

So far, I've figured out how to grab the json content, I think:

$content = Get-Content 'mytest.json' -raw | ConvertFrom-Json
echo $content

$content.update | % {if(???? -like '*_STORAGE'){???}}
$content | ConvertTo-Json -depth 32 | set-content 'mytest.json'

But I don't know how to look for a field ending in _STORAGE. I am presently trying to Google regex in PowerShell.

Thanks.

How can we generate tuples from DataFrame columns?

Posted: 03 Jun 2021 08:25 AM PDT

I have a dataframe like this:

    top1    top2    top3
0   0       13      20
1   1       14      23
2   2       11      25
3   3       13      20
4   4       10      21
5   5       19      13

I want to generate tuples like this: [("0", "13"), ("0", "20"), ("1", "14"), ("2", "11"), ("2", "25"), ...].

How can we do that if possible?
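
A minimal pandas sketch of one way to build such pairs as strings, pairing top1 with top2 and top1 with top3 (which is the pattern the example suggests), assuming the dataframe above is df:

    import pandas as pd

    df = pd.DataFrame({"top1": [0, 1, 2, 3, 4, 5],
                       "top2": [13, 14, 11, 13, 10, 19],
                       "top3": [20, 23, 25, 20, 21, 13]})

    pairs = []
    for _, row in df.iterrows():
        pairs.append((str(row["top1"]), str(row["top2"])))  # (top1, top2)
        pairs.append((str(row["top1"]), str(row["top3"])))  # (top1, top3)

    print(pairs[:4])  # [('0', '13'), ('0', '20'), ('1', '14'), ('1', '23')]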

Scraping a website who always has the same URL using Selenium

Posted: 03 Jun 2021 08:26 AM PDT

I am currently scraping a certain website, but the issue is that this website always has the same URL, which is not allowing me to scrape it correctly. I am relatively new to Selenium and I'm currently trying to figure out how I could manage to scrape the given site. The site is : " https://fcraonline.nic.in/fc3_amount.aspx ". I am looking to scrape Each district in each State in each year. This is the code that I have written so far :

from selenium import webdriver
from bs4 import BeautifulSoup
import pandas as pd

driver = webdriver.Chrome(executable_path = "./chromedriver.exe")

driver.get("https://fcraonline.nic.in/fc3_amount.aspx")

# find_elements_by_xpath returns an array of selenium objects.
titles_element = driver.find_elements_by_xpath("adiv[@class='col-md-12']")
# use list comprehension to get the actual repo titles and not the selenium objects.
titles = [x.text for x in titles_element]
# print out all the titles.
print('titles:')
print(titles, '\n')

If someone could guide me/teach me to solve the issue that would be great. I thank you all for your time.
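
Since the page keeps one URL and refreshes via form postbacks, the usual Selenium approach is to drive the page's dropdowns rather than the URL. A rough sketch of that idea; the element IDs below are placeholders, not taken from the actual page, so they need to be replaced after inspecting the HTML:

    from selenium import webdriver
    from selenium.webdriver.support.ui import Select
    import time

    driver = webdriver.Chrome(executable_path="./chromedriver.exe")
    driver.get("https://fcraonline.nic.in/fc3_amount.aspx")

    # Hypothetical IDs: inspect the page to find the real dropdown IDs.
    Select(driver.find_element_by_id("ddl_year")).select_by_index(1)
    Select(driver.find_element_by_id("ddl_state")).select_by_index(1)
    time.sleep(2)  # crude wait for the postback to refresh the results

    for row in driver.find_elements_by_xpath("//table//tr"):
        print(row.text)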

Plot graph with vertical labels on the x-axis Matplotlib

Posted: 03 Jun 2021 08:25 AM PDT

To continue my research on how to plot an XML file, and to keep checking my code, I first applied a split to signal.attrib["Value"], since it contains some string values and what I'm interested in is the numeric values.

And as you can see below, I relied on the documentation for Pandas and SQL Compare (link omitted):

import xml.etree.ElementTree as ET
import pandas as pd
from matplotlib import pyplot as plot


def transformData(rootXML):
    print("File:", rootXML)
    file_xml = ET.parse(rootXML)
    data_XML = [
        {"Name": signal.attrib["Name"],
         # "Value": signal.attrib["Value"]
         "Value": int(signal.attrib["Value"].split(' ')[0])
         } for signal in file_xml.findall(".//Signal")
    ]

    signals_df = pd.DataFrame(data_XML)

    signals_df[(signals_df["Name"] == 'Status') |
               (signals_df["Name"] == 'SetDSP') |
               (signals_df["Name"] == 'HMI') |
               (signals_df["Name"] == 'Delay') |
               (signals_df["Name"] == 'AutoConfigO_Rear') |
               (signals_df["Name"] == 'AutoConfigO_Front') |
               (signals_df["Name"] == 'AutoConfigO_Drvr') |
               (signals_df["Name"] == 'AutoConfigO_Allst') |
               (signals_df["Name"] == 'RUResReqstStat') |
               (signals_df["Name"] == 'RUReqstrSystem') |
               (signals_df["Name"] == 'RUSource') |
               (signals_df["Name"] == 'DSP') |
               (signals_df["Name"] == 'CurrTUBand') |
               (signals_df["Name"] == 'DrStatDrv') |
               (signals_df["Name"] == 'PW_Chim') |
               (signals_df["Name"] == 'BtnID') |
               (signals_df["Name"] == 'Cod_BtnID') |
               (signals_df["Name"] == 'SetVol') |
               (signals_df["Name"] == 'Lock_Stat')].plot(kind='line', rot=0)
    plot.title('Changing signals every time they occur')
    plot.xlabel('Signal name')
    plot.ylabel('Signal value')
    plot.show()
    plot.clf()
    plot.close()

I ran it with a complete XML file with lots of signals, like the one found in this link (link omitted); I'm linking it because the length of the XML does not allow me to attach it here.

So once I compiled all the data, I ran the chart diagram and it comes out like this (Not all labels are in the file):

(Screenshot of the resulting line plot)

Actually, what I have to use is something like the "switchpoint trace" graph near the bottom of this page (link omitted). But for now I'm still working on it, and a line graph serves my purpose.

But how do I divide the x-axis by the label names ('Name') instead of the numbers 0, 20, 40, etc.? That is, how do I make all the names appear on the graph and place them vertically? Is it possible to do this? Or, alternatively, could Matplotlib convert the labels to numbers and then print the mapping from numbers to labels next to the graph, to make it more understandable? Thanks in advance.
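
For the tick-label part of the question, Matplotlib lets you place arbitrary strings on the x-axis and rotate them. A minimal, self-contained sketch with made-up signal names and values (not the data from the XML):

    import matplotlib.pyplot as plt

    names = ['Status', 'SetDSP', 'HMI', 'Delay', 'DSP']   # example labels only
    values = [1, 0, 3, 2, 1]

    plt.plot(range(len(values)), values)
    plt.xticks(range(len(names)), names, rotation='vertical')  # one tick per name, drawn vertically
    plt.tight_layout()  # keep the rotated labels from being clipped
    plt.show()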

Python Dataframe Merge Boolean Columns Data into One Column Data

Posted: 03 Jun 2021 08:25 AM PDT

I have a data frame with multiple columns. I want to merge columns into one column data.

My code:

df =
     A   foo   goo
0   10   Y     NaN
1   40   NaN   Y
2   80   Y     NaN

Expected answer:

df =
     A   Group
0   10   foo
1   40   goo
2   80   foo

My approach:

df['foo'].replace('Y','foo',inplace=True)
df['goo'].replace('Y','goo',inplace=True)
df['Group'] = df['foo']+df['goo']

df =
     A   foo   goo   Group
0   10   foo   NaN   NaN
1   40   NaN   goo   NaN
2   80   foo   NaN   NaN

In my attempt, every element of Group turns into NaN.
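
The NaNs appear because adding anything to NaN yields NaN. One sketch of a workaround is to fill each column from the other instead of adding them (recreating the example frame above):

    import numpy as np
    import pandas as pd

    df = pd.DataFrame({"A": [10, 40, 80],
                       "foo": ["Y", np.nan, "Y"],
                       "goo": [np.nan, "Y", np.nan]})

    df["foo"] = df["foo"].replace("Y", "foo")
    df["goo"] = df["goo"].replace("Y", "goo")
    df["Group"] = df["foo"].fillna(df["goo"])  # take foo where present, else goo

    print(df[["A", "Group"]])
    #     A Group
    # 0  10   foo
    # 1  40   goo
    # 2  80   foo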

Flutter - Listview scroll behind another widget

Posted: 03 Jun 2021 08:25 AM PDT

I am trying to build an appointment view in Flutter. All widgets are initially placed fine, but when I scroll the ListView (the list of available hours), the scrolling animation interferes with the element placed on top of it (the calendar). I know I could use a dropdown, but I prefer a list. I have tried several ideas from Stack Overflow but still cannot make it work correctly.

Images for reference:

how it looks when started

after start scrolling

//simple calender
import 'package:flutter/material.dart';
import 'package:hswebapp/days.dart';
import 'package:hswebapp/hours.dart';
import 'package:table_calendar/table_calendar.dart';

void main() {
  runApp(MyApp());
}

class MyApp extends StatelessWidget {
  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Appointment view',
      theme: ThemeData(
        primarySwatch: Colors.purple,
      ),
      home: MyHomePage(title: 'Appointment view'),
    );
  }
}

class MyHomePage extends StatefulWidget {
  MyHomePage({Key? key, required this.title}) : super(key: key);

  final String title;

  @override
  _MyHomePageState createState() => _MyHomePageState();
}

class _MyHomePageState extends State<MyHomePage> {
  CalendarFormat _calendarFormat = CalendarFormat.twoWeeks;
  DateTime _focusedDay = DateTime.now();
  DateTime? _selectedDay;
  DateTime kFirstDay = DateTime.utc(2021, 1, 15);
  DateTime kLastDay = DateTime.utc(2022, 1, 20);

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      resizeToAvoidBottomInset: false,
      appBar: AppBar(
        title: Text('TableCalendar - Basics'),
      ),
      body: SafeArea(
        child: Column(
          children: <Widget>[
            daysCalender(),
            Text("Available Hours"),
            const SizedBox(
              height: 10,
            ),
            SingleChildScrollView(
              child: Container(
                child: Column(
                  crossAxisAlignment: CrossAxisAlignment.start,
                  children: <Widget>[
                    SizedBox(
                      height: 150,
                      child: AvailableHours(),
                    )
                  ],
                ) // your column1
              ),
            ),
            TextFormField(
              decoration: InputDecoration(
                  border: UnderlineInputBorder(), labelText: 'Name'),
            ),
            TextFormField(
              decoration: InputDecoration(
                  border: UnderlineInputBorder(), labelText: 'Phone'),
            ),
            TextButton(
              style: ButtonStyle(
                foregroundColor: MaterialStateProperty.all<Color>(Colors.blue),
              ),
              onPressed: () {},
              child: Text('Confirm'),
            )
          ],
        ),
      )
    );
  }
}

Grpc C++: How to wait until a unary request has been sent?

Posted: 03 Jun 2021 08:25 AM PDT

I'm writing a wrapper around gRPC unary calls, but I'm having an issue: let's say I have a ClientAsyncResponseReader object which is created and starts a request like so

response_reader_ = std::unique_ptr<grpc::ClientAsyncResponseReader<ResponseType>>(
    grpc::internal::ClientAsyncResponseReaderFactory<ResponseType>::Create(
        channel.get(), completion_queue, rpc_method, &client_context_, request, true
    )
);
response_reader_->Finish(
    response_sharedptr_.get(), status_sharedptr_.get(), static_cast<void*>(some_tag)
);
// Set a breakpoint here

where all of the arguments are valid.

I was under the impression that when the Finish call returned, the request object was guaranteed to have been sent out over the wire. However by setting a breakpoint after that Finish() call (in the client program, to be clear) and inspecting my server's logs, I've discovered that the server does not log the request until after I resume from the breakpoint.

This would seem to indicate that there's something else I need to wait on in order to ensure that the request is really sent out: and moreover, that the thread executing the code above still has some sort of role in sending out the request which appears post-breakpoint.

Of course, perhaps my assumptions are wrong and the server isn't logging the request as soon as it comes in. If not though, then clearly I don't understand gRPC's semantics as well as I should, so I was hoping for some more experienced insight.

You can see the code for my unary call abstraction here. It should be sufficient, but if anything else is required I'm happy to provide it.

EDIT: The plot thickens. After setting a breakpoint on the server's handler for the incoming requests, it looks like the call to Finish generally does "ensure" that the request has been sent out: except for the first request sent by the process. I guess that there is some state maintained either in grpc::channel or maybe even in grpc::completion_queue which is delaying the initial request

Is there a way to get a list of Youtube videos sorted by view count

Posted: 03 Jun 2021 08:25 AM PDT

I am trying to collect a large list of Youtube's most watched videos for a data science application. I tried to use the Youtube API with the following query: https://www.googleapis.com/youtube/v3/search&order=viewCount&type=video&regionCode=US&key=API_KEY
but it does not seem to give me the same video IDs as in this list:
https://en.wikipedia.org/wiki/List_of_most-viewed_YouTube_videos

Could someone tell me how I should do it?

How do I fetch host and pass from file in linux

Posted: 03 Jun 2021 08:25 AM PDT

#!/bin/bash
hosts=(sarv savana simra punit)
pass=(sarva 1save xvyw23 asdwe87)

for i in "${!hosts[@]}"; do
    sshpass -p "${pass[i]}" ssh-copy-id -f root@"${hostnames[i]}" -p 22
done

Is it possible to fetch the password and hostname from a different file which consists of all hosts and their corresponding passwords in the following format:

host pass
sarv sarva
savana 1save
simra xvyw23
punit asdwe87

I apologize for not describing it properly. The first word of each line in the file is the host name and the second word is its password.

Instead of writing hosts=(sarv savana simra punit) pass=(sarva 1save xvyw23 asdwe87) in the script.
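
A minimal bash sketch of reading such a file instead of the hard-coded arrays (the file name hosts.txt is an assumption, and the header line shown in the sample is skipped):

    #!/bin/bash
    # Read "host pass" pairs, one per line, skipping the header line.
    tail -n +2 hosts.txt | while read -r host pass; do
        [ -z "$host" ] && continue                      # ignore blank lines
        sshpass -p "$pass" ssh-copy-id -f -p 22 "root@$host"
    done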

Error: g++.exe: No such file or directory g++.exe: fatal error: no input files in Visual studio code?

Posted: 03 Jun 2021 08:25 AM PDT

There is no error in my code, and I have configured MinGW in the environment variables, but it still shows this error. I created this file in Dev-C++ and it runs well there. The error is:

g++.exe: error: Calculator: No such file or directory
g++.exe: error: .cpp: No such file or directory
g++.exe: fatal error: no input files
compilation terminated.

I have inserted the image for reference.
https://i.stack.imgur.com/Jdtlx.png

Files that I created in Visual Studio Code run well, and I've tried copying the code of this file into a new file, and that ran. So should I do this with all the files I created with Dev-C++, or is there another method to resolve this issue?

How can I fix this grid in css?

Posted: 03 Jun 2021 08:25 AM PDT

I'm trying to make a simple grid for my website, and I could almost do what I wanted. I would like to fix the "Projects" title of this grid: I would like it not to be the same size as the other parts of the grid.

This is the result that I have: (screenshot)

I would like to have something like this: (screenshot)

HTML:

<div class="project-container">        <div className="title">          <h1>PROJECTS</h1>        </div>        <div className="project-one item">          <h2>Face Recognition</h2>          <button className="button button1">VIEW PROJECT</button>        </div>        <div className="project-two item">          <h2>Face Recognition</h2>          <button className="button button1">VIEW PROJECT</button>        </div>        <div className="project-three item">          <h2>Face Recognition</h2>          <button className="button button1">VIEW PROJECT</button>        </div>      </div>  

CSS:

.project-container {
  width: 80%;
  height: 80vh;
  margin: auto;
  display: grid;
  border: 1px solid pink;
  // grid-gap: 4em;
  grid-template-columns: repeat(3, 1fr);
  grid-template-areas:
    "t t t"
    "p1 p2 p3";
}

.item {
  // width: 100%;
  // height: 100%;
  // transition: all 0.2s ease-in-out;
  text-align: center;
  color: white;
  border: 1px solid pink;
}

.title {
  grid-area: t;
  text-align: center;
  border: 1px solid pink;
  // height: 10vh;
}

gunicorn daemon failed to start error in digital ocean server

Posted: 03 Jun 2021 08:26 AM PDT

I am trying to deploy my Django project on DigitalOcean, and this error is stopping me from doing that.

First, let me show my directory structure:

(Screenshot of the project directory structure)

My gunicorn.socket file:

[Unit]
Description=gunicorn socket

[Socket]
ListenStream=/run/gunicorn.sock

[Install]
WantedBy=sockets.target

My gunicorn.service file:

[Unit]
Description=gunicorn daemon
Requires=gunicorn.socket
After=network.target

[Service]
User=developer
Group=www-data
WorkingDirectory=/home/developer/myprojectdir
ExecStart=/home/developer/myprojectdir/myprojectenv/bin/gunicorn \
          --access-logfile - \
          --workers 3 \
          --bind unix:/run/gunicorn.sock \
          bharathwajan.wsgi:application

[Install]
WantedBy=multi-user.target

When I try to check the status of gunicorn, it throws an error like this:

sudo systemctl status gunicorn  

(Screenshot of the gunicorn error output)

I inspected syslog and I found:

ubuntu-s-1vcpu-1gb-blr1-01 kernel: [35777.305422] [UFW BLOCK] IN=eth0 OUT= MAC="my_mac_address" SRC="my_ip" DST="servers_ip" LEN=40 TOS=0x08 PREC=0x20 TTL=240 ID=32200 PROTO=TCP SPT=49185 DPT=3389 WINDOW=1024 RES=0x00 SYN URGP=0   

How can I transpose values imported via QueryTables using VBA?

Posted: 03 Jun 2021 08:25 AM PDT

I have created VBA code that prompts for a CSV file and imports it. However, it imports the values as a row; I need them to be imported as a column. How?

I tried setting the Range to $B$2:$B$10, but that did not help. I tried searching QueryTables for a "transpose data on import" directive, but so far I have not found one.

Code:

Sub Button_Import_Click()

    Dim Ret

    Ret = Application.GetOpenFilename("Nameplate File (*.txt), *.txt")

    If Ret <> FALSE Then
        With ActiveSheet.QueryTables.Add(Connection:= _
             "TEXT;" & Ret, Destination:=Range("$B$2"))
            .TextFileParseType = xlDelimited
            .RefreshStyle = xlOverwriteCells
            .TextFileCommaDelimiter = TRUE
            .Refresh
        End With
    End If

End Sub

Data Sample

filename: text.txt

data: product,30,370 psi,80 lbs,description

PEAR Mail in php:apache Docker container

Posted: 03 Jun 2021 08:25 AM PDT

I have two servers. One is a Postfix mail server with Dovecot that otherwise works fine; I can send mail through it using the Gmail client (so, yes, there is a valid certificate installed there). The other is my app server, which runs the php:7.4-apache image. I've installed the PEAR Mail library into that container/image, and I'm trying to send mail from the app through the mail server, but the PEAR Mail client keeps hanging up after it sends STARTTLS. Questions:

  1. Does the client need its own certificate explicitly installed and configured in order to start tls? If so, how is this done, and why don't I see anything written about that in my searches?

  2. What am I doing wrong?

Maillog on the mail server says only this:

Jun  1 17:36:46 Mimosa postfix/submission/smtpd[19141]: connect from unknown[10.0.0.14]
Jun  1 17:36:46 Mimosa postfix/submission/smtpd[19141]: lost connection after STARTTLS from unknown[10.0.0.14]
Jun  1 17:36:46 Mimosa postfix/submission/smtpd[19141]: disconnect from unknown[10.0.0.14]

Debug output from the client says this:

[ehlo] Recv: 250-PIPELINING
DEBUG: Recv: 250-SIZE 10240000
DEBUG: Recv: 250-VRFY
DEBUG: Recv: 250-ETRN
DEBUG: Recv: 250-STARTTLS
DEBUG: Recv: 250-ENHANCEDSTATUSCODES
DEBUG: Recv: 250-8BITMIME
DEBUG: Recv: 250 DSN
DEBUG: Send: STARTTLS
DEBUG: Recv: 220 2.0.0 Ready to start TLS
DEBUG: Send: RSET
DEBUG: Send: QUIT

This is the code being used on the client:

<html><body>
<?php

var_dump(extension_loaded('openssl'));
//echo phpinfo();

include('/usr/local/lib/php/Mail.php');
$recipients = 'myemail@gmail.com'; //CHANGE
$headers['From']= 'noreply@mydomain.com'; //CHANGE
$headers['To']= 'myemail@gmail.com'; //CHANGE
$headers['Subject'] = 'Test message';
$body = 'Test message'; // Define SMTP Parameters
$params['host'] = '10.0.0.6';
$params['port'] = '587';
$params['auth'] = 'PLAIN';
$params['username'] = 'noreply'; //CHANGE
$params['password'] = 'password'; //CHANGE
$params['debug'] = 'true';

$mail_object =& Mail::factory('smtp', $params);

foreach ($params as $p){
  echo "$p<br />";
}

// Send the message
$mail_object->send($recipients, $headers, $body);

?>
</body></html>

I've also tried the following code, which is essentially the same thing:

<?php

error_reporting(E_ALL ^ E_NOTICE ^ E_DEPRECATED ^ E_STRICT);

require_once "/usr/local/lib/php/Mail.php";

$host = "10.0.0.6";
$username = "noreply";
$password = "password";
$port = "587";
$to = "myemail@gmail.com";
$email_from = "noreply@mydomain.com";
$email_subject = "Testing Pear" ;
$email_body = "Sent using Pear install Mail" ;
$email_address = "noreply@domain.com";

$headers = array ('From' => $email_from, 'To' => $to, 'Subject' => $email_subject, 'Reply-To' => $email_address);
$smtp = Mail::factory('smtp', array ('host' => $host, 'port' => $port, 'auth' => true, 'username' => $username, 'password' => $password));
$mail = $smtp->send($to, $headers, $email_body);

if (PEAR::isError($mail)) {
    echo("<p>" . $mail->getMessage() . "</p>");
    var_dump($mail);
} else {
    echo("<p>Message successfully sent!</p>");
}
?>

And received the following error:

" authentication failure [SMTP: STARTTLS failed (code: 220, response: 2.0.0 Ready to start TLS)]"  

I've tried both ports 25 and 587 with the same result, though only 587 should work. I've tried the auth parameter as true, false, and plain. Commenting out the auth parameter is rejected by the mail server which requires STARTTLS.

Changing this line in Mail.php

function auth($uid, $pwd, $method = '', $tls = true, $authz = '')  

to false, of course, does not work, because the server requires STARTTLS. Disabling TLS on both ends might make it functional, but that doesn't solve the problem with TLS.

Please, don't tell me to just use PHPmailer.

Thank you.

spring-cloud-starter-sleuth + axon-tracing-spring-boot-starter =?

Posted: 03 Jun 2021 08:25 AM PDT

The title says it all. Is it possible to get spring-cloud-starter-sleuth working together with axon-tracing-spring-boot-starter?

current log output:

2021-06-02 15:12:36.449  INFO [,2b2fc6a322588b0e,114b332e7847e95f] 14716 --- [nio-8082-exec-5] o.a.m.interceptors.LoggingInterceptor    : Dispatched messages: [FindAll]
2021-06-02 15:12:36.455  INFO [,,] 14716 --- [ueryProcessor-9] o.a.m.interceptors.LoggingInterceptor    : Incoming message: [FindAll]
2021-06-02 15:12:36.542  INFO [,,] 14716 --- [ueryProcessor-9] o.a.m.interceptors.LoggingInterceptor    : [FindAll] executed successfully with a [ArrayList] return value
2021-06-02 15:12:36.668  INFO [,2b2fc6a322588b0e,114b332e7847e95f] 14716 --- [nio-8082-exec-5] o.a.m.interceptors.LoggingInterceptor    : Dispatched messages: [KlassificeraApplikationCommand]
2021-06-02 15:12:36.724  INFO [,,] 14716 --- [mandProcessor-0] o.a.m.interceptors.LoggingInterceptor    : Incoming message: [KlassificeraApplikationCommand]
2021-06-02 15:12:36.785  INFO [,,] 14716 --- [mandProcessor-0] o.a.m.interceptors.LoggingInterceptor    : [KlassificeraApplikationCommand] executed successfully with a [null] return value
2021-06-02 15:12:36.785  INFO [,,] 14716 --- [mandProcessor-0] o.a.m.interceptors.LoggingInterceptor    : Dispatched messages: [ApplikationKlassificeradEvent]
2021-06-02 15:12:36.808 TRACE [,2b2fc6a322588b0e,114b332e7847e95f] 14716 --- [nio-8082-exec-5] org.zalando.logbook.Logbook              : Incoming Request: null

desired log output:

2021-06-02 15:12:36.449  INFO [,2b2fc6a322588b0e,114b332e7847e95f] 14716 --- [nio-8082-exec-5] o.a.m.interceptors.LoggingInterceptor    : Dispatched messages: [FindAll]
2021-06-02 15:12:36.455  INFO [,2b2fc6a322588b0e,114b332e7847e95f] 14716 --- [ueryProcessor-9] o.a.m.interceptors.LoggingInterceptor    : Incoming message: [FindAll]
2021-06-02 15:12:36.542  INFO [,2b2fc6a322588b0e,114b332e7847e95f] 14716 --- [ueryProcessor-9] o.a.m.interceptors.LoggingInterceptor    : [FindAll] executed successfully with a [ArrayList] return value
2021-06-02 15:12:36.668  INFO [,2b2fc6a322588b0e,114b332e7847e95f] 14716 --- [nio-8082-exec-5] o.a.m.interceptors.LoggingInterceptor    : Dispatched messages: [KlassificeraApplikationCommand]
2021-06-02 15:12:36.724  INFO [,2b2fc6a322588b0e,114b332e7847e95f] 14716 --- [mandProcessor-0] o.a.m.interceptors.LoggingInterceptor    : Incoming message: [KlassificeraApplikationCommand]
2021-06-02 15:12:36.785  INFO [,2b2fc6a322588b0e,114b332e7847e95f] 14716 --- [mandProcessor-0] o.a.m.interceptors.LoggingInterceptor    : [KlassificeraApplikationCommand] executed successfully with a [null] return value
2021-06-02 15:12:36.785  INFO [,2b2fc6a322588b0e,114b332e7847e95f] 14716 --- [mandProcessor-0] o.a.m.interceptors.LoggingInterceptor    : Dispatched messages: [ApplikationKlassificeradEvent]
2021-06-02 15:12:36.808 TRACE [,2b2fc6a322588b0e,114b332e7847e95f] 14716 --- [nio-8082-exec-5] org.zalando.logbook.Logbook              : Incoming Request: null

Best practise: How to implement plugins for spring jpa projects?

Posted: 03 Jun 2021 08:25 AM PDT

I have always wondered what the best practice is for implementing a plugin for a Spring JPA project.

For example, I would have the following structure of Java projects:

server-core

@Entity
public class User {

    private String name;

}

Now I would like to have separate Java projects as plugins for the server project.

first-plugin

The first plugin is supposed to add another field to the user entity, for example the user's age.

second-plugin

The second plugin is supposed to add a new table and a relation from the new table to the users table for example:

@Entity
public class Usergroup {

    private Set<User> users;

}

Now my question:

The database used is SQL Server. Is there any good practice for realizing such plugins without making changes to the server-core project, so that a plugin can always be added and removed without impact on the server-core project? Is SQL Server the right database for such an architecture, or should I use a NoSQL database instead?

mvn clean works fine in terminal but not from cron and bash file

Posted: 03 Jun 2021 08:25 AM PDT

mvn clean works fine from the terminal. Even when I execute the same from a bash (.sh) file by double-clicking, it works fine.

But when I trigger the same using crontab, I'm getting the error mvn: command not found.

The bash (.sh) file has this code:

#!/bin/bash
cd /Users/testautomation/Documents/Automation/TFS/Mem_Mobile
mvn clean

Output of crontab -l

0 14 * * * /Users/testautomation/Documents/Automation/Schedule/Execute.sh  

Error

From testautomation@Tests-iMac.xxx.local  Wed Jun 12 14:44:01 2019
Return-Path: <testautomation@Tests-iMac.xxx.local>
X-Original-To: testautomation
Delivered-To: testautomation@Tests-iMac.xxx.local
Received: by Tests-iMac.xxx.local (Postfix, from userid 501)
    id 0BE233001CB411; Wed, 12 Jun 2019 14:44:00 +1000 (AEST)
From: testautomation@Tests-iMac.xxx.local (Cron Daemon)
To: testautomation@Tests-iMac.xxx.local
Subject: Cron <testautomation@Tests-iMac> /Users/testautomation/Documents/Automation/Schedule/Execute.sh
X-Cron-Env: <SHELL=/bin/sh>
X-Cron-Env: <PATH=/usr/bin:/bin>
X-Cron-Env: <LOGNAME=testautomation>
X-Cron-Env: <USER=testautomation>
X-Cron-Env: <HOME=/Users/testautomation>
Message-Id: <20190612044401.0BE233001CB411@Tests-iMac.xxx.local>
Date: Wed, 12 Jun 2019 14:44:00 +1000 (AEST)

/Users/testautomation/Documents/Automation/Schedule/Execute.sh: line 3: mvn: command not found

I have installed Maven using Homebrew.

mvn -version output :

Tests-iMac:~ testautomation$ mvn -version
Apache Maven 3.6.1 (d66c9c0b3152b2e69ee9bac180bb8fcc8e6af555; 2019-04-05T06:00:29+11:00)
Maven home: /usr/local/Cellar/maven/3.6.1/libexec
Java version: 1.8.0_212, vendor: Oracle Corporation, runtime: /Library/Java/JavaVirtualMachines/jdk1.8.0_212.jdk/Contents/Home/jre
Default locale: en_AU, platform encoding: UTF-8
OS name: "mac os x", version: "10.14.5", arch: "x86_64", family: "Mac"
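
For context, the cron mail above shows the job ran with PATH=/usr/bin:/bin, which does not include the directory where Homebrew links mvn (usually /usr/local/bin, an assumption based on the Maven home shown above). One sketch of a workaround is to extend PATH inside the script, or to call mvn by its absolute path:

    #!/bin/bash
    # cron's minimal environment lacks Homebrew's bin directory, so add it.
    export PATH="/usr/local/bin:$PATH"
    cd /Users/testautomation/Documents/Automation/TFS/Mem_Mobile
    mvn clean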

Kubernetes CoreDNS pods are endlessly restarting

Posted: 03 Jun 2021 08:25 AM PDT

I've been working on installing a three-node Kubernetes cluster on CentOS 7 with Flannel for some time; however, the CoreDNS pods cannot connect to the API server and keep restarting.

The reference HowTo document I followed is here.

What Have I Done so Far?

  • Disabled SELinux,
  • Disabled firewalld,
  • Enabled br_netfilter, bridge-nf-call-iptables,
  • Installed kubernetes on three nodes, set-up master's pod network with flannel default network (10.244.0.0/16),
  • Installed other two nodes, and joined the master.
  • Deployed flannel,
  • Configured Docker's BIP to use flannel default per-node subnet and network.

Current State

  • The kubelet works and the cluster reports nodes as ready.
  • The Cluster can schedule and migrate pods, so CoreDNS are spawned on nodes.
  • Flannel network is connected. No logs in containers and I can ping 10.244.0.0/24 networks from node to node.
  • Kubernetes can deploy and run arbitrary pods (I tried the shell demo, and I can access its shell via kubectl even if the container is on a different node).
    • However, since DNS is not working, they cannot resolve any IP addresses.

What is the Problem?

  • CoreDNS pods report that they cannot connect to API server with error:

    Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: no route to host  
  • I cannot see 10.96.0.0 routes in routing tables:

    default via 172.16.0.1 dev eth0 proto static metric 100
    10.1.0.0/24 dev eth1 proto kernel scope link src 10.1.0.202 metric 101
    10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink
    10.244.1.0/24 dev docker0 proto kernel scope link src 10.244.1.1
    10.244.1.0/24 dev cni0 proto kernel scope link src 10.244.1.1
    10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink
    172.16.0.0/16 dev eth0 proto kernel scope link src 172.16.0.202 metric 100

Additional Info

  • Cluster init is done with the command kubeadm init --apiserver-advertise-address=172.16.0.201 --pod-network-cidr=10.244.0.0/16.
  • I have torn down the cluster and rebuilt with 1.12.0 The problem still persists.
  • The workaround in Kubernetes documentation doesn't work.
  • The problem is present, and the same, with both the 1.11-3 and 1.12-0 CentOS 7 packages.

Progress so Far

  • Downgraded Kubernetes to 1.11.3-0.
  • Re-initialized Kubernetes with kubeadm init --apiserver-advertise-address=172.16.0.201 --pod-network-cidr=10.244.0.0/16, since the server has another external IP which cannot be accessed via other hosts, and Kubernetes tends to select that IP as API Server IP. --pod-network-cidr is mandated by flannel.
  • Resulting iptables -L output after initialization with no joined nodes

    Chain INPUT (policy ACCEPT)
    target     prot opt source               destination
    KUBE-EXTERNAL-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes externally-visible service portals */
    KUBE-FIREWALL  all  --  anywhere             anywhere

    Chain FORWARD (policy ACCEPT)
    target     prot opt source               destination
    KUBE-FORWARD  all  --  anywhere             anywhere             /* kubernetes forwarding rules */
    DOCKER-USER  all  --  anywhere             anywhere

    Chain OUTPUT (policy ACCEPT)
    target     prot opt source               destination
    KUBE-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes service portals */
    KUBE-FIREWALL  all  --  anywhere             anywhere

    Chain DOCKER-USER (1 references)
    target     prot opt source               destination
    RETURN     all  --  anywhere             anywhere

    Chain KUBE-EXTERNAL-SERVICES (1 references)
    target     prot opt source               destination

    Chain KUBE-FIREWALL (2 references)
    target     prot opt source               destination
    DROP       all  --  anywhere             anywhere             /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000

    Chain KUBE-FORWARD (1 references)
    target     prot opt source               destination
    ACCEPT     all  --  anywhere             anywhere             /* kubernetes forwarding rules */ mark match 0x4000/0x4000

    Chain KUBE-SERVICES (1 references)
    target     prot opt source               destination
    REJECT     udp  --  anywhere             10.96.0.10           /* kube-system/kube-dns:dns has no endpoints */ udp dpt:domain reject-with icmp-port-unreachable
    REJECT     tcp  --  anywhere             10.96.0.10           /* kube-system/kube-dns:dns-tcp has no endpoints */ tcp dpt:domain reject-with icmp-port-unreachable
  • Looks like API Server is deployed as it should

    $ kubectl get svc kubernetes -o=yaml
    apiVersion: v1
    kind: Service
    metadata:
      creationTimestamp: 2018-10-25T06:58:46Z
      labels:
        component: apiserver
        provider: kubernetes
      name: kubernetes
      namespace: default
      resourceVersion: "6"
      selfLink: /api/v1/namespaces/default/services/kubernetes
      uid: 6b3e4099-d823-11e8-8264-a6f3f1f622f3
    spec:
      clusterIP: 10.96.0.1
      ports:
      - name: https
        port: 443
        protocol: TCP
        targetPort: 6443
      sessionAffinity: None
      type: ClusterIP
    status:
      loadBalancer: {}
  • Then I've applied flannel network pod with

    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml  
  • As soon as I apply the flannel network, CoreDNS pods start and start to give the same error:

    Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500\u0026resourceVersion=0: dial tcp 10.96.0.1:443: connect: no route to host  
  • I've found out that flanneld is using the wrong network interface, and changed it in the kube-flannel.yml file before deployment. However the outcome is still the same.

Any help is greatly appreciated.

Does Firefox have an offline throttling mode/disable network feature?

Posted: 03 Jun 2021 08:25 AM PDT

I'm doing some front end work and I need to test how the program reacts when it loses a network connection. Firefox has a "Work offline" setting but that drops the connection for every tab -- I only want to disable the network connection for a single tab. Chrome has this with an "Offline" checkbox in the Network tab of the devtools that makes this really convenient.

This is what this feature looks like in Chrome:

Chrome screenshot
