Thursday, December 9, 2021

Recent Questions - Server Fault

SQL Server, consistency error that I can't fix

Posted: 09 Dec 2021 08:28 AM PST

DBCC CHECKDB on a user database returns this error

Msg 8992, Level 16, State 1, Line 1
Check Catalog Msg 3851, State 1: An invalid row (class=128,depid=65536,depsubid=0) was found in the system table sys.syssingleobjrefs (class=128).

This error is in a user database, not in master! The internet is full of articles about this error for people who check a master database restored on a different server. That's not the case here.

I tried all the usual magic without any success.

DBCC CHECKDB WITH ALL_ERRORMSGS, NO_INFOMSGS  -- error

ALTER DATABASE DBCopy SET SINGLE_USER WITH ROLLBACK IMMEDIATE
DBCC CHECKDB (DBCopy, REPAIR_ALLOW_DATA_LOSS)  -- output reports the same error
DBCC CHECKDB WITH ALL_ERRORMSGS, NO_INFOMSGS  -- error

-- no hope but just in case
ALTER DATABASE DBCopy SET EMERGENCY
DBCC CHECKDB (DBCopy, REPAIR_ALLOW_DATA_LOSS)  -- the same error reported
DBCC CHECKDB WITH ALL_ERRORMSGS, NO_INFOMSGS  -- error

ALTER DATABASE DBCopy SET MULTI_USER

We don't have a backup that is free of this error. The error appears completely harmless; the DB and apps work completely fine.

Microsoft SQL Server 2019 (RTM-CU14)

The DB is in-memory enabled.

Anything to try before I start to rebuild the DB?
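For reference, a hedged way to look at the offending catalog row directly: sys.syssingleobjrefs is a hidden system base table, so this only works over the dedicated admin connection (DAC); the filter values are copied from the error message.

-- connect via the DAC (prefix the server name with "admin:"), then:
SELECT * FROM sys.syssingleobjrefs
WHERE class = 128 AND depid = 65536 AND depsubid = 0;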

Logging user creation and deletion with auditd

Posted: 09 Dec 2021 08:11 AM PST

What is the best way to log/detect user creation on Linux machines using auditd? I was thinking about logging executions of binaries in /usr/bin/ and /usr/sbin/ and filtering for useradd, adduser, deluser and userdel. I also thought about watching /etc/passwd, since it changes on every user creation. Are there better ways? Are there ways to create and delete users that wouldn't be caught by the examples above?
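A minimal sketch of audit rules along those lines (the key names are just examples; rules typically go in /etc/audit/rules.d/ or are loaded with auditctl):

# watch the account databases for writes and attribute changes
-w /etc/passwd -p wa -k identity
-w /etc/shadow -p wa -k identity
-w /etc/group -p wa -k identity

# log executions of the user-management tools
-w /usr/sbin/useradd -p x -k user_mgmt
-w /usr/sbin/userdel -p x -k user_mgmt

Watching /etc/passwd and /etc/shadow also catches tools that bypass useradd entirely, such as manual edits via vipw, which pure binary-execution rules would miss.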

Oracle Cloud: Gradle in Compute Arm Instance: jcenter.bintray.com FORBIDDEN

Posted: 09 Dec 2021 08:01 AM PST

I'm trying to build a project on a free

Canonical-Ubuntu-20.04-aarch64-2021.10.15-0 (ARM machine)  

for example launching

./gradlew clean

On this project:

https://github.com/ReactiveX/RxJava

but I have some problems with the dependencies (they all seem related to jcenter), for example

Could not GET 'https://jcenter.bintray.com/com/github/javaparser/javaparser-symbol-solver-model/3.13.5/javaparser-symbol-solver-model-3.13.5.jar'. Received status code 403 from server: Forbidden  

Pinging the host:

ping jcenter.bintray.com  

it resolves to:

180.74.95.34.bc.googleusercontent.com (34.95.74.180)  

The same problem occurs with wget (using --debug), so this doesn't seem to be related to the JVM's certificates (I think):

<p>Your client does not have permission to get URL <code>/com/github/javaparser/javaparser-symbol-solver-model/3.13.5/javaparser-symbol-solver-model-3.13.5.jar</code> from this server.  <ins>That's all we know.</ins>
] done.
2021-12-09 10:05:50 ERROR 403: Forbidden.

And it seems SSL is correctly handled:

Initiating SSL handshake.
Handshake successful; connected socket 3 to SSL handle 0x0000aaaafdff9d80
certificate:
  subject: CN=*.bintray.com
  issuer:  CN=GeoTrust TLS DV RSA Mixed SHA256 2020 CA-1,O=DigiCert Inc,C=US
X509 certificate successfully verified and matches host jcenter.bintray.com

I can reach the URL with no problem from my own machine. The VM's subnet has an egress rule that allows all traffic on all ports.

I don't know iptables well, but I tried to look at the configuration and couldn't find any rule that would block this request (maybe someone can help me with that).

Chain OUTPUT (policy ACCEPT)
num  target            prot opt source     destination
1    InstanceServices  all  --  anywhere   link-local/16

I'm not a Gradle expert nor a network expert, so I think I'm missing something...
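Worth noting: JCenter/Bintray was sunset in 2021, and 403s from it are now common. If the build allows it, a hedged workaround is pointing the repositories at Maven Central instead; whether RxJava's build tolerates this override is an assumption:

// in build.gradle, replace jcenter() with:
repositories {
    mavenCentral()
}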

BTW, could someone with the right reputation create an oracle-cloud-infrastructure tag? :)

Any hint is welcome :)

Nagios - 'Error: Could not open config directory' but permissions are correct and selinux is permissive

Posted: 09 Dec 2021 08:34 AM PST

I'm trying to set up a separate directory to store my Nagios configuration files, and when I attempt to validate the configuration I get the error below.

Error: Could not open config directory '/usr/local/nagios/etc/objects/corp/contacts.cfg' for reading.

I believe the permissions are correct and I have selinux in permissive mode.

[root@NAGSRV objects]# /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg

Nagios Core 4.4.6
Copyright (c) 2009-present Nagios Core Development Team and Community Contributors
Copyright (c) 1999-2009 Ethan Galstad
Last Modified: 2020-04-28
License: GPL

Website: https://www.nagios.org
Reading configuration data...
   Read main config file okay...
Error: Could not open config directory '/usr/local/nagios/etc/objects/corp/contacts.cfg' for reading.
Error: Invalid max_check_attempts value for host 'localhost'
Error: Could not register host (config file '/usr/local/nagios/etc/objects/localhost.cfg', starting on line 21)
   Error processing object config files!

***> One or more problems was encountered while processing the config files...

     Check your configuration file(s) to ensure that they contain valid
     directives and data definitions.  If you are upgrading from a previous
     version of Nagios, you should be aware that some variables/definitions
     may have been removed or modified in this version.  Make sure to read
     the HTML documentation regarding the config files, as well as the
     'Whats New' section to find out what has changed.

Permissions:

[root@NAGSRV objects]# ls -ld *
-rwxr-xr-x. 1 nagios nagcmd  6747 Dec  7 21:06 commands.cfg
-rwxr-xr-x. 1 nagios nagcmd  1794 Dec  7 21:35 contacts.cfg
drwxr-xr-x. 2 nagios nagcmd    99 Dec  9 10:43 corp
-rwxr-xr-x. 1 nagios nagcmd  4777 Dec  7 21:06 localhost.cfg
-rwxr-xr-x. 1 nagios nagcmd  3001 Dec  7 21:06 printer.cfg
-rwxr-xr-x. 1 nagios nagcmd  3484 Dec  7 21:06 switch.cfg
-rwxr-xr-x. 1 nagios nagcmd 12533 Dec  7 21:06 templates.cfg
-rwxr-xr-x. 1 nagios nagcmd  3512 Dec  7 21:06 timeperiods.cfg
-rwxr-xr-x. 1 nagios nagcmd  7630 Dec  9 10:17 windows.cfg

[root@NAGSRV objects]# ls -ld ./corp/*
-rwxr-xr-x. 1 nagios nagcmd  1245 Dec  9 10:43 ./corp/contacts.cfg
-rwxr-xr-x. 1 nagios nagcmd  1124 Dec  9 10:39 ./corp/hostgroups.cfg
-rwxr-xr-x. 1 nagios nagcmd  3809 Dec  9 10:45 ./corp/hosts.cfg
-rwxr-xr-x. 1 nagios nagcmd 10967 Dec  9 10:41 ./corp/hosts-service-template.cfg

Groups:

[root@NAGSRV objects]# cat /etc/group | grep nag
nagios:x:1000:
nagcmd:x:1001:apache,nagios

Selinux:

[root@NAGSRV objects]# getenforce
Permissive

If I edit nagios.cfg and remove my directory from the cfg_dir= entry, validation succeeds.
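The error calls a single .cfg file a "config directory", which hints that the file path may have been handed to cfg_dir. A hedged sketch of the distinction in nagios.cfg, with paths taken from the question:

# cfg_dir expects a directory and recursively loads every .cfg inside it
cfg_dir=/usr/local/nagios/etc/objects/corp

# cfg_file expects a single file
cfg_file=/usr/local/nagios/etc/objects/corp/contacts.cfg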

How to use SMB Multichannel over VPN?

Posted: 09 Dec 2021 08:01 AM PST

According to this article SMB Multichannel can provide the following capability:

  • Increased Throughput - The file server can simultaneously transmit additional data by using multiple connections for high-speed network adapters or multiple network adapters.

A key requirement is:

  • One or more network adapters that support Receive Side Scaling (RSS)

From my interpretation, this differentiates between multiple network adapters and multiple connections over a single network adapter. That makes sense, because most clients will NOT have multiple network adapters.

This seems to indicate that SMB Multichannel WILL create multiple connections over a single network adapter to improve throughput. With so many people working remotely, and with SMB performing so poorly over VPN, how do I make SMB use multiple channels over VPN? If I can't, why not?

I researched this months ago, and came to the conclusion that it was either not possible, or would have no benefit. But, the topic has come up again today, and I realize just how little information there is about this apparent performance solution over VPN and how conflicting a lot of the information is.

If I run get-netadapter I see the Cisco AnyConnect:

Name                      InterfaceDescription                    ifIndex Status       MacAddress             LinkSpeed
----                      --------------------                    ------- ------       ----------             ---------
Ethernet 2                Cisco AnyConnect Secure Mobility Cli...      22 Up           00-05-9A-3C-7A-00       995 Mbps
Bluetooth Network Con...2 Bluetooth Device (Personal Area Ne...#2      18 Disconnected 48-F1-7F-B8-15-4A         3 Mbps
Ethernet                  Realtek USB GbE Family Controller            17 Up           C8-F7-50-BD-BE-B2         1 Gbps
Ethernet 3                Intel(R) Ethernet Connection (7) I21...      16 Disconnected C8-F7-50-40-7F-AF          0 bps
Wi-Fi 2                   Intel(R) Wireless-AC 9560 160MHz             13 Disconnected 48-F1-7F-B8-15-46       702 Mbps

If I run get-smbclientnetworkinterface I see RSS Capable is False:

Interface Index RSS Capable RDMA Capable Speed    IpAddresses
--------------- ----------- ------------ -----    -----------
22              False       False        995 Mbps {fe80::7462:e71e:9881:4acd, fe80::b2f5:b0a1:835c:2c59, 10.100.20.141}
16              False       False        0  bps   {fe80::8464:d8a6:5624:de65}
17              False       False        1 Gbps   {fe80::e4ad:bb1b:5cdc:7dd7, 192.168.1.103}
13              False       False        702 Mbps {fe80::a9ce:6710:5904:a673, 192.168.1.195}
9               False       False        0  bps   {fe80::bc98:252c:7c3f:acec}
11              False       False        0  bps   {fe80::b035:347f:91cb:d7f}
18              False       False        3 Mbps   {fe80::490b:e922:3662:a942}

On the server side, the network adapter is RSS capable, and SMB Multichannel is enabled.

In previous network diagnostics I was able to determine SMB v3.1.1 is in use and SMB multichannel was not in use.

If it means anything, we use Cisco AnyConnect connecting to Cisco ASAs.
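For reference, a hedged set of standard SMB PowerShell cmdlets to check multichannel state on each side; whether they reveal anything useful over a VPN adapter is uncertain:

# client side: is multichannel enabled, and which connections exist?
Get-SmbClientConfiguration | Select-Object EnableMultiChannel
Get-SmbMultichannelConnection

# server side
Get-SmbServerConfiguration | Select-Object EnableMultiChannel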

Port forwarding based on destination IP address

Posted: 09 Dec 2021 07:44 AM PST

I need port forwarding from 22 to 15022 when the destination IP is 10.12.34.45. All other destination IPs shall remain on port 22.

I played around with iptables and firewalld but got stuck.

OS is Redhat Linux 7.9.
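If the connections originate on this host, a hedged iptables sketch would rewrite the destination port in the nat table's OUTPUT chain; for traffic routed through the box, the same rule belongs in PREROUTING instead:

# locally generated traffic to 10.12.34.45:22 gets redirected to port 15022
iptables -t nat -A OUTPUT -p tcp -d 10.12.34.45 --dport 22 -j DNAT --to-destination 10.12.34.45:15022

# equivalent rule for forwarded/routed traffic
iptables -t nat -A PREROUTING -p tcp -d 10.12.34.45 --dport 22 -j DNAT --to-destination 10.12.34.45:15022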

Exchange 2019 Antimalware engine updates download but don't get applied

Posted: 09 Dec 2021 08:19 AM PST

I've been diagnosing for the past day or so some issues with an Exchange 2019 server related to antimalware filtering/scanning. This was disabled on our server; I enabled it and restarted the transport service per the Microsoft docs:

In Event Viewer, however, we're getting some logs that indicate this isn't working:

Event 6031, FIPFS: MS Filtering Engine Update process has successfully downloaded updates for Microsoft.
Event 6034, FIPFS: MS Filtering Engine Update process is testing the Microsoft scan engine update
Event 6035, FIPFS: MS Filtering Engine Update process was unsuccessful in testing an engine update.  Engine: Microsoft

It looks like it fails for some reason and logs "MS Filtering Engine Update process was unsuccessful in testing an engine update."

Then the process repeats and we can see it trying again:

Event 7003, FIPFS: MS Filtering Engine Update process has successfully scheduled all update jobs.
Event 6024, FIPFS: MS Filtering Engine Update process is checking for new engine updates.  Scan Engine: Microsoft  Update Path: http://amupdatedl.microsoft.com/server/amupdate
Event 6030, FIPFS: MS Filtering Engine Update process is attempting to download a scan engine update.  Scan Engine: Microsoft  Update Path: http://amupdatedl.microsoft.com/server/amupdate
Event 6031, FIPFS: MS Filtering Engine Update process has successfully downloaded updates for Microsoft.
Event 6034, FIPFS: MS Filtering Engine Update process is testing the Microsoft scan engine update
Event 6035, FIPFS: MS Filtering Engine Update process was unsuccessful in testing an engine update.  Engine: Microsoft

The configuration settings look fine and we've allowed both amupdatedl.microsoft.com and forefrontdl.microsoft.com through the firewall. (It appears that's working because it says downloaded successfully in the Event Viewer logs.) Configuration Settings / Status

Any ideas / help would be much appreciated! Thank you!

Edit: One other note, it does seem to be trying to download and use some of the scan engine updates as evidenced by this staging folder here with recent timestamps. Scan engine temp file downloads

I also found some other resources that suggested a permissions issue, but I checked and Network Service has full permissions to E:\Program Files\Microsoft\Exchange Server\V15\FIP-FS\Data
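One hedged thing to try: Exchange ships an Update-MalwareFilteringServer.ps1 script in its Scripts folder that forces an engine update on demand, which at least gives an immediate, reproducible test while watching the FIPFS events. The server FQDN is a placeholder, and the exact parameter form should be checked against your version:

# from the Exchange Management Shell
& $env:ExchangeInstallPath\Scripts\Update-MalwareFilteringServer.ps1 -Identity mail01.example.com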

Things I've looked at:

How do you debug a cronjob that's not working?

Posted: 09 Dec 2021 07:21 AM PST

ini_set('display_errors', 1);
ini_set('display_startup_errors', 1);
error_reporting(E_ALL);

$_SERVER['SERVER_SOFTWARE'] = 'Apache';
$_SERVER['HTTP_USER_AGENT'] = 'PHP';
$_SERVER['PHP_SELF'] = '/wp/wp-admin/admin-ajax.php';

require_once(__DIR__ . '/../misc/autoload.php');

define('WP_ADMIN', 1);

print_r("before");
require_once(__DIR__ . '/../public/wp-load.php');
require_once(__DIR__ . '/../public/wp-includes/rewrite.php');
require_once(__DIR__ . '/../public/wp-admin/includes/file.php');
require_once(__DIR__ . '/../public/wp-admin/includes/misc.php');
require_once(__DIR__ . '/../public/wp-admin/includes/plugin.php');
print_r("after");

I'm using some cronjobs on my WordPress site and need to run this block of code. The script runs without any error but just stops before "after" gets printed; it does not output any error.

I get a warning:

Warning: Cannot modify header information - headers already sent by (output started at /home/alpha/version/ver/v4.3.11/cake.com/scripts/run-resource.php:2) in /home/alpha/version/ver/v4.3.11/cake.com/vendor/cakeauto/phplibs/hg/wordpress/Bootstrap.php on line 150

ini_set('display_errors', 1);
ini_set('display_startup_errors', 1);
error_reporting(E_ALL);

This bit of code doesn't seem to display any error.
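When PHP dies silently like this, it is often a fatal error swallowed before display_errors can help. A hedged sketch that logs errors to a file and reports the last fatal on shutdown (the log path is an example):

ini_set('log_errors', 1);
ini_set('error_log', '/tmp/cron-debug.log');

// report fatal errors that never reach display_errors
register_shutdown_function(function () {
    $e = error_get_last();
    if ($e !== null) {
        error_log('shutdown: ' . print_r($e, true));
    }
});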

AWS Route53 Configuration creating random subdomains

Posted: 09 Dec 2021 07:13 AM PST

The following article describes the issues I'm currently having with a domain:

Somebody created subdomain without my knowledge. How?

Random subdomains are being created and indexed by search engines, and none of these subdomains are in the DNS config. The only difference is that I'm using Route 53 and not some dodgy free DNS service.

I have a domain askmarina.net. I've put this site live and started testing some search terms to see how we rank. For some reason, when I type something like 'Javea locksmith ask marina' into Google, random subdomains appear:

Google search results for Ask Marina

These random subdomains are not a part of my DNS config, and they seem to be stopping the apex domain from ranking at all. They just seem to be randomly creating themselves. I read into a similar issue below and decided the best option would be to remove the wildcard record from the Route 53 config:

Random Sub Domains being generated

These subdomains used to be an exact mirror of the website (at semi.askmarina.net, for example); however, since removing the wildcard record all of these domains return a DNS_PROBE_FINISHED_NXDOMAIN error:

http://trainy.askmarina.net/

Does anyone have any insight into what might be happening here? I've exhausted my resources with this issue now, any help would be greatly appreciated.
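This behavior is consistent with the wildcard record having answered for any label, rather than records actually being created. A hedged way to confirm nothing unexpected remains in the zone (the hosted zone ID is a placeholder):

aws route53 list-resource-record-sets --hosted-zone-id Z0123456789ABC --query "ResourceRecordSets[].[Name,Type]" --output text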

Importing OVA file Issue

Posted: 09 Dec 2021 06:24 AM PST

I have an OVA file that I need to import as a VM instance in Google Cloud Compute Engine.

The OS of the OVA file is not known, so it produces an error when importing.

I want to know if there is a workaround solution to this problem.
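A hedged workaround: the gcloud import tooling accepts an explicit OS flag so detection can be skipped. The instance name, bucket path and OS value below are placeholders, and flag availability should be checked against your gcloud version:

gcloud compute instances import my-instance \
    --source-uri=gs://my-bucket/appliance.ova \
    --os=ubuntu-2004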

Many thanks.

Error 522 from the server when I load my web page? [closed]

Posted: 09 Dec 2021 05:45 AM PST

I migrated my web page to the cloud and now I get error 522. The page is built in WordPress, but I don't know why it generates this error after a while. Please help me, I'm new to managing cloud hosting.

slapd service is not starting on centos 7

Posted: 09 Dec 2021 05:39 AM PST

Hi, while I was trying to install OpenLDAP on my CentOS 7 server, I think I ran a yum remove ldap command (I am not sure exactly), and now the slapd service is not working at all.

[root@kwephispra28828 etc]# systemctl start slapd
Job for slapd.service failed because the control process exited with error code. See "systemctl status slapd.service" and "journalctl -xe" for details.

[root@kwephispra28828 etc]# journalctl -xe
Dec 09 16:36:04 kwephispra28828 runuser[53027]: pam_unix(runuser:session): session opened for user ldap by (uid=0)
Dec 09 16:36:04 kwephispra28828 runuser[53027]: pam_unix(runuser:session): session closed for user ldap
Dec 09 16:36:04 kwephispra28828 slapcat[53031]: DIGEST-MD5 common mech free
Dec 09 16:36:04 kwephispra28828 slapd[53039]: @(#) $OpenLDAP: slapd 2.4.44 (Aug 31 2021 14:48:49) $ mockbuild@x86-02.bsys.centos.org:/builddir/build/BUILD/openldap-2.4.44/openldap-2.4.44/servers/slapd
Dec 09 16:36:05 kwephispra28828 slapd[53039]: main: TLS init def ctx failed: -1
Dec 09 16:36:05 kwephispra28828 slapd[53039]: DIGEST-MD5 common mech free
Dec 09 16:36:05 kwephispra28828 slapd[53039]: slapd stopped.
Dec 09 16:36:05 kwephispra28828 slapd[53039]: connections_destroy: nothing to destroy.
Dec 09 16:36:05 kwephispra28828 systemd[1]: slapd.service: control process exited, code=exited status=1
Dec 09 16:36:05 kwephispra28828 systemd[1]: Failed to start OpenLDAP Server Daemon.
-- Subject: Unit slapd.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit slapd.service has failed.
--
-- The result is failed.
Dec 09 16:36:05 kwephispra28828 systemd[1]: Unit slapd.service entered failed state.
Dec 09 16:36:05 kwephispra28828 systemd[1]: slapd.service failed.
Dec 09 16:36:05 kwephispra28828 polkitd[6312]: Unregistered Authentication Agent for unix-process:53016:5514800560 (system bus name :1.1155728, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconn
lines 2509-2566/2566

and this is the systemctl status

[root@kwephispra28828 etc]# systemctl status slapd
● slapd.service - OpenLDAP Server Daemon
   Loaded: loaded (/usr/lib/systemd/system/slapd.service; disabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Thu 2021-12-09 16:36:05 +03; 2min 19s ago
     Docs: man:slapd
           man:slapd-config
           man:slapd-hdb
           man:slapd-mdb
           file:///usr/share/doc/openldap-servers/guide.html
  Process: 53039 ExecStart=/usr/sbin/slapd -u ldap -h ${SLAPD_URLS} $SLAPD_OPTIONS (code=exited, status=1/FAILURE)
  Process: 53022 ExecStartPre=/usr/libexec/openldap/check-config.sh (code=exited, status=0/SUCCESS)

Dec 09 16:36:04 kwephispra28828 slapcat[53031]: DIGEST-MD5 common mech free
Dec 09 16:36:04 kwephispra28828 slapd[53039]: @(#) $OpenLDAP: slapd 2.4.44 (Aug 31 2021 14:48:49) $ mockbuild@x86-02.bsys.centos.org:/builddir/build/BUILD/openldap-2.4.44/openldap-2.4.44/servers/slapd
Dec 09 16:36:05 kwephispra28828 slapd[53039]: main: TLS init def ctx failed: -1
Dec 09 16:36:05 kwephispra28828 slapd[53039]: DIGEST-MD5 common mech free
Dec 09 16:36:05 kwephispra28828 slapd[53039]: slapd stopped.
Dec 09 16:36:05 kwephispra28828 slapd[53039]: connections_destroy: nothing to destroy.
Dec 09 16:36:05 kwephispra28828 systemd[1]: slapd.service: control process exited, code=exited status=1
Dec 09 16:36:05 kwephispra28828 systemd[1]: Failed to start OpenLDAP Server Daemon.
Dec 09 16:36:05 kwephispra28828 systemd[1]: Unit slapd.service entered failed state.
Dec 09 16:36:05 kwephispra28828 systemd[1]: slapd.service failed.

Any idea how to fix this?
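The log points at TLS setup ('TLS init def ctx failed: -1'), which on CentOS 7 frequently means the certificate database under /etc/openldap/certs went missing, e.g. after a package removal. A hedged recovery sketch (back up /etc/openldap first):

# reinstall the server package and its default files
yum reinstall openldap openldap-servers openldap-clients

# make sure slapd's user can read the certificate database
chown -R ldap:ldap /etc/openldap/certs
systemctl start slapd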

Windows 10 login fails at Samba NT domain member

Posted: 09 Dec 2021 05:24 AM PST

I have the following setup:

  • A Samba server PDC acting as the Primary Domain Controller of a NT domain MYDOMAIN (not Active Directory!)
  • A Windows 10 pro PC Win10 which is also a member of that domain
  • A (new) second Samba server MS1 which is supposed to be a member of that domain. It was added to that domain using the "/usr/bin/net join -U Administrator%Password" command and there was no error.
  • There are no Windows servers involved.
  • This is not a test installation but a production environment in a small company, so there are other Windows 10 PCs accessing PDC and simply changing the PDC configuration is not an option. (I'm what amounts to the network administrator for that company.)
  • Samba version on both servers is 4.7.6-Ubuntu

With a given domain user account MYUSER I can log on fine to Win10. From there I can also access all the shares of PDC.

But the problem is: I cannot access any shares of MS1.

Windows explorer shows a logon dialog for the share and when I supply MYUSER and the password (again) it says "Access is denied".

On the command line, "net use \\MS1\ShareName" results in the error "The password is invalid for \\MS1\ShareName", followed by a prompt for username and password for MS1. Entering MYUSER and the password then results in "System error 5 has occurred. Access is denied."

In the log on MS1 for the IP of Win10 I find the following entry:

[2021/12/09 13:57:41.755023,  0] ../source3/auth/auth_util.c:1259(check_account)
  check_account: Failed to convert SID S-1-5-21-2503006329-1497337827-313999797-1274
  to a UID (dom_user[MYDOMAIN\MYUSER])

Google found no match for this error message.

testparm on MS1 gives me the following output:

Load smb config files from /etc/samba/smb.conf
rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384)
Processing section "[printers]"
Processing section "[homes]"
NOTE: Service homes is flagged unavailable.
Processing section "[ShareName]"
Loaded services file OK.
idmap range not specified for domain '*'
ERROR: Invalid idmap range for domain *!

Server role: ROLE_DOMAIN_MEMBER

I tried to add an entry for idmap range but it did not make any difference.
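For what it's worth, both the testparm complaint and the failed SID-to-UID conversion point at idmap, and a domain member usually needs a default backend plus one for the domain. A hedged smb.conf sketch (the ranges are examples and must not overlap local UIDs); note that winbindd has to be running for the mapping to happen at all:

[global]
    idmap config * : backend = tdb
    idmap config * : range = 10000-99999
    idmap config MYDOMAIN : backend = rid
    idmap config MYDOMAIN : range = 100000-199999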

I also tried to add MYUSER as a Linux user on MS1 with the same password as in the domain. It did not make any difference.

I'm at a loss on how to investigate this further. Which logs to look into and which configuration options to check.

Google turned up lots of hits but all of them were referring to an Active Directory installation. Unfortunately "simply" updating to ADS is not possible at the moment as that would possibly break other services.

Is it possible to give shell access to a user with no ssh access?

Posted: 09 Dec 2021 05:21 AM PST

Maybe I'm not using the right terms, so allow me to explain using an example. Connecting over SSH as user "centos" and executing cat /etc/passwd on my CentOS 7 machine, I get:

centos:x:1000:1000:Cloud User:/home/centos:/bin/bash
www:x:1001:1001::/home/www:/sbin/nologin

Now, if I try sudo su www - I get: This account is currently not available.

According to my limited knowledge, that is due to the nologin shell. What I want is to be able to switch to the www user within the SSH session without (if possible) giving the www user direct SSH access to the server, similar to how the root user trying to SSH in gets told to log in as centos and then change to root if needed.

What will be the steps to achieve the desired behavior?
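A hedged sketch of the usual approach: the nologin shell only blocks login-style sessions, so it can stay in place while su or sudo are told which shell to run:

# either of these gives an interactive shell as www
sudo -u www -s /bin/bash
sudo su -s /bin/bash www

Since www keeps /sbin/nologin as its shell, it still cannot log in over SSH directly.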

My internet-facing relay allows mail between internal users [closed]

Posted: 09 Dec 2021 05:26 AM PST

I have set up a Zimbra mail server in my company and a Postfix relay acting as the internet-facing gateway. Mail between local users is supposed to be sent only through Zimbra. However, when telnetting to the Postfix relay, it turns out that it allows internal users to send mail to internal users, even if the source and destination addresses are not real. How can I prevent my Postfix from doing this?

Attached is the telnet session performed from outside my domain:

=== Trying 200.55.177.194:25...
=== Connected to 200.55.177.194.
<- 220 serverlaguito.laguito.co.cu ESMTP Postfix (FreeBSD)
-> EHLO mercurio.cucert.cu
<- 250-serverlaguito.laguito.co.cu
<- 250-PIPELINING
<- 250-SIZE 2097152
<- 250-VRFY
<- 250-ETRN
<- 250-ENHANCEDSTATUSCODES
<- 250-8BITMIME
<- 250 DSN
-> MAIL FROM:<facturacion@laguito.co.cu>
<- 250 2.1.0 Ok
-> RCPT TO:<admin@laguito.co.cu>
<- 250 2.1.5 Ok
-> DATA
<- 354 End data with <CR><LF>.<CR><LF>
-> Date: Wed, 27 Oct 2021 11:50:55 -0400
-> To: admin@laguito.co.cu
-> From: facturacion@laguito.co.cu
-> Subject: Test OSRI Wed, 27 Oct 2021 11:50:55 -0400
-> X-Mailer: swaks v jetmore.org/john/code/swaks/
->
-> Test de seguridad:
-> Test 1: usuario interno > usuario_interno...
->
->
-> .
<- 250 2.0.0 from MTA(smtp:[127.0.0.1]:10025): 250 2.0.0 Ok: queued as 3E786586044
-> QUIT
<- 221 2.0.0 Bye
=== Connection closed with remote host.
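A hedged Postfix sketch for this: reject envelope senders claiming your own domain on the internet-facing smtpd, since legitimate internal mail should never enter through the relay (the domain is taken from the transcript; adjust to your setup):

# /etc/postfix/sender_access
laguito.co.cu   REJECT external hosts may not use internal sender addresses

# /etc/postfix/main.cf
smtpd_sender_restrictions = check_sender_access hash:/etc/postfix/sender_access

# build the map and reload
postmap /etc/postfix/sender_access
postfix reload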

DHCP server failover with dynamic bootp range declaration

Posted: 09 Dec 2021 04:49 AM PST

I am trying to configure failover for an ISC DHCP server running on Linux that has a dynamic-bootp range declaration.

The configuration file looks like this (I am trying it locally first, therefore private ranges):

authoritative;
log-facility local7;
shared-network "vm-net" {
    failover peer "failover-partner" {
        secondary;
        address 192.168.122.4;
        port 647;
        peer address 192.168.122.3;
        peer port 647;
        max-response-delay 60;
        max-unacked-updates 10;
        load balance max seconds 3;
    }

    subnet 192.168.122.0 netmask 255.255.255.128 {
        pool {
            failover peer "failover-partner";
            max-lease-time 1800;
            range 192.168.122.0 192.168.122.127;
        }
        deny unknown-clients;
    }

    subnet 192.168.122.128 netmask 255.255.255.128 {
        pool {
            failover peer "failover-partner";
            max-lease-time 1800;
            range dynamic-bootp 192.168.122.128 192.168.122.255;
        }
        deny unknown-clients;
    }
}

However the daemon reload fails with the following syslog error message:

Dec  7 14:59:07 dhcpmaster1 dhcpd[4397]: range declarations where there is a failover
Dec  7 14:59:07 dhcpmaster1 dhcpd[4397]: peer in scope.   If you wish to declare an
Dec  7 14:59:07 dhcpmaster1 dhcpd[4397]: address range from which dynamic bootp leases
Dec  7 14:59:07 dhcpmaster1 dhcpd[4397]: can be allocated, please declare it within a
Dec  7 14:59:07 dhcpmaster1 dhcpd[4397]: pool declaration that also contains the "no
Dec  7 14:59:07 dhcpmaster1 dhcpd[4397]: failover" statement.   The failover protocol
Dec  7 14:59:07 dhcpmaster1 dhcpd[4397]: itself does not permit dynamic bootp - this
Dec  7 14:59:07 dhcpmaster1 dhcpd[4397]: is not a limitation specific to the ISC DHCP
Dec  7 14:59:07 dhcpmaster1 dhcpd[4397]: server.   Please don't ask me to defend this
Dec  7 14:59:07 dhcpmaster1 dhcpd[4397]: until you have read and really tried to understand
Dec  7 14:59:07 dhcpmaster1 dhcpd[4397]: the failover protocol specification.
Dec  7 14:59:07 dhcpmaster1 dhcpd[4397]: Configuration file errors encountered -- exiting

Does this mean that the ISC DHCP failover protocol does not support failover of dynamic-bootp ranges? Or is there any other way to configure it?
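Going by the error text itself, failover cannot cover dynamic BOOTP, but the range can still be declared in a pool that opts out of failover. A hedged sketch of the second subnet (addresses copied from the question):

subnet 192.168.122.128 netmask 255.255.255.128 {
    pool {
        no failover;
        max-lease-time 1800;
        range dynamic-bootp 192.168.122.128 192.168.122.255;
    }
    deny unknown-clients;
}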

I cannot find any further information in the man pages, and isc.org appears to be down/unreachable.

Any advice would be appreciated.

Get kubernetes pods to take on a specific identity / role / job once started

Posted: 09 Dec 2021 04:44 AM PST

What I am presenting here is a simplified version of my setup and the syntax is wrong.

I have a list of high level "agents" that I want to deploy as pods on a kubernetes cluster.

Some nodes in this cluster have special hardware, some don't, but all the pods should use the same container.

Could look like (again: syntax is wrong):

agent1 wlan
...
agent8 wlan
agent9 wlan rs232.1
agent10 wlan rs232.2

I set up a deployment that roughly looks like this (the syntax is wrong):

deployment (standard nodes)
  replicas: 8
---
deployment (rs232, terminal 1)
  replicas: 1
  nodeSelector:
    rs232=1
  env:
    rs232=1 (because nodeSelector can't be passed nicely afaik)
---
deployment (rs232, terminal 2)
  replicas: 1
  nodeSelector:
    rs232=2
  env:
    rs232=2

I would like each of these agents to start up and take on one identity from the list, while having the matching hardware of course.

So like a pod would talk to a role distribution service like that:

pod - Hi, I'm pod/container-xyz and I have env/label/annotation rs232=2
service - Hi, well then you are now agent10
pod - OK I'll do the work of agent10 then
(service - cool, I'll add agent10 as an alias to you in the cluster DNS) <- that will be my next question

Same thing for agents with no special hardware: the service gives them one agent role each.

I tried something with StatefulSet but that does not fit the bill (because of the various nodeSelector and they are super slow to start).

I started to think about a dedicated self-implemented server pod that would keep my agent list and lease roles to matching hardware (a bit like a DHCP server), but I'm pretty sure I am not the only one with this problem and there must be a solution out there. I looked quickly at ZooKeeper, but I don't think it is for me. I am probably not finding the solution because I'm not using the correct vocabulary, I guess.
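One primitive that may fit: the Kubernetes coordination API has Lease objects, commonly used for leader election and identity claiming. A hedged sketch: one Lease per agent identity, which each matching pod tries to claim by writing its own name as holder (all names and the label are illustrative):

apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: agent10
  labels:
    hardware: rs232.2   # hypothetical label so only matching pods claim this identity
spec:
  holderIdentity: ""    # a pod writes its own name here to claim the identity
  leaseDurationSeconds: 30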

Has anybody got a good idea? Am I going in the right direction?

MySQL is 100% CPU over the past week, no idea why, details below

Posted: 09 Dec 2021 08:17 AM PST

I'm trying to work out why MySQL has been hitting 100% CPU over the past week:

https://i.imgur.com/NQH5dR0.png

It's a server that does network/server monitoring called LibreNMS.

Top: https://pastebin.com/1qypfm0n

Here I access MySQL and look at the processes: https://pastebin.com/zCVMh2cL

and full processes https://pastebin.com/0ZFhqpiF

I then found a tool called MySQLTuner, but I'm not experienced enough to interpret the results: https://pastebin.com/yBQzCuQU
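A hedged first step is to catch the offending queries while the CPU is pegged (standard MySQL statements; the threshold is just an example):

-- what is running right now?
SHOW FULL PROCESSLIST;

-- log anything slower than 1 second for later analysis
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;
SHOW VARIABLES LIKE 'slow_query_log_file';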

Any advice would be most appreciated.

why domains like mail.google.com or music.youtube.com don't work with www [migrated]

Posted: 09 Dec 2021 06:39 AM PST

If we try to access www.mail.google.com or www.music.youtube.com, they don't work. Why? And if we wanted them to work, how could this be achieved?
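The short answer is that no DNS record exists for the four-label name unless someone creates one, so the lookup fails outright. For a zone you control, a hedged zone-file sketch of making www.mail work (names are examples; the web server must also be configured to answer for that hostname):

; in the zone for example.com
mail       IN A      192.0.2.10
www.mail   IN CNAME  mail.example.com.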

Blocked loading mixed active content

Posted: 09 Dec 2021 05:38 AM PST

I am using React as my front end and Node.js as a backend running on localhost:3016, with nginx as a reverse proxy and load balancer; this is my nginx conf file for the site:

upstream load_balance {
    #least_conn;
    #ip_hash;
    server  localhost:3016;
    #server localhost:8010;
    #server localhost:8011;
    #server localhost:8012;
    #server localhost:8013;
    #server localhost:8014;
    #server localhost:8015;
    #server localhost:8016;
    #server localhost:8017;
    #server localhost:8018;
}

server {
    # SSL configuration
    #
    # listen 443 ssl default_server;
    # listen [::]:443 ssl default_server;
    #
    # Note: You should disable gzip for SSL traffic.
    # See: https://bugs.debian.org/773332
    #
    # Read up on ssl_ciphers to ensure a secure configuration.
    # See: https://bugs.debian.org/765782
    #
    # Self signed certs generated by the ssl-cert package
    # Don't use them in a production server!
    #
    # include snippets/snakeoil.conf;
    # listen [::]:443 ssl ipv6only=on; # managed by Certbot

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/ethiolive.net/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/ethiolive.net/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    #root /var/www/html;

    # Add index.php to the list if you are using PHP
    #index index.html index.htm index.nginx-debian.html;

    server_name ethiolive.net www.ethiolive.net;

    add_header 'Access-Control-Allow-Origin' '*' always;
    add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, DELETE' always;
    add_header 'Access-Control-Allow-Headers' 'X-Requested-With,Accept,Content-Type, Origin,x-auth' always;
    #default_type application/json;

    location /api {
        proxy_pass  http://load_balance;
        proxy_http_version  1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host  $host;
        proxy_cache_bypass $http_upgrade;
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        # try_files $uri $uri/ =404;
        if ($request_method = 'OPTIONS') {
            add_header 'Access-Control-Allow-Origin' '*';
            add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
            #
            # Custom headers and headers various browsers *should* be OK with but aren't
            #
            add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
            #
            # Tell client that this pre-flight info is valid for 20 days
            #
            add_header 'Access-Control-Max-Age' 1728000;
            add_header 'Content-Type' 'text/plain; charset=utf-8';
            add_header 'Content-Length' 0;
            return 204;
        }
        if ($request_method = 'POST') {
            add_header 'Access-Control-Allow-Origin' '*' always;
            add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS' always;
            add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range' always;
            add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range' always;
        }
        if ($request_method = 'GET') {
            add_header 'Access-Control-Allow-Origin' '*' always;
            add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS' always;
            add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range' always;
            add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range' always;
        }
    }

    location / {
        root /var/www/html/LiveStream/LiveStream-frontend/users/build;
        index index.html index.htm;
    }

    location /admin {
        root /var/www/html/LiveStream/LiveStream-frontend/admin/build;
        index index.html index.htm;
    }

    location /socket/ {
        proxy_pass http://load_balance/socket.io/;
        proxy_redirect     off;
        proxy_http_version 1.1;
        proxy_set_header   Upgrade          $http_upgrade;
        proxy_set_header   Connection       "upgrade";
        proxy_set_header   Host             $host;
        proxy_set_header   X-Real-IP        $remote_addr;
        proxy_set_header   X-Forwarded-For  $proxy_add_x_forwarded_for;
    }

    location /socket.io/ {
        proxy_pass http://load_balance/socket.io/;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
    }
}

I get this error:

2.2664eafa.chunk.js:2 Mixed Content: The page at 'https://www.ethiolive.net/' was loaded over HTTPS, but requested an insecure XMLHttpRequest endpoint 'http://192.168.8.101:3016/catagorey/getMainCategorie'. This request has been blocked; the content must be served over HTTPS.

from Chrome's console. I think I am missing something in the nginx configuration. Thanks!
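Going by the blocked URL, the React build calls the backend directly at http://192.168.8.101:3016 instead of going through the HTTPS proxy that nginx already exposes under /api. A hedged sketch of the frontend-side fix (how the base URL is defined in this app is an assumption):

// before (hypothetical): bypasses nginx, plain HTTP, gets blocked
// const API_BASE = 'http://192.168.8.101:3016';

// after: same-origin HTTPS path that nginx proxies to the upstream
const API_BASE = 'https://www.ethiolive.net/api';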

netplan apply/try/generate ends with ERROR

Posted: 09 Dec 2021 07:53 AM PST

We have cloud infrastructure based on VMware with Windows and Linux VMs. After the last reboot, 4 of the Ubuntu servers (3 on Ubuntu 20.04 and one on Ubuntu 16.04) did not bring up their network interface. With lshw -class network I see the correct network interface listed. There is no DHCP in the network; all servers use static IPs. After reboot, the OPERATIONAL column in networkctl for the specific interface is OFF. The only way to get the network working is the following ip command sequence, but after a reboot everything is gone:

$ ip link set <link_name> up
$ ip addr add <server-ip>/24 dev <link_name>
$ ip route add default via <gateway> dev <link_name>

It looks like the problem is with netplan. I have a netplan config that is deployed together with the server when it is created from a template, and it works great on all the other Ubuntu servers in this infrastructure except those 4. It also worked on those servers until this week's reboot (we usually update and reboot once a month). The config looks like this:

network:
  version: 2
  renderer: networkd
  ethernets:
    <link_name>:
      dhcp4: no
      dhcp6: no
      addresses:
        - <server_ip>/24
      gateway4: <gateway>
      nameservers:
        search:
          - <domain>
        addresses:
          - <dns_1>
          - <dns_2>

But when trying netplan apply, netplan generate or netplan try, it returns a strange ERROR that I can't find anything about on the internet. (I substituted my gateway IP with <correct_gateway> and the other IP in this output with <some_random_ip> for security purposes.)

ERROR:src/parse.c:1120:handle_gateway4: assertion failed (scalar(node) == cur_netdef->gateway4): ("<correct_gateway>" == "<some_random_ip>")
Bail out! ERROR:src/parse.c:1120:handle_gateway4: assertion failed (scalar(node) == cur_netdef->gateway4): ("<correct_gateway>" == "<some_random_ip>")

If I add an indentation mistake to the *.yaml config file, it returns a normal error message that points to the mistake.

I tried reinstalling netplan.io without any luck and don't have an idea what to try next.
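The assertion compares two different gateway4 values, which hints that the same interface may be defined in more than one netplan file with conflicting gateways. A hedged check across all the directories netplan reads:

# look for duplicate interface/gateway definitions
grep -rn "gateway4" /etc/netplan/ /run/netplan/ /lib/netplan/ 2>/dev/null
ls -l /etc/netplan/ /run/netplan/ /lib/netplan/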

fail2ban error with ssh-ddos Unable to read the filter 'sshd-ddos'

Posted: 09 Dec 2021 05:01 AM PST

I am trying to get fail2ban working with ssh. I have changed the ssh port to 900 and have the following in /etc/fail2ban/jail.local:

[sshd]
enabled = true
port = 900
logpath = %(sshd_log)s

This works without any issues. However when I add an entry for [sshd-ddos]:

[sshd-ddos]
# This jail corresponds to the standard configuration in Fail2ban.
# The mail-whois action send a notification e-mail with a whois request
# in the body.
enabled = true
port = 900
logpath = %(sshd_log)s

and run sudo service fail2ban restart and then check the status with sudo systemctl status fail2ban, I can see there is an error:

   Loaded: loaded (/lib/systemd/system/fail2ban.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2020-09-22 17:06:29 CST; 4s ago
     Docs: man:fail2ban(1)
  Process: 7477 ExecStop=/usr/bin/fail2ban-client stop (code=exited, status=0/SUCCESS)
  Process: 7478 ExecStartPre=/bin/mkdir -p /var/run/fail2ban (code=exited, status=0/SUCCESS)
 Main PID: 7483 (fail2ban-server)
    Tasks: 3 (limit: 1107)
   CGroup: /system.slice/fail2ban.service
           └─7483 /usr/bin/python3 /usr/bin/fail2ban-server -xf start

Sep 22 17:06:29 twitter-builder systemd[1]: Stopped Fail2Ban Service.
Sep 22 17:06:29 twitter-builder systemd[1]: Starting Fail2Ban Service...
Sep 22 17:06:29 twitter-builder systemd[1]: Started Fail2Ban Service.
Sep 22 17:06:29 twitter-builder fail2ban-server[7483]:  Found no accessible config files for 'filter.d/sshd-ddos' under /etc/fail2ban
Sep 22 17:06:29 twitter-builder fail2ban-server[7483]:  Unable to read the filter 'sshd-ddos'
Sep 22 17:06:29 twitter-builder fail2ban-server[7483]:  Errors in jail 'sshd-ddos'. Skipping...
Sep 22 17:06:29 twitter-builder fail2ban-server[7483]: Server ready

What am I missing here?
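For what it's worth, fail2ban 0.10+ removed the separate sshd-ddos filter and folded it into the sshd filter as a mode, so on newer versions there is simply no filter.d/sshd-ddos.conf. A hedged jail.local sketch using the merged filter:

[sshd]
enabled = true
port = 900
logpath = %(sshd_log)s
# "aggressive" combines the normal and ddos matchers; "ddos" alone also works
mode = aggressive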

Recover deleted LVM signature

Posted: 09 Dec 2021 07:07 AM PST

I think I've deleted an LVM partition. I expanded the datastore in VMware (70 GB to 90 GB), then ran echo 1 > /sys/class/block/sdb/device/rescan.

After that I ran fdisk /dev/sdb. It showed my sdb is 90 GB, but with this warning: The old LVM2_member signature will be removed by a write command. I entered w, which I guess was a bad idea. Now none of these commands show anything: lvs, vgs, pvs.

With blkid I see my sdb UUID has changed:

/dev/sda1: UUID="b96e5429-d28e-4102-9085-4f303642a26e" TYPE="ext4" PARTUUID="0ab90198-01"
/dev/mapper/vg00-vol_db: UUID="4ed1927e-620a-4bf9-b656-c208f31e6ea3" TYPE="ext4"
/dev/sdb: PTUUID="d6c28699" PTTYPE="dos"

I ran vgcfgrestore vg00 --test -f vg00_00001-2029869851.vg, which is the last archive file before today's changes (from 2 months ago, when I created the LVM). But it returned:

TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
Couldn't find device with uuid 4deOKh-FeJz-8JqG-SAyX-KviL-UGu4-PtJ138.
Cannot restore Volume Group vg00 with 1 PVs marked as missing.
Restore failed.

How can I restore this mess? Thanks a lot.
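A hedged recovery sketch for this situation: recreate the PV label with the UUID the archived metadata expects, then restore the VG metadata, but only if the archive really matches the disk (the UUID and filename are taken from the question; keep using --test until you're confident):

# recreate the PV label with the UUID the metadata expects
pvcreate --uuid "4deOKh-FeJz-8JqG-SAyX-KviL-UGu4-PtJ138" \
         --restorefile /etc/lvm/archive/vg00_00001-2029869851.vg /dev/sdb

# then restore the VG metadata and reactivate
vgcfgrestore vg00 -f /etc/lvm/archive/vg00_00001-2029869851.vg
vgchange -ay vg00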

Why is OpenDMARC using my (the recipients) configuration for incoming mail?

Posted: 09 Dec 2021 08:03 AM PST

Recently I've had some incoming emails rejected by my mail server for failing DMARC checks. Upon closer inspection I noticed that the logs mention the rejection happened because OpenDMARC was applying my policy instead of the sender's policy. A slightly redacted example of a failed exchange, sending mail from info@random.tld to random.tld@mydomain.tld, would be:

postfix/smtpd[19698]: connect from mail-eopbgr80052.outbound.protection.outlook.com[40.107.8.52]
postfix/smtpd[19698]: Anonymous TLS connection established from mail-eopbgr80052.outbound.protection.outlook.com[40.107.8.52]: TLSv1.2 with cipher ECDHE-RSA-AES256-SHA384 (256/256 bits)
policyd-spf[19706]: Pass; identity=helo; client-ip=40.107.8.52; helo=eur04-vi1-obe.outbound.protection.outlook.com; envelope-from=info@random.tld; receiver=random.tld@mydomain.tld
policyd-spf[19706]: Pass; identity=mailfrom; client-ip=40.107.8.52; helo=eur04-vi1-obe.outbound.protection.outlook.com; envelope-from=info@random.tld; receiver=random.tld@mydomain.tld
postfix/smtpd[19698]: 3578F66A0006: client=mail-eopbgr80052.outbound.protection.outlook.com[40.107.8.52]
postfix/cleanup[19707]: 3578F66A0006: message-id=<AM5PR0201MB22602B5B4998B49514A63C76B2540@AM5PR0201MB2260.eurprd02.prod.outlook.com>
opendkim[598]: 3578F66A0006: mail-eopbgr80052.outbound.protection.outlook.com [40.107.8.52] not internal
opendkim[598]: 3578F66A0006: not authenticated
opendkim[598]: 3578F66A0006: failed to parse Authentication-Results: header field
opendkim[598]: 3578F66A0006: DKIM verification successful
opendkim[598]: 3578F66A0006: s=selector1-random.tld d=random.onmicrosoft.com SSL
opendmarc[605]: implicit authentication service: mail.mydomain.tld
opendmarc[605]: 3578F66A0006 ignoring Authentication-Results at 1 from vps.mydomain.tld
opendmarc[605]: 3578F66A0006: mydomain.tld fail
postfix/cleanup[19707]: 3578F66A0006: milter-reject: END-OF-MESSAGE from mail-eopbgr80052.outbound.protection.outlook.com[40.107.8.52]: 5.7.1 rejected by DMARC policy for mydomain.tld; from=<info@random.tld> to=<random.tld@mydomain.tld> proto=ESMTP helo=<EUR04-VI1-obe.outbound.protection.outlook.com>
postfix/smtpd[19698]: disconnect from mail-eopbgr80052.outbound.protection.outlook.com[40.107.8.52] ehlo=2 starttls=1 mail=1 rcpt=1 data=0/1 quit=1 commands=6/7

Note the third line from the bottom. In this specific case the sender does not have their own DMARC policy. Other emails sent from info@random.tld tend to arrive just fine.

In the last year it has also failed once when forwarding my work account and once when forwarding my university account.

Can this behavior be caused by a certain (mis)configuration on either end, or is this a bug?

I am running OpenDMARC version 1.3.1, with the following config, trimmed for clarity:

##  AuthservID (string)
##      defaults to MTA name
#
AuthservID mail.mydomain.tld
PidFile /var/run/opendmarc.pid
##  RejectFailures { true | false }
##      default "false"
##
RejectFailures true
Syslog true
UserID opendmarc:opendmarc
PublicSuffixList /usr/share/publicsuffix/
IgnoreAuthenticatedClients true
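One detail from the log: OpenDMARC reports "ignoring Authentication-Results at 1 from vps.mydomain.tld" while AuthservID is mail.mydomain.tld, so the upstream SPF/DKIM results get discarded and the message is evaluated as unauthenticated. A hedged tweak is to trust both identifiers (TrustedAuthservIDs is a standard opendmarc.conf option):

AuthservID mail.mydomain.tld
# accept Authentication-Results stamped by either host in this chain
TrustedAuthservIDs mail.mydomain.tld,vps.mydomain.tld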

Nginx - can "if" be safely used at the level of a server block?

Posted: 09 Dec 2021 08:05 AM PST

The Nginx docs warn in no uncertain terms that if is evil and should be avoided wherever possible, and there are similar warnings scattered across the length and breadth of the internet.

However, most of these warnings focus specifically on how badly if behaves in location blocks. Furthermore, the Nginx docs say that:

The only 100% safe things which may be done inside if in a location context are:

  • return ...;

  • rewrite ... last;

My question is: Is it safe to use an if at the level of the server block (rather than a location block) if the only directive contained therein is a return? For example, the following www-to-non-www redirect:

server {
    listen 80;
    server_name example.com www.example.com;

    if ($http_host != example.com) {
        return 301 http://example.com$request_uri;
    }

    # more config
}

Secondary question: I'm aware that the recommended method for doing this sort of thing is with two server blocks that have different values for server_name. If this is an acceptable use for if, is there any reason to still use two separate server blocks?
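For comparison, a sketch of the two-server-block form the question mentions; it avoids if entirely by letting nginx's host matching do the work:

server {
    listen 80;
    server_name www.example.com;
    return 301 http://example.com$request_uri;
}

server {
    listen 80;
    server_name example.com;
    # more config
}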

GET to NGINX reverse proxy works but POST gives me a 502 Bad Gateway Response

Posted: 09 Dec 2021 07:07 AM PST

I am using NGINX as a reverse proxy and just put up a new service written in Go.

The service has two endpoints

GET /tracking/ping
POST /tracking/customer

In NGINX, I am using the following to proxy the request

location /v1/location/ {
    proxy_pass http://path-to-tracking-service:8181/;
}

When I curl the two endpoints as follows, I get different results.

The GET /tracking/ping endpoint

curl -X GET https://example.com/v1/location/tracking/ping
"Pong!"

The POST /tracking/customer endpoint

curl -H "Content-Type: application/json" -d '{"userId":"1234"}' https://example.com/v1/location/tracking/customer
<html>
<head><title>502 Bad Gateway</title></head>
<body bgcolor="white">
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.9.12</center>
</body>

Not sure why this would happen. I am proxying other services I have and POST requests work perfectly fine.

Here is the nginx.conf

user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    proxy_connect_timeout       600;
    proxy_send_timeout          600;
    proxy_read_timeout          600;
    send_timeout                600;

    #gzip  on;

    #include /etc/nginx/conf.d/*.conf;

    #server {
        #include /etc/nginx/sites-enabled/*;
    #}

    server {
        listen 80;
        server_name *.example.com;
        #return 301 https://$host$request_uri;
        include /etc/nginx/sites-enabled/*;
    }

    server {
        #listen 80;
        listen 443 ssl;
        server_name *.example.com;

        ssl_certificate           /etc/ssl/example.crt;
        ssl_certificate_key       /etc/ssl/example.key;

        #ssl on;
        ssl_session_cache  builtin:1000  shared:SSL:10m;
        ssl_protocols  TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
        ssl_prefer_server_ciphers on;

        include /etc/nginx/sites-enabled/*;
    }
}

I have separate files that are being linked to /sites-enabled that include my proxy_params declarations.

Two of them are the following

location /v1/location/ {
    proxy_pass http://example.com:8181/;
}

location /v1/ {
    proxy_pass http://example.com:8282/;
}

I could see there maybe being an issue with the /v1 prefix on both proxies confusing things, but it works for the GET endpoint.

EDIT

Some people have brought up the point that the service may be panicking, so I checked the Docker logs for the Go container and got the following:

location-tracking-staging-1 | 2016-03-14T02:35:33.580963673Z 2016/03/14 02:35:33 http: panic serving 10.7.1.5:35613: no reachable servers
location-tracking-staging-1 | 2016-03-14T02:35:33.581005488Z goroutine 97 [running]:
location-tracking-staging-1 | 2016-03-14T02:35:33.581012905Z net/http.(*conn).serve.func1(0xc820057b00)
location-tracking-staging-1 | 2016-03-14T02:35:33.581017348Z    /usr/local/go/src/net/http/server.go:1389 +0xc1
location-tracking-staging-1 | 2016-03-14T02:35:33.581030498Z panic(0x81e620, 0xc82013c5e0)
location-tracking-staging-1 | 2016-03-14T02:35:33.581034545Z    /usr/local/go/src/runtime/panic.go:426 +0x4e9
location-tracking-staging-1 | 2016-03-14T02:35:33.581038792Z main.RepoCreateVendorLocation(0xc82011ecb8, 0x4, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
location-tracking-staging-1 | 2016-03-14T02:35:33.581042502Z    /go/src/location-tracking/repo.go:19 +0x178
location-tracking-staging-1 | 2016-03-14T02:35:33.581047145Z main.VendorLocationCreate(0x7f8a4366d978, 0xc8200c2ea0, 0xc820119260)
location-tracking-staging-1 | 2016-03-14T02:35:33.581050747Z    /go/src/location-tracking/handlers.go:63 +0x47b
location-tracking-staging-1 | 2016-03-14T02:35:33.581054911Z net/http.HandlerFunc.ServeHTTP(0x9965b0, 0x7f8a4366d978, 0xc8200c2ea0, 0xc820119260)
location-tracking-staging-1 | 2016-03-14T02:35:33.581058786Z    /usr/local/go/src/net/http/server.go:1618 +0x3a
location-tracking-staging-1 | 2016-03-14T02:35:33.581062770Z github.com/gorilla/mux.(*Router).ServeHTTP(0xc820010640, 0x7f8a4366d978, 0xc8200c2ea0, 0xc820119260)
location-tracking-staging-1 | 2016-03-14T02:35:33.581066604Z    /go/src/github.com/gorilla/mux/mux.go:103 +0x270
location-tracking-staging-1 | 2016-03-14T02:35:33.581070176Z net/http.serverHandler.ServeHTTP(0xc820056300, 0x7f8a4366d978, 0xc8200c2ea0, 0xc820119260)
location-tracking-staging-1 | 2016-03-14T02:35:33.581073992Z    /usr/local/go/src/net/http/server.go:2081 +0x19e
location-tracking-staging-1 | 2016-03-14T02:35:33.581077629Z net/http.(*conn).serve(0xc820057b00)
location-tracking-staging-1 | 2016-03-14T02:35:33.581081221Z    /usr/local/go/src/net/http/server.go:1472 +0xf2e
location-tracking-staging-1 | 2016-03-14T02:35:33.581084811Z created by net/http.(*Server).Serve
location-tracking-staging-1 | 2016-03-14T02:35:33.581088336Z    /usr/local/go/src/net/http/server.go:2137 +0x44e
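So nginx's 502 is just the upstream dying mid-request: "no reachable servers" is the error the mgo MongoDB driver returns when it cannot connect, and the repo layer appears to panic on it (that this service uses mgo is an inference from the error string). A hedged sketch of handling the dial failure instead of panicking:

package main

import (
	"log"
	"net/http"

	mgo "gopkg.in/mgo.v2"
)

func vendorLocationCreate(w http.ResponseWriter, r *http.Request) {
	// sketch: surface the dial error instead of letting it panic the handler
	session, err := mgo.Dial("mongodb://mongo:27017") // placeholder URL
	if err != nil {
		log.Printf("mongo dial failed: %v", err)
		http.Error(w, "storage unavailable", http.StatusServiceUnavailable)
		return
	}
	defer session.Close()
	// ... perform the insert with session ...
	w.WriteHeader(http.StatusCreated)
}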

Recreate XFS Partition with Existing Filesystem

Posted: 09 Dec 2021 06:01 AM PST

I have a CentOS server with two RAID arrays: the OS array and a DATA array. Long story short, I recently had tons of trouble with the OS and had to reload CentOS (was 5.7, went to 6.5). Everything is working with the OS portion now. However, I'm having an issue with my DATA array.

The array shows up as /dev/sdb, but I used to have a partition (/dev/sdb1) where all my data is stored. The server doesn't see /dev/sdb1 anymore, so my best guess is the partition table is somehow messed up. When I print in parted, it lists the partition table as loop:

NON-WORKING SERVER:
Model: Adaptec DATA (scsi)
Disk /dev/sdb: 59.9TB
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number  Start  End     Size    File system  Flags
 1      0.00B  59.9TB  59.9TB  xfs

I have another server that is an exact duplicate of this one and it appears correctly in parted and the /dev/sdb1 is visible:

WORKING SERVER:
Model: Adaptec STORAGE (scsi)
Disk /dev/sdb: 59.9TB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 1      17.4kB  59.9TB  59.9TB  xfs          primary

Is there any way for me to fix the partition so that I don't destroy my data? I desperately need to keep the data but just can't seem to figure out if there is a way to fix the partition and/or why it's showing up as loop.

Thank you for any help!!

ADDITIONAL INFORMATION:

fdisk -l /dev/sdb:

Disk /dev/sdb: 59914.8 GB, 59914783293440 bytes
255 heads, 63 sectors/track, 7284224 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xf0b62000

blkid:

/dev/sda1: UUID="9de0d036-a86f-4557-8e09-a5ccd33c66be" TYPE="ext4"
/dev/sda2: UUID="VYyo4L-lkdG-GivI-a6eM-pFfX-TOoV-E1LxcF" TYPE="LVM2_member"
/dev/mapper/VolGroup-lv_root: UUID="18bbe93e-9b67-4343-8d71-71bd087ab145" TYPE="ext4"
/dev/sdb: LABEL="Data" UUID="dfda2895-d1cd-4b3e-8453-e5c51c093260" TYPE="xfs"
/dev/mapper/VolGroup-lv_swap: UUID="d71f193f-acd4-4aea-8d11-be2acd4575f3" TYPE="swap"
/dev/mapper/VolGroup-lv_home: UUID="223b2be8-f9e9-4671-bc84-e5aa5f73b697" TYPE="ext4"

parted /dev/sdb unit s print (NON-WORKING SERVER):

Model: Adaptec DATA (scsi)
Disk /dev/sdb: 117021061120s
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number  Start  End            Size           File system  Flags
 1      0s     117021061119s  117021061120s  xfs

parted /dev/sdb unit s print (WORKING SERVER):

Model: Adaptec STORAGE (scsi)
Disk /dev/sdb: 117021061119s
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start  End            Size           File system  Name     Flags
 1      34s    117021061086s  117021061053s  xfs          primary
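Worth noting before rewriting anything: blkid already detects an XFS filesystem directly on /dev/sdb, and parted's "loop" table just means "whole-disk filesystem, no real partition table". So the data may be intact and mountable without /dev/sdb1. A hedged, read-only test:

mkdir -p /mnt/data-test
# norecovery avoids replaying the XFS log on a possibly inconsistent device
mount -t xfs -o ro,norecovery /dev/sdb /mnt/data-test
ls /mnt/data-test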

site to site vpn between sonicwall and pfsense

Posted: 09 Dec 2021 08:03 AM PST

The problem I am facing is establishing a site-to-site VPN between pfSense (version 2.0.1) and a SonicWall Pro 2040 Enhanced (Firmware Version: SonicOS Enhanced 4.2.1.4-7e). All of the configuration is done properly, but I still get the following error in SonicWall (screenshot):

Phase 1 and 2 pass properly, but there is a problem with "Payload processing". I found that it could be due to a shared key mismatch, but I double-checked and there is no mismatch with the shared key in either firewall. SonicWall also shows that the tunnel is active (screenshot).

The log from pfSense is below (screenshot).

In pfSense the tunnel shows inactive .

I am not an expert in firewalls, so I would be grateful to receive proper guidance in this regard.

Apache to listen to multiple ports

Posted: 09 Dec 2021 05:01 AM PST

I have managed to set up multiple virtual NICs in one VM and bind websites to different IPs on one server, e.g., accessing 2 websites via:

http://10.188.2.150
http://10.188.2.152    <-- both IPs belong to the same server

The drawback is that in VirtualBox, one VM can have four network cards at most.

Then I tried the following: bind 2 websites to the same IP address but different ports, so that I can access the 2 websites like this:

http://10.188.2.150:8003  http://10.188.2.150:8004  

(I have checked with 'lsof -i :[port]' that these 2 ports are not in use.)

But I cannot access the website by opening, e.g., http://10.188.2.150:8003

I set up my httpd.conf like this:

Listen 10.188.2.150:8003
Listen 10.188.2.152:8004

<VirtualHost 10.188.2.150:8003>
ServerName 10.188.2.150:8002
ServerAdmin admin@localhost
DocumentRoot /var/www/html/panda-web/html
<Directory /var/www/html/panda-web/html>
  AllowOverride All
</Directory>
RewriteEngine On
RewriteOptions Inherit
</VirtualHost>

<VirtualHost 10.188.2.150:8004>
ServerName 10.188.2.150:8004
ServerAdmin admin@localhost
DocumentRoot /var/www/html/sheep-web/html
<Directory /var/www/html/sheep-web/html>
  AllowOverride All
</Directory>
RewriteEngine On
RewriteOptions Inherit
</VirtualHost>

I've looked at the Apache website but I must have missed something. Has anyone tried this?
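One detail that stands out in the quoted config: the second Listen binds 10.188.2.152:8004 while the matching VirtualHost is 10.188.2.150:8004, and the first ServerName says port 8002 instead of 8003. If both sites are meant to live on .150, a hedged correction of just those lines:

Listen 10.188.2.150:8003
Listen 10.188.2.150:8004

# and inside the first vhost:
ServerName 10.188.2.150:8003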

nohup.out file created as empty

Posted: 09 Dec 2021 06:01 AM PST

I'm trying to create a thread dump of a Java process, and for that I'm using nohup with the kill -3 command. The nohup.out file is created, but it's empty. Even when I redirect the output to other locations, the created file is empty too.

Can anybody please help me out?
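A hedged note on why the file is empty: kill -3 makes the JVM write the thread dump to its own stdout/stderr, not to the output of the kill command, so nohup.out of the kill will always be empty. Two standard ways to capture the dump (the pid is a placeholder):

# 1) use jstack from the JDK to write the dump to a file
jstack <pid> > /tmp/threaddump.txt

# 2) or send SIGQUIT and read the Java process's own stdout/console log
kill -3 <pid>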

Thanks Ram
