Wednesday, December 15, 2021

Recent Questions - Server Fault



Unable to send emails to outside domain. I can receive but can't send

Posted: 15 Dec 2021 09:07 AM PST

I am able to send and receive emails to my own domain's email IDs but can't send to outside domains. There is no error: I can receive emails from anyone and also see the messages in the Sent folder, but the recipients never get them.
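
As a starting point for debugging, a minimal sketch (assuming a Postfix-style MTA on Linux; the log path and the test host are only examples):

mailq                                    # is outbound mail stuck in the queue?
tail -n 200 /var/log/mail.log            # what did the remote server answer (deferred, bounced, rejected as spam)?
nc -vz gmail-smtp-in.l.google.com 25     # is outbound port 25 blocked by the provider?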

Create sudo rules on freeipa using script

Posted: 15 Dec 2021 08:44 AM PST

I need a script that creates sudo rules on FreeIPA from a CSV file.

Can someone help me with this script?

Thank you
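
For reference, a minimal sketch of such a script, assuming the ipa CLI is available, an admin Kerberos ticket exists (kinit admin), and the CSV has the columns rule_name,user,host,command:

#!/bin/bash
# create FreeIPA sudo rules from a CSV file (assumed columns: rule_name,user,host,command)
CSV_FILE="${1:-sudo_rules.csv}"

while IFS=, read -r rule user host command; do
    ipa sudorule-add "$rule" 2>/dev/null          # ignore "already exists"
    ipa sudorule-add-user "$rule" --users="$user"
    ipa sudorule-add-host "$rule" --hosts="$host"
    ipa sudocmd-add "$command" 2>/dev/null        # the sudo command object must exist first
    ipa sudorule-add-allow-command "$rule" --sudocmds="$command"
done < "$CSV_FILE"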

Access Denied when mounting Kerberised NFS v4 Share

Posted: 15 Dec 2021 08:37 AM PST

I want to mount an NFS4 share, but with Kerberos security enabled. This is my setup:

  • Debian Server (nfsv4test)

  • Debian Client (nfsv4client)

  • Windows ADC, acts also as KDC

  • My realm is REALM.EXAMPLE.ORG

  • The subnet where both Debian machines are located is called subnet.example.org

  • There is no NAT going on.

  • Both machines are up-to-date.

As I'm still struggling with Kerberos, here is how I tried to achieve my goal:

Chapter I: Setup

1- Put both machines in the same Realm/Domain (This has already been set up by others and works)

2- Created two users (users, not computers!) per machine: nfs-nfsv4client, host-nfsv4client, nfs-nfsv4test and host-nfsv4test. After creating them I enabled AES 256-bit encryption for all of the accounts.

3- Set a service principal for the users, but with all 3 notations to cover all cases:

setspn -S nfs/nfsv4test nfs-nfsv4test
setspn -S nfs/nfsv4test.subnet.example.org nfs-nfsv4test
setspn -S nfs/nfsv4test.subnet.example.org@REALM.EXAMPLE.ORG nfs-nfsv4test

I did this for all 4 users/principals.

3- Created the keytabs on the Windows KDC:

ktpass -princ host/nfsv4test.subnet.example.org@REALM.EXAMPLE.ORG +rndPass -mapuser host-nfsv4test@REALM.EXAMPLE.ORG -pType KRB5_NT_PRINCIPAL -out c:\temp\host-nfsv4test.keytab -crypto AES256-SHA1  

So after that I had 4 keytabs.

4- Merged the keytabs on the server (and client):

ktutil
  read_kt host-nfsv4test.keytab
  read_kt nfs-nfsv4test.keytab
  write_kt /etc/krb5.keytab

The file has 640 permissions.

5- Exported the directories on the server; this has already worked without kerberos. With Kerberos enabled, the export file looks like this:

/srv/kerbnfs4 gss/krb5(rw,sync,fsid=0,crossmnt,no_subtree_check,insecure)
/srv/kerbnfs4/homes gss/krb5(rw,sync,no_subtree_check,insecure)

Running exportfs -rav works:

root@nfsv4test:~# exportfs -rav
exporting gss/krb5:/srv/kerbnfs4/homes
exporting gss/krb5:/srv/kerbnfs4

...and on the client I can view the mounts on the server:

root@nfsv4client:~# showmount -e nfsv4test.subnet.example.org
Export list for nfsv4test.subnet.example.org:
/srv/kerbnfs4/homes gss/krb5
/srv/kerbnfs4       gss/krb5

6a- The krb5.conf has the default config for the environment it was set up for; I haven't changed anything:

[libdefaults]
    ticket_lifetime = 24000
    default_realm = REALM.EXAMPLE.ORG
    default_tgs_entypes = rc4-hmac des-cbc-md5
    default_tkt__enctypes = rc4-hmac des-cbc-md5
    permitted_enctypes = rc4-hmac des-cbc-md5
    dns_lookup_realm = true
    dns_lookup_kdc = true
    dns_fallback = yes

# The following krb5.conf variables are only for MIT Kerberos.
    kdc_timesync = 1
    ccache_type = 4
    forwardable = true
    proxiable = true

# The following libdefaults parameters are only for Heimdal Kerberos.
    fcc-mit-ticketflags = true

[realms]
    REALM.EXAMPLE.ORG = {
        kdc = kdc.realm.example.org
        default_domain = kds.realm.example.org
    }

[domain_realm]
    .realm.example.org = KDC.REALM.EXAMPLE.ORG
    realm.example.org = KDC.REALM.EXAMPLE.ORG

[appdefaults]
pam = {
    debug = false
    ticket_lifetime = 36000
    renew_lifetime = 36000
    forwardable = true
    krb4_convert = false
}

6- Then I set up my sssd.conf like this, but I haven't really understood what's going on here:

[sssd]
domains = realm.example.org
services = nss, pam
config_file_version = 2

[nss]
filter_groups = root
filter_users = root
default_shell = /bin/bash

[pam]
reconnection_retries = 3

[domain/realm.example.org]
krb5_validate = True
krb5_realm = REALM.EXAMPLE.ORG
subdomain_homedir = %o
default_shell = /bin/bash
cache_credentials = True
id_provider = ad
access_provider = ad
chpass_provider = ad
auth_provide = ad
ldap_schema = ad
ad_server = kdc.realm.example.org
ad_hostname = nfsv4test.subnet.example.org
ad_domain = realm.example.org
ad_gpo_access_control = permissive
use_fully_qualified_names = False
ad_enable_gc = False

7- idmap.conf on both machines:

[General]

Verbosity = 0
Pipefs-Directory = /run/rpc_pipefs

Domain = realm.example.org

[Mapping]

Nobody-User = nobody
Nobody-Group = nogroup

8- And /etc/default/nfs-common on both machines:

NEED_STATD=yes
NEED_IDMAPD=yes
NEED_GSSD=yes

9- Last but not least, nfs-kernel-server on the server:

RPCNFSDCOUNT=8
RPCNFSDPRIORITY=0
RPCMOUNTDOPTS="--manage-gids --no-nfs-version 3"
NEED_SVCGSSD="yes"
RPCSVCGSSDOPTS=""

10- Then, after rebooting both server and client, I tried to mount the share (as root user):

mount -t nfs4 -o sec=krb5 nfsv4test.subnet.example.org:/srv/kerbnfs4/homes /media/kerbhomes -vvvv   

But sadly, the mount doesn't work; I don't get access. On the first try it takes quite a long time, and this is the output:

root@nfsv4client:~# mount -t nfs4 -o sec=krb5 nfsv4test.subnet.example.org:/srv/kerbnfs4/homes /media/kerbhomes
mount.nfs4: timeout set for Wed Dec 15 15:38:09 2021
mount.nfs4: trying text-based options 'sec=krb5,vers=4.2,addr=********,clientaddr=*******'
mount.nfs4: mount(2): Permission denied
mount.nfs4: access denied by server while mounting nfsv4test.subnet.example.org:/srv/kerbnfs4/homes

Chapter II: Debugging

For a more detailed log, I ran

rpcdebug -m nfsd -s lockd
rpcdebug -m rpc -s call

on the server, but I don't really get that much in the logs.
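
For more verbose GSS-level output, a sketch of what could be tried (assuming Debian's rpc-gssd/rpc-svcgssd units; the daemons are stopped briefly and run in the foreground):

systemctl stop rpc-gssd && rpc.gssd -f -vvv        # on the client
systemctl stop rpc-svcgssd && rpc.svcgssd -f -vvv  # on the server
rpcdebug -m rpc -s auth                            # kernel-side RPC auth tracing, logged to syslog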

However, when trying to mount, syslog tells me that:

Dec  6 11:20:02 testserver kernel: [ 2088.771800] svc: server 00000000c1c7fb25, pool 0, transport 00000000c5641df0, inuse=2
Dec  6 11:20:02 testserver kernel: [ 2088.771808] svc: svc_authenticate (0)
Dec  6 11:20:02 testserver kernel: [ 2088.771811] svc: calling dispatcher
Dec  6 11:20:02 testserver kernel: [ 2088.771840] svc: server 00000000c1c7fb25, pool 0, transport 00000000c5641df0, inuse=2
Dec  6 11:20:02 testserver kernel: [ 2088.773222] svc: server 00000000c1c7fb25, pool 0, transport 00000000fc9bd395, inuse=2
Dec  6 11:20:02 testserver kernel: [ 2088.774697] svc: server 00000000c1c7fb25, pool 0, transport 00000000fc9bd395, inuse=2
Dec  6 11:20:02 testserver kernel: [ 2088.774705] svc: svc_authenticate (6)
Dec  6 11:20:02 testserver kernel: [ 2088.774711] RPC:       Want update, refage=120, age=0
Dec  6 11:20:02 testserver kernel: [ 2088.774712] svc: svc_process close
[... 7x same message ]
Dec  6 11:20:02 testserver kernel: [ 2088.791514] svc: server 00000000c1c7fb25, pool 0, transport 00000000c5641df0, inuse=2
Dec  6 11:20:02 testserver kernel: [ 2088.791519] svc: svc_authenticate (1)
Dec  6 11:20:02 testserver kernel: [ 2088.791521] svc: authentication failed (1)
Dec  6 11:20:02 testserver kernel: [ 2088.791538] svc: server 00000000c1c7fb25, pool 0, transport 00000000c5641df0, inuse=2
Dec  6 11:20:02 testserver kernel: [ 2088.791913] svc: server 00000000c1c7fb25, pool 0, transport 00000000c5641df0, inuse=2
Dec  6 11:20:02 testserver kernel: [ 2088.791918] svc: svc_authenticate (1)
Dec  6 11:20:02 testserver kernel: [ 2088.791920] svc: authentication failed (1)
Dec  6 11:20:02 testserver kernel: [ 2088.791940] svc: server 00000000c1c7fb25, pool 0, transport 00000000c5641df0, inuse=2
Dec  6 11:20:02 testserver kernel: [ 2088.792292] svc: server 00000000c1c7fb25, pool 0, transport 00000000c5641df0, inuse=2
Dec  6 11:20:02 testserver kernel: [ 2088.792296] svc: svc_authenticate (1)
Dec  6 11:20:02 testserver kernel: [ 2088.792298] svc: authentication failed (1)
Dec  6 11:20:02 testserver kernel: [ 2088.792316] svc: server 00000000c1c7fb25, pool 0, transport 00000000c5641df0, inuse=2

As this didn't really help me at all, I recorded the traffic with tcpdump, which gives me this:

11:12:02.856200 IP ip-client.740 > ip-server.nfs: Flags [S], seq 763536441, win 65160, options [mss 1460,sackOK,TS val 2364952579 ecr 2826266858,nop,wscale 7], length 0
11:12:02.856295 IP ip-server.nfs > ip-client.740: Flags [S.], seq 2444950221, ack 763536442, win 65160, options [mss 1460,sackOK,TS val 2826266858 ecr 2364952579,nop,wscale 7], length 0
11:12:02.856304 IP ip-client.740 > ip-server.nfs: Flags [.], ack 1, win 510, options [nop,nop,TS val 2364952579 ecr 2826266858], length 0
11:12:02.856324 IP ip-client.740 > ip-server.nfs: Flags [P.], seq 1:245, ack 1, win 510, options [nop,nop,TS val 2364952579 ecr 2826266858], length 244: NFS request xid 4035461122 240 getattr fh 0,2/42
11:12:02.856408 IP ip-server.nfs > ip-client.740: Flags [.], ack 245, win 508, options [nop,nop,TS val 2826266858 ecr 2364952579], length 0
11:12:02.856421 IP ip-server.nfs > ip-client.740: Flags [P.], seq 1:25, ack 245, win 508, options [nop,nop,TS val 2826266858 ecr 2364952579], length 24: NFS reply xid 4035461122 reply ERR 20: Auth Bogus Credentials (seal broken)
11:12:02.856425 IP ip-client.740 > ip-server.nfs: Flags [.], ack 25, win 510, options [nop,nop,TS val 2364952579 ecr 2826266858], length 0
11:12:02.867582 IP ip-client.740 > ip-server.nfs: Flags [F.], seq 245, ack 25, win 510, options [nop,nop,TS val 2364952590 ecr 2826266858], length 0
11:12:02.867751 IP ip-server.nfs > ip-client.740: Flags [F.], seq 25, ack 246, win 508, options [nop,nop,TS val 2826266869 ecr 2364952590], length 0
11:12:02.867759 IP ip-client.740 > ip-server.nfs: Flags [.], ack 26, win 510, options [nop,nop,TS val 2364952590 ecr 2826266869], length 0

(I redacted the real IP addresses.)

So the interesting part here is the Auth Bogus Credentials (seal broken) reply. Does it point to something specific, or is it just the generic error that appears when something is wrong? I couldn't find anything helpful about this error on the web.

So to come back to Kerberos itself, the keytab seems to be ok:

root@nfsv4client:~# klist -k -e
Keytab name: FILE:/etc/krb5.keytab
KVNO Principal
---- --------------------------------------------------------------------------
   5 nfs/nfsv4test.subnet.example.org@REALM.EXAMPLE.ORG (aes256-cts-hmac-sha1-96)
   6 host/nfsv4test.subnet.example.org@REALM.EXAMPLE.ORG (aes256-cts-hmac-sha1-96)

On this page it's stated that the keytab file can be tested with

kinit -k `hostname -s`$  

which is equivalent to

kinit -k nfsv4client  

on my machine, but that doesn't work at all:

root@nfsv4client:~# kinit -k nfsv4client
kinit: Keytab contains no suitable keys for nfsv4client@REALM.EXAMPLE.ORG while getting initial credentials.

So what I don't get is why the keytab has no mapping to a user, or maps to the wrong user. That's the point where I'm completely lost.
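
One way to check which identities the keytab can actually authenticate, a sketch that uses a principal klist actually reports rather than the host's short name:

klist -k -e /etc/krb5.keytab
kinit -k nfs/nfsv4test.subnet.example.org@REALM.EXAMPLE.ORG   # principal taken from the klist output
klist                                                         # did we get a TGT for that principal?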

Chapter III: The question

The principals are there in the keytab. So when the client asks the server about the NFS share and tries to access it, both should have the keys to interact with each other. But for some reason it doesn't work. May it be because of the assignment of the principals to the user accounts?

How can I get this to work? How do I get better information when debugging? Sorry for the great wall of text.

PS. I mainly followed this tutorial. It seemed like a perfect match for my environment.

An I/O error occurred while reading from the JWK Set source: PKIX path building failed: unable to find valid certification path to requested target

Posted: 15 Dec 2021 08:31 AM PST

I am trying to test my Spring Boot application locally with http://localhost:8294/test but keep getting the certificate error. It was working until last week and all of a sudden stopped working. I am able to hit the application on remote servers with no issues. I am also able to hit the Actuator via /actuator, which doesn't need any token. My /test endpoint is secured by OAuth2 and I am passing a Bearer token as well.

I wonder why an HTTP endpoint is complaining about a missing certificate.

Please help.
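
The message usually means the JVM cannot build a trust chain for the HTTPS endpoint that serves the JWK Set. A hedged sketch of how the certificate could be added to the JVM trust store (the endpoint host and alias are placeholders; "changeit" is the JDK's default cacerts password):

openssl s_client -connect auth.example.com:443 -showcerts </dev/null 2>/dev/null | openssl x509 -outform PEM > issuer.pem
keytool -importcert -trustcacerts -alias my-issuer -file issuer.pem -keystore "$JAVA_HOME/lib/security/cacerts" -storepass changeit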

Windows clients slowly lose access to network resources until I give them a new MAC address

Posted: 15 Dec 2021 07:41 AM PST

One of my clients has their domain controllers running as 2 VMs on VMware ESXi 5.5 at their head office, and there are 4 other branch offices. All the offices connect back to the head office via site-to-site VPN using Sophos XG/XGS firewalls. DHCP for each office is handled by its local Sophos firewall.

All Branch Offices have distinct network IDs (192.168.x.1).

Branch offices A, B and C have no issues connecting to all domain resources at the Head Office. E.g. domain authentication, network file shares.

Branch office D, however, has fundamental issues. When a new client is first set up at Branch Office D, it works fine. It is able to authenticate and join the domain as well as access network file shares. However, within a month or two, domain connectivity almost completely stops working. The first symptom is that network file shares stop responding. Whenever you try to access a mapped network drive, the green progress bar moves across slowly until it errors out or just displays an empty Explorer window. Domain authentication is the next thing to stop working. Logging into the same PC/server with a domain account becomes impossible because the domain controller cannot be reached. I had set up an RODC at Branch Office D and it barely stayed online properly for one weekend. Thereafter, every logon attempt ended with "RPC failed" on the lock screen.

While doing some on-site troubleshooting the other day on a critical PC that lost network file share access, I decided to partition the hard drive and install a fresh copy of Windows 10 and try to access the same location and determine whether the issues were a result of an issue specific to that Windows instance. However, the test still failed. I then plugged that same network cable into my field laptop and I was able to access the said network shares by providing valid domain credentials. I had earlier tried new IP addresses on affected PCs with no success, so I began to suspect that the MAC address was getting blocked.

I looked up how to manually set a custom MAC address within the Network Card Advanced properties page, and once each PC was granted network access by the firewall, all connectivity was fully restored, including domain traffic and internet access.

I did lots of further testing involving MAC addresses and IP addresses and discovered that only a new combination of a new MAC and a new IP address was being allowed through the firewall properly and completely.

Trouble is that I feel like I will have to assign a new combination of MAC and IP sooner rather than later, once network access goes down again.

I am a very new Sophos user, but I would like to understand what might be going on here. Are there any flood-mitigation or anti-spoofing settings or rules that might be causing this? Branch Offices A, B, and C have the same firewall config and vendor, but none of these issues.

Any and all help will be greatly appreciated!

SIGSEGV when starting OpenDKIM on Raspberry Pi

Posted: 15 Dec 2021 07:38 AM PST

On an old Raspbian (Debian on Raspberry Pi), starting OpenDKIM by hand I get a segmentation fault.

open("/dev/urandom", O_RDONLY|O_LARGEFILE) = 3  fcntl64(3, F_GETFD)                     = 0  fcntl64(3, F_SETFD, FD_CLOEXEC)         = 0  fstat64(3, {st_mode=S_IFCHR|0666, st_rdev=makedev(1, 9), ...}) = 0  getrusage(0x1 /* RUSAGE_??? */, {ru_utime={0, 40000}, ru_stime={0, 30000}, ...}) = 0  getpid()                                = 19730  read(3, "s\276F\10[\36O<x\300\261\200q\240/\31a\354\211$K)\342o\202\255\17\30\231\244\271\32", 32) = 32  --- SIGSEGV {si_signo=SIGSEGV, si_code=SEGV_MAPERR, si_addr=0x10} ---  +++ killed by SIGSEGV +++  Segmentation fault  

Obviously, it seems related to /dev/urandom, or to whatever happens just after reading it. Does anybody have a clue about this? I know the architecture is not so common, but the Raspberry Pi is a good candidate for a local SMTP host :)
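
To narrow the crash down, a sketch of getting a backtrace by running OpenDKIM in the foreground under gdb (the -dbgsym package name is an assumption and may not exist on old Raspbian):

apt-get install gdb opendkim-dbgsym
gdb --args /usr/sbin/opendkim -f -x /etc/opendkim.conf
# inside gdb: run, then after the SIGSEGV: bt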

Teams meeting screen is frozen: "CallingCalendarFromConversation: Prerequisite failed, invalid conversationid or feature flag is off"

Posted: 15 Dec 2021 07:04 AM PST

Our employee's screen freezes during a Teams meeting when he shares his screen. He says he starts the meeting from the Teams calendar, and in the log file I found the error message "..CallingCalendarFromConversation: Prerequisite failed, invalid conversationid or feature flag is off..". What does the error mean, and what can I do so that the screen does not freeze in the future? I told him he should start the meeting from the link or from Outlook, not from the Teams calendar. I would be very grateful for any help.

Regards, Hakikat

Migrating OpenLDAP data from 2.4 to 2.5

Posted: 15 Dec 2021 06:54 AM PST

I have gone through documentation online and on some forums, but I am stuck on importing data from LDAP 2.4 into 2.5 (migrating to a new server as well). Here are the steps I did and the error I am receiving. (There were multiple other errors, but those are fixed now.)

Installation that I performed for 2.5:

sudo ./configure --prefix=/usr --sysconfdir=/etc --disable-static --enable-debug --with-tls=openssl --with-cyrus-sasl --enable-dynamic /
  --enable-crypt --enable-spasswd --enable-slapd --enable-modules --enable-rlookups --enable-overlays=yes --enable-ldap=yes /
  --enable-ppolicy=yes --enable-accesslog=yes --enable-mdb=yes --disable-ndb --disable-sql

Slaptest my slapd.conf file :

slaptest -f /etc/openldap/slapd.conf.template -F /etc/openldap/slapd.d -u
config file testing succeeded

slaptest -f /etc/openldap/slapd.conf.template -F /etc/openldap/slapd.d
config file testing succeeded

After this is completed there is some content under /etc/openldap/slapd.d, and I changed the permission to the ldap user. The content:

'cn=config'
'cn=config.ldif'

Now I want to import my data file with slapadd. With -u (dry run) there are no errors, but without it I am receiving the following:

slapadd -n 1 -F /etc/openldap/slapd.d -l data.ldif
mdb_id2entry_put: mdb_put failed: MDB_KEYEXIST: Key/data pair already exists(-30799) "dc=test,dc=com"
=> mdb_tool_entry_put: id2entry_add failed: err=-30799
=> mdb_tool_entry_put: txn_aborted! MDB_KEYEXIST: Key/data pair already exists (-30799)
slapadd: could not add entry dn="dc=test,dc=com" (line=1): txn_aborted! MDB_KEYEXIST: Key/data pair already exists (-30799)
Closing DB...

Any suggestions, please? Thanks.
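
MDB_KEYEXIST on the suffix entry suggests slapadd is writing into a database that already contains data, for example from an earlier, partially successful run. A sketch of re-importing into a clean database, assuming the data directory configured for the database is /usr/var/openldap-data (adjust to your slapd.conf directory / olcDbDirectory):

systemctl stop slapd
mv /usr/var/openldap-data /usr/var/openldap-data.bak && mkdir /usr/var/openldap-data
chown ldap:ldap /usr/var/openldap-data
slapadd -n 1 -F /etc/openldap/slapd.d -l data.ldif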

Unable to add Domain user to SharePoint Farm Administrator

Posted: 15 Dec 2021 06:41 AM PST

We have SharePoint 2019 installed on a domain-joined machine. When we try to add a domain user, it does not show up in the results; only local users and groups are shown. Any clue?

Routing traffic via proxy to a specific adapter?

Posted: 15 Dec 2021 08:54 AM PST

I am using a Squid proxy running on 127.0.0.1:3128, and I am trying to route all traffic that goes through the proxy server out of a specific network adapter.

Command:

iptables -t nat -A PREROUTING --dst 127.0.0.1 -p tcp --dport 3128 -j DNAT --to-destination 192.168.43.76:3128  

I have tried the above with no luck, 192.168.43.76 being the local IPv4 address of the adapter.

ip r show table all displays:

default via 127.0.0.1 dev lo table 3
default via 192.168.1.254 dev enp4s0 proto dhcp metric 100
default via 192.168.43.1 dev wlp3s0 proto dhcp metric 600
169.254.0.0/16 dev enp4s0 scope link metric 1000
192.168.1.0/24 dev enp4s0 proto kernel scope link src 192.168.1.210 metric 100
192.168.43.0/24 dev wlp3s0 proto kernel scope link src 192.168.43.76 metric 600
broadcast 127.0.0.0 dev lo table local proto kernel scope link src 127.0.0.1
local 127.0.0.0/8 dev lo table local proto kernel scope host src 127.0.0.1
local 127.0.0.1 dev lo table local proto kernel scope host src 127.0.0.1
broadcast 127.255.255.255 dev lo table local proto kernel scope link src 127.0.0.1
broadcast 192.168.1.0 dev enp4s0 table local proto kernel scope link src 192.168.1.210
local 192.168.1.210 dev enp4s0 table local proto kernel scope host src 192.168.1.210
broadcast 192.168.1.255 dev enp4s0 table local proto kernel scope link src 192.168.1.210
broadcast 192.168.43.0 dev wlp3s0 table local proto kernel scope link src 192.168.43.76
local 192.168.43.76 dev wlp3s0 table local proto kernel scope host src 192.168.43.76
broadcast 192.168.43.255 dev wlp3s0 table local proto kernel scope link src 192.168.43.76
2a00:23c7:3c05:7c01::/64 dev enp4s0 proto ra metric 100 pref medium
fe80::/64 dev enp4s0 proto kernel metric 100 pref medium
fe80::/64 dev wlp3s0 proto kernel metric 600 pref medium
default via fe80::8e19:b5ff:fe44:5852 dev enp4s0 proto ra metric 20100 pref medium
local ::1 dev lo table local proto kernel metric 0 pref medium
local 2a00:23c7:3c05:7c01:3763:cf65:958b:dfe9 dev enp4s0 table local proto kernel metric 0 pref medium
local 2a00:23c7:3c05:7c01:d5e5:e0ff:34a5:d2b9 dev enp4s0 table local proto kernel metric 0 pref medium
local fe80::54c1:a358:586:b470 dev wlp3s0 table local proto kernel metric 0 pref medium
local fe80::ae7d:6464:5c50:e88f dev enp4s0 table local proto kernel metric 0 pref medium
multicast ff00::/8 dev enp4s0 table local proto kernel metric 256 pref medium
multicast ff00::/8 dev wlp3s0 table local proto kernel metric 256 pref medium

I'm trying to use default via 192.168.43.1 dev wlp3s0 proto dhcp metric 600 (the WiFi adapter) for the proxy traffic, while allowing everything outside the proxy to keep using the Ethernet / default adapter.
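
Rather than DNAT, this is usually done with policy routing: mark the packets Squid generates and send marked traffic out through a dedicated routing table. A sketch, assuming Squid runs as the user "proxy" and that routing table 100 is free:

iptables -t mangle -A OUTPUT -m owner --uid-owner proxy -j MARK --set-mark 0x1
ip rule add fwmark 0x1 table 100
ip route add default via 192.168.43.1 dev wlp3s0 table 100
sysctl -w net.ipv4.conf.wlp3s0.rp_filter=2   # relax reverse-path filtering if replies get dropped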

What could be the cause of this Chkdsk mid-message deadlock?

Posted: 15 Dec 2021 05:55 AM PST

What could cause the following chkdsk result:

chkdsk /b /f /v /scan c:

The type of the file system is NTFS.
Cannot lock current drive.

Chkdsk cannot run because the volume is in use by another
process.  Would you like to schedule this volume to be

This is normal when trying to check the system drive (you're asked to re-schedule it for the next reboot). The message normally continues with

checked the next time the system restarts? (Y/N) Y  

but the third line in my case is missing. Chkdsk never prints it and won't respond to user input, so I can't schedule the custom scan. Chkdsk appears to be stuck in a deadlock. It's not waiting on any resources, CPU usage is 0%, memory usage only 920KiB.
Here's a stack trace for the input thread:

ntoskrnl.exe!KeSynchronizeExecution+0x2106
ntoskrnl.exe!KeWaitForMultipleObjects+0x135e
ntoskrnl.exe!KeWaitForMultipleObjects+0xdd9
ntoskrnl.exe!KeWaitForMutexObject+0x373
ntoskrnl.exe!KeStallWhileFrozen+0x1feb
ntoskrnl.exe!KeIsAttachedProcess+0x229
ntoskrnl.exe!KeWaitForMultipleObjects+0x152f
ntoskrnl.exe!KeWaitForMultipleObjects+0xdd9
ntoskrnl.exe!KeWaitForMutexObject+0x373
ntoskrnl.exe!NtWaitForSingleObject+0xb2
ntoskrnl.exe!setjmpex+0x6553
ntdll.dll!ZwWaitForSingleObject+0xa
ifsutil.dll!BLOCK_CACHE::Initialize+0x2fe
KERNEL32.DLL!BaseThreadInitThunk+0x22
ntdll.dll!RtlUserThreadStart+0x34

And here's one for the main thread:

ntoskrnl.exe!KeSynchronizeExecution+0x2106
ntoskrnl.exe!KeWaitForMultipleObjects+0x135e
ntoskrnl.exe!KeWaitForMultipleObjects+0xdd9
ntoskrnl.exe!KeWaitForMutexObject+0x373
ntoskrnl.exe!FsRtlInitializeOplock+0x3d1
ntoskrnl.exe!NtReadFile+0x664
ntoskrnl.exe!setjmpex+0x6553
ntdll.dll!ZwReadFile+0xa
KERNELBASE.dll!ReadFile+0x78
ulib.dll!KEYBOARD::EnableLineMode+0xc8
ulib.dll!PROGRAM::GetStandardOutput+0x18e
ulib.dll!STREAM::ReadLine+0x13d
ulib.dll!CHKDSK_MESSAGE::IsYesResponse+0x232
ulib.dll!CHKDSK_MESSAGE::IsYesResponse+0xbd
UNTFS.DLL!ChkdskEx+0x61e
chkdsk.exe+0x2c4f
chkdsk.exe+0x3e6c
KERNEL32.DLL!BaseThreadInitThunk+0x22
ntdll.dll!RtlUserThreadStart+0x34

why don't DHCP discover/ARP messages amplify and reverberate in WANs?

Posted: 15 Dec 2021 06:17 AM PST

I don't understand how the ISP can assign Public IPs to routers that newly join their network, without having DHCP or ARP messages amplified millionfold.

As far as I know, for an L3 router to join a network at all, the joining device has to talk to a DHCP server to get an IP address. DHCP discover messages are broadcast to MAC FF:FF:FF:FF:FF:FF and to the whole subnet. So if the router is newly connected to a WAN with thousands, if not hundreds of thousands, of other routers, I would imagine the result to be a DHCP discover message that reverberates and amplifies until its TTL expires, which is certain either to a) fail to reach its target or b) cause millions, if not billions, of other messages.

Along the same lines, I can apply the same argument to ARP. ARP messages are broadcast around the network as well, just like the DHCP discover, so the same set of problems would arise.

I can probably apply the same argument to messages used by the Network layer to coordinate its routers with distance-vector algorithm, unless the routers are somehow organized in a tree or graph-like manner, but I digress.

Where have I gone wrong?

Connect SSH Tunnel with the Java Desktop program (.jar) to remote server

Posted: 15 Dec 2021 07:45 AM PST

I developed a JavaFX desktop program for the employees of the company. Now they want to use the program at home on their own personal computers. The program uses MySQL and FTP services.

I need to use an SSH tunnel or VPN so that the program can connect from outside to the remote server in the office (port forwarding for FTP and MySQL).

If I want to use an SSH tunnel, I have to install (or copy/paste) the certificates on the employees' own computers, and I think this option is dangerous because the certificates could be compromised if their computers are attacked.

Sometimes I have thought of creating one certificate for each employee (100 people) to better control who is connected at any time, but that is too laborious to maintain.

I would like to use an SSH tunnel, but I don't know if it is the best option in this situation.

What other options can I use to connect my program to remote server securely?
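
For comparison, a sketch of what the per-user SSH tunnel would look like from the client side (gateway.example.com, db-host and ftp-host are placeholders; note that plain FTP's separate data channel does not tunnel cleanly, so SFTP over the same SSH connection may be simpler):

ssh -N -o ExitOnForwardFailure=yes \
    -L 3306:db-host:3306 \
    -L 2121:ftp-host:21 \
    tunneluser@gateway.example.com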

Simple solution for PDF storage

Posted: 15 Dec 2021 05:47 AM PST

I'm trying to build a PDF store that can be accessed by anyone who has the URL to a specific PDF file. Basically I want to be able to hand out just a URL to a person, and that URL leads to the PDF file. I'm looking for a simple and efficient solution. I have 2 VPS servers, each running Ubuntu.

(Each PDF file needs to be accessed by URL)
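
A minimal sketch of the plain-nginx approach on one of the VPSes (the web root is the Ubuntu default and the hostname is a placeholder):

apt-get install nginx
mkdir -p /var/www/html/pdfs
cp report.pdf /var/www/html/pdfs/
curl -I http://vps1.example.com/pdfs/report.pdf   # the file is now reachable at this URL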

Can't connect to mysql docker when using phpmyadmin docker

Posted: 15 Dec 2021 09:16 AM PST

I'm just getting started with Docker, and maybe I'm starting off a little big, but I found an article that explained how to get a ColdFusion install (run by CommandBox) up with MySQL. That docker compose works just fine. I had the idea of adding in phpMyAdmin so that I can use it to connect to MySQL.

For reference the original article is here: https://cfswarm.inleague.io/part3-docker-in-development/part3-running-docker

So I modified the docker compose YAML to pull in phpMyAdmin:

version: '3.6'  # if no version is specificed then v1 is assumed. Recommend v2 minimum

volumes:
  sql-data:

networks:
  cfswarm-simple:

secrets:
  cfconfig:
    file: ./config/cfml/cfconfig.json

services:

  cfswarm-mysql:
    # a friendly name. this is also DNS name inside network
    image: mysql:5.7
    container_name: cfswarm-mysql
    environment:
      MYSQL_ROOT_PASSWORD: 'myAwesomePassword'
      MYSQL_DATABASE: 'cfswarm-simple-dev'
      MYSQL_ROOT_HOST: '%'
      MYSQL_LOG_CONSOLE: 'true'
    volumes:
      - type: volume
        source: sql-data
        target: /var/lib/mysql
    ports:
      - 3306:3306
    networks:
      - cfswarm-simple

  cfswarm-cfml:
    image: ortussolutions/commandbox:alpine
    container_name: cfswarm-cfml
    volumes:
      - type: bind
        source: ./app-one
        target: /app
    ports:
      - 8081:8080
    env_file:
      - ./config/cfml/simple-cfml.env
    secrets:
      - source: cfconfig # this isn't really a secret but non-stack deploys don't support configs so let's make it one
        target: cfconfig.json
    networks:
      - cfswarm-simple
    depends_on:
      - cfswarm-mysql
      - cfswarm-nginx

  cfswarm-two-cfml:
    image: ortussolutions/commandbox:alpine
    container_name: cfswarm-two-cfml
    volumes:
      - type: bind
        source: ./app-two
        target: /app
    env_file:
      - ./config/cfml/simple-cfml.env
    secrets:
      - source: cfconfig # this isn't really a secret but non-stack deploys don't support configs so let's make it one
        target: cfconfig.json
    depends_on:
      - cfswarm-mysql
      - cfswarm-nginx
    networks:
      - cfswarm-simple

  phpmyadmin:
    image: phpmyadmin/phpmyadmin:latest
    container_name: phpmyadmin
    restart: always
    environment:
      PMA_HOST: cfswarm-mysql
      PMA_USER: root
      PMA_PASSWORD: 'myAwesomePassword'
    ports:
      - "8082:80"

  cfswarm-nginx:
    image: nginx
    command: [nginx-debug, '-g', 'daemon off;']
    container_name: cfswarm-nginx
    ports:
      - 80:80
      - 443:443
    volumes:
      - type: bind
        source: ./app-one
        target: /var/www/app-one
      - type: bind
        source: ./app-two
        target: /var/www/app-two
      - type: bind
        source: ./nginx/
        target: /etc/nginx
    networks:
      - cfswarm-simple

So right at line 63 I added in the pull for phpmyadmin, which appears to work; it answers on port 8082 but gives me an error:

MySQL said: Documentation

Cannot connect: invalid settings.
mysqli::real_connect(): php_network_getaddresses: getaddrinfo failed: Temporary failure in name resolution
mysqli::real_connect(): (HY000/2002): php_network_getaddresses: getaddrinfo failed: Temporary failure in name resolution
phpMyAdmin tried to connect to the MySQL server, and the server rejected the connection. You should check the host, username and password in your configuration and make sure that they correspond to the information given by the administrator of the MySQL server.

The one thing that I could not get to work was adding phpmyadmin to the cfswarm-simple network. When I tried to add the line right under ports (line 72), I would get an error when trying to start the docker compose.

Right now, I'd like to be able to connect to the MySQL container from the phpMyAdmin container.

TIA
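
As a stop-gap while the compose file is sorted out, a running container can be attached to the compose network after the fact; a sketch (the network name carries the compose project prefix, usually the directory name):

docker network ls                                     # find the real name, e.g. myproject_cfswarm-simple
docker network connect myproject_cfswarm-simple phpmyadmin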

Two PDCs, two ADs, two domains - how to replicate one domain/AD to the other?

Posted: 15 Dec 2021 07:37 AM PST

Here's the history:

SERVER2 was a 2016 Essentials Edition server, standalone with no other DCs. The OS became corrupted in a few areas, and so a decision was made to replace it. A standalone clean install wasn't an option, as applications running on member servers rely heavily on AD user SIDs.

So a second DC was introduced, SERVER3, and the domain/AD/DNS/PDC/fsmo were replicated from SERVER2 to SERVER3. Metadata cleanup was performed on SERVER3 to rid it of any old references to SERVER2. SERVER2 has now been taken permanently offline.

A brand new SERVER2 Essentials Edition has been configured, and it has its own domain/AD/DNS/PDC/fsmo. The display names of the two domains are the same, but the underlying ADs are of course different.

How do I make the new SERVER2 a BDC for SERVER3, replicate everything from SERVER3 to the new SERVER2, and then promote the new SERVER2 to be PDC?

I had some expert assistance to get this far, but unfortunately the tech has been called away. I'm now on my own, mid-project.

Please advise.

--EDIT--

I found this guidance, but it doesn't seem to take into account that I have two PDCs on separate existing domains.

Sinatra + Thin + Nginx connect() failed (111: Connection refused) while connecting to upstream

Posted: 15 Dec 2021 06:14 AM PST

I have a Sinatra app that runs on Thin with Nginx as a reverse proxy and receives a lot of traffic. My users are reporting 502 errors, and looking at the Nginx logs I see a lot of these:

[warn] upstream server temporarily disabled while connecting to upstream
[error] connect() failed (111: Connection refused) while connecting to upstream

If I look at the logs from the Sinatra app I see no errors.

I am starting Thin server with the following:

--max-conns 15360 --max-persistent-conns 2048 --threaded start  

I have set the following for Nginx:

worker_processes  auto;
worker_rlimit_nofile 65535;

events {
    worker_connections  15360;
}

The host file for the Sinatra app:

server {
    server_name my_sinatra_app;

    #lots of bots try to find vulnerabilities in php sites
    location ~ \.php {
        return 404;
    }

    location / {
        proxy_pass http://localhost:6903;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_cache_bypass $http_upgrade;

        #increase buffers
        proxy_buffer_size          128k;
        proxy_buffers              4 256k;
        proxy_busy_buffers_size    256k;
    }

    listen 443 ssl; # managed by Certbot
    #...
    #SSL stuff
}

Why is this happening? Too much traffic?

What's the solution? Do I keep increasing the worker_connections and --max-conns until the errors stop?

The output of htop suggests the server can handle more:

(htop screenshot)

Any insight/advice?

EDIT

While I don't see any errors in the Sinatra log or systemctl status output, I did notice that the service never runs for very long, so it seems the Thin server is crashing often. Any idea how I can debug this further?
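
A sketch of two quick checks (the thin unit name is an assumption; substitute whatever unit actually runs the app):

journalctl -u thin.service -f        # follow the service log to catch the crash output
ss -tlnp | grep 6903                 # is anything still listening on the upstream port between crashes?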

How to determine why a static external IP address changed in GCP?

Posted: 15 Dec 2021 07:49 AM PST

I noticed that a static external IP address changed in GCP in our project. I'm trying to determine why and when, and I'm not finding any useful information in the Google console.

Is there any way to view the history of an external IP? Creation date, deletion date, etc.?

Has anyone heard or experienced Google changing a static external IP address? If so, what were the circumstances?

Edit:

To clarify, yes this is a reserved static IP address. I'm thinking that some piece of automation deleted it and re-created it at some point, hence the question about any history that Google keeps around these addresses. We are just having trouble tracking down what happened, so that we can ensure it doesn't happen again.

The only other possibility I can think of is a bug on Google's side, hence the question about anyone having heard of this happening.

It had sat unused for a while, which would allow the possibility of either of those things.
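
For what it's worth, a sketch of where the address metadata and admin activity could be inspected with gcloud (the address name and region are placeholders, and the audit-log filter syntax may need adjusting):

gcloud compute addresses list
gcloud compute addresses describe my-address --region=us-central1    # shows creationTimestamp, status, users
gcloud logging read 'protoPayload.methodName:"compute.addresses"' --freshness=30d --limit=50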

linux bridge two NICs with multiple VLANs and assign virtual IP

Posted: 15 Dec 2021 07:53 AM PST

I'm trying to do some testing of Linux bridging. I have a server with two NICs (eth1/eth2) that I want to bridge together, use multiple VLAN tags, and assign an IP to a virtual interface in each VLAN for me to ping.

I have this so far:

ip link add br0 type bridge vlan_filtering 1
bridge vlan add dev br0 vid 1000 self
bridge vlan add dev br0 vid 1001 self
bridge vlan add dev eth1 vid 1000 pvid
bridge vlan add dev eth2 vid 1000 pvid
bridge vlan add dev eth1 vid 1001 pvid
bridge vlan add dev eth2 vid 1001 pvid

The bridge looks OK to me:

bash-5.0# bridge vlan
port    vlan ids
eth1     1000 PVID
         1001 PVID
eth2     1000 PVID
         1001 PVID
br0      1000 PVID
         1001 PVID

But now I want to put something I can ping into VLAN 1000 and VLAN 1001 to test. I was trying to do this with a dummy interface but can't seem to make that work.

Any tips? I believe the bridge config is good. We're expecting everything to be tagged.
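
One sketch of getting something pingable per VLAN: give the bridge itself 802.1Q sub-interfaces and put an address on each (the addresses are placeholders):

ip link add link br0 name br0.1000 type vlan id 1000
ip link add link br0 name br0.1001 type vlan id 1001
ip addr add 10.0.0.1/24 dev br0.1000
ip addr add 10.0.1.1/24 dev br0.1001
ip link set br0 up; ip link set br0.1000 up; ip link set br0.1001 up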

Ingress nginx-controller - failed for volume “webhook-cert”

Posted: 15 Dec 2021 08:54 AM PST

I ran kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.48.1/deploy/static/provider/aws/deploy.yaml

But it didn't run.

Events:
Type     Reason       Age                     From               Message
----     ------       ----                    ----               -------
Normal   Scheduled    8m56s                   default-scheduler  Successfully assigned ingress-nginx/ingress-nginx-controller-68649d49b8-g5r58 to ip-10-40-0-32.ap-northeast-2.compute.internal
Warning  FailedMount  8m56s (x2 over 8m56s)   kubelet            MountVolume.SetUp failed for volume "webhook-cert" : secret "ingress-nginx-admission" not found
Normal   Killing      7m56s                   kubelet            Container controller failed liveness probe, will be restarted
Normal   Pulled       7m45s (x2 over 8m54s)   kubelet            Container image "k8s.gcr.io/ingress-nginx/controller:v0.48.1@sha256:e9fb216ace49dfa4a5983b183067e97496e7a8b307d2093f4278cd550c303899" already present on machine
Normal   Created      7m45s (x2 over 8m54s)   kubelet            Created container controller
Normal   Started      7m45s (x2 over 8m53s)   kubelet            Started container controller
Warning  Unhealthy    7m16s (x7 over 8m36s)   kubelet            Liveness probe failed: HTTP probe failed with statuscode: 500
Warning  Unhealthy    3m46s (x30 over 8m36s)  kubelet            Readiness probe failed: HTTP probe failed with statuscode: 500

logs...

Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
{"level":"info",
 "msg":"patching webhook configurations 'ingress-nginx-admission' mutating=false, validating=true, failurePolicy=Fail",
 "source":"k8s/k8s.go:39",
 "time":"2021-08-17T18:08:40Z"
}
{"err":"the server could not find the requested resource",
 "level":"fatal",
 "msg":"failed getting validating webhook",
 "source":"k8s/k8s.go:48","time":"2021-08-17T18:08:40Z"
}

I tried changing the deployment's --ingress-class=nginx to --ingress-class=nginx2, installing v0.35, and also kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.48.1/deploy/static/provider/baremetal/deploy.yaml

But the same error repeats.

Environment: kubeadm version v1.22.0, Docker version 20.10.7, OS Ubuntu. I am using an AWS EC2 instance.
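
The "secret ingress-nginx-admission not found" event suggests the admission-webhook jobs in that manifest never completed. A sketch of what could be checked (the job name is the one used in that manifest version):

kubectl -n ingress-nginx get jobs,secrets
kubectl -n ingress-nginx logs job/ingress-nginx-admission-create
kubectl delete -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.48.1/deploy/static/provider/aws/deploy.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.48.1/deploy/static/provider/aws/deploy.yaml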

Docker without sudo in Ubuntu 20.04?

Posted: 15 Dec 2021 08:01 AM PST

I've just installed Docker on Ubuntu 20.04 and noticed that docker must be run with sudo.

wolf@linux:~$ docker ps
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/json: dial unix /var/run/docker.sock: connect: permission denied
wolf@linux:~$

wolf@linux:~$ sudo docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
wolf@linux:~$

Found this tutorial and tried to follow it

Step 2 — Executing the Docker Command Without Sudo (Optional)

https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-20-04

wolf@linux:~$ sudo usermod -aG docker ${USER}
wolf@linux:~$ su - ${USER}
Password:
wolf@linux:~$

It seems to be fine here.

wolf@linux:~$ id -nG
wolf docker
wolf@linux:~$

wolf@linux:~$ docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
wolf@linux:~$

However, when I open another terminal, it doesn't work anymore; I'm getting a similar error to the one above.
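
Group membership is only picked up by new login sessions, so a sketch of what could be checked in the second terminal:

newgrp docker                # activate the new group in the current shell, or log out and back in
id -nG                       # does this terminal's session actually list the docker group?
ls -l /var/run/docker.sock   # the socket should be owned by root:docker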

Kubernetes Pod OOMKilled Issue

Posted: 15 Dec 2021 09:01 AM PST

The scenario: we run some websites based on an nginx image in a Kubernetes cluster. When our cluster was set up with nodes of 2 cores and 4GB RAM each, the pods had the following configuration: cpu: 40m and memory: 100MiB. Later, we upgraded our cluster to nodes with 4 cores and 8GB RAM each, but then kept getting OOMKilled in every pod. So we increased memory on every pod to around 300MiB, and then everything seems to be working fine.

My question is why this happens and how I can solve it. P.S. If we revert back to each node having 2 cores and 4GB RAM, the pods work just fine with the decreased resources of 100MiB.
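
To see what the containers actually consume versus their limits before picking new numbers, a sketch (requires metrics-server; the namespace and pod name are placeholders):

kubectl top pod -n my-namespace
kubectl describe pod my-pod -n my-namespace   # look for "Last State: Terminated, Reason: OOMKilled" and the configured limits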

[alert]: fastcgi request record is too big - FastCGI receives an error message that the GET request is too long

Posted: 15 Dec 2021 08:04 AM PST

Recently the program ran into a problem in an LNMP environment: PHP in the program builds a GET request whose URI parameters are too long, resulting in an error. On the Nginx side an over-long URI can be handled with the client_header_buffer_size / large_client_header_buffers parameters, but when Nginx forwards the URI to PHP, FastCGI cannot handle the request and errors out (changing the request to POST is not an option).

System:linux CentOS7 nginx:1.14.0 php:7.2.0

Apache solution:

Add a few parameters to the configuration file: LimitRequestLine 40940 and LimitRequestFieldSize 40940.

Nginx solution:

How do I increase the FastCGI limit so it accepts a larger URI parameter size?

user              test;
worker_processes  2;
worker_cpu_affinity 0101 1010;
worker_rlimit_nofile 65535;

events {
    use epoll;
    worker_connections  65535;
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    include              mime.types;
    default_type application/octet-stream;
    charset utf-8;
    sendfile                     on;
    tcp_nopush                   on;
    keepalive_timeout          300s;
    client_header_timeout      300s;
    client_body_timeout        300s;
    client_max_body_size       100m;
    client_body_buffer_size   2048k;
    client_header_buffer_size  1024k;
    large_client_header_buffers 32 64k;
    send_timeout               300s;
    fastcgi_connect_timeout    300s;
    fastcgi_send_timeout       300s;
    fastcgi_read_timeout       300s;
    fastcgi_buffer_size       1024k;
    fastcgi_buffers          64 64k;
    fastcgi_busy_buffers_size  2048k;
    fastcgi_temp_file_write_size 2048k;
    gzip                         on;
    gzip_min_length              1k;
    gzip_buffers             16 16k;
    gzip_http_version           1.1;
    gzip_types text/plain text/css application/javascript application/xml;
    gzip_comp_level               3;
    gzip_vary                    on;
    server_names_hash_max_size  512;
    server_names_hash_bucket_size  128;
    include /*.conf;
}

Fastcgi error log [1]: https://i.stack.imgur.com/lzV4H.png

Please Help!!! SOS

Exchange 2010: Disable meeting requests on Room Mailbox

Posted: 15 Dec 2021 07:07 AM PST

In Exchange we would like only the persons who have the relevant permissions to be able to book (meeting) requests for the room.

Without permissions, users cannot add a meeting, but they can still send a meeting request; this should be disabled. If possible, users without permissions should also not be able to add the room.

At this very moment, everyone can add the room and send a meeting request. This shows up in the room as "Temporary".

Only accounts that have "create items" enabled in the permissions should be able to see the room and add meetings to it.

Enabling or disabling In- or out-policy meeting requests does not do the trick.

How to list users with role cluster-admin in OpenShift?

Posted: 15 Dec 2021 06:10 AM PST

I can add users to the cluster-role "cluster-admin" with:

oc adm policy add-cluster-role-to-user cluster-admin <user>  

But how can I list all users with the role cluster-admin?

Environment: OpenShift 3.x
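
A sketch of how the bindings could be inspected on OpenShift 3.x (the exact output format differs slightly between minor versions):

oc get clusterrolebindings | grep cluster-admin
oc describe clusterrolebinding cluster-admin      # lists the users and groups bound to the role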

Powershell Set/Get-GPPermission missing from Group Policy on Windows 10

Posted: 15 Dec 2021 08:04 AM PST

Recently I updated from Windows 7 Enterprise to Windows 10 Enterprise and went to run a script that has a call to Get-GPPermission, and it errored out because that command is missing. Edit: Set-GPPermission is also missing.

Checking the commands in the GroupPolicy module shows that yes, it is missing:

PS C:\WINDOWS\system32> get-command -Module grouppolicy

CommandType     Name                           Version    Source
-----------     ----                           -------    ------
Cmdlet          Backup-GPO                     1.0.0.0    GroupPolicy
Cmdlet          Copy-GPO                       1.0.0.0    GroupPolicy
Cmdlet          Get-GPInheritance              1.0.0.0    GroupPolicy
Cmdlet          Get-GPO                        1.0.0.0    GroupPolicy
Cmdlet          Get-GPOReport                  1.0.0.0    GroupPolicy
Cmdlet          Get-GPPrefRegistryValue        1.0.0.0    GroupPolicy
Cmdlet          Get-GPRegistryValue            1.0.0.0    GroupPolicy
Cmdlet          Get-GPResultantSetOfPolicy     1.0.0.0    GroupPolicy
Cmdlet          Get-GPStarterGPO               1.0.0.0    GroupPolicy
Cmdlet          Import-GPO                     1.0.0.0    GroupPolicy
Cmdlet          New-GPLink                     1.0.0.0    GroupPolicy
Cmdlet          New-GPO                        1.0.0.0    GroupPolicy
Cmdlet          New-GPStarterGPO               1.0.0.0    GroupPolicy
Cmdlet          Remove-GPLink                  1.0.0.0    GroupPolicy
Cmdlet          Remove-GPO                     1.0.0.0    GroupPolicy
Cmdlet          Remove-GPPrefRegistryValue     1.0.0.0    GroupPolicy
Cmdlet          Remove-GPRegistryValue         1.0.0.0    GroupPolicy
Cmdlet          Rename-GPO                     1.0.0.0    GroupPolicy
Cmdlet          Restore-GPO                    1.0.0.0    GroupPolicy
Cmdlet          Set-GPInheritance              1.0.0.0    GroupPolicy
Cmdlet          Set-GPLink                     1.0.0.0    GroupPolicy
Cmdlet          Set-GPPrefRegistryValue        1.0.0.0    GroupPolicy
Cmdlet          Set-GPRegistryValue            1.0.0.0    GroupPolicy

Here's the version table:

PS C:\WINDOWS\system32> $PSVersionTable

Name                           Value
----                           -----
PSVersion                      5.1.14393.693
PSEdition                      Desktop
PSCompatibleVersions           {1.0, 2.0, 3.0, 4.0...}
BuildVersion                   10.0.14393.693
CLRVersion                     4.0.30319.42000
WSManStackVersion              3.0
PSRemotingProtocolVersion      2.3
SerializationVersion           1.1.0.1

The latest documentation I can find (posted last month) shows the command is still there: https://technet.microsoft.com/itpro/powershell/windows/group-policy/index

Note: it appears that Microsoft has broken backwards compatibility; the calls were named Get-GPPermissions and Set-GPPermissions in the Group Policy module with PowerShell 4, and now they dropped the 's' so both are named in the singular, Get-GPPermission and Set-GPPermission.

Anyone know how I can re-install the module?

Edit: the module re-install was easy; it was just a case of uninstalling RSAT and then re-installing it. Sadly the command is still not showing up, so my question should now be how to regain the missing commands.

Load balancing between two (or more) GRE tunnels

Posted: 15 Dec 2021 07:07 AM PST

I have a hosted service (think zScaler™) that is having me send my traffic to it via GRE tunnels. I am given two appliances and want to load balance my traffic between the two tunnels.

I could always statically carve out the network but I would rather not do that.

My proposed solution is to create two equal-cost routes over the two tunnels, but wouldn't this balance on a per-packet basis? Then some of a stream would go through one tunnel and some through the other. I want to avoid this since it makes troubleshooting difficult, will cause issues with the appliances tracking connections, and will likely cause issues with SSL inspection.

Is there a way, either appliance based or otherwise (I own the security equipment and can stand a load balancer up in front of it) to balance GRE tunnels based on the source IP of the originating client? Therefore client X always goes through GRE tunnel A and client Y goes through GRE tunnel B.

My networking equipment is standard Cisco L3 Switches and ASAs.

Freebsd change default Internet channel route

Posted: 15 Dec 2021 09:01 AM PST

I have two Internet channel and Gateway on freebsd. When I switch channel with the command route change default chan2, the command netstat -nr shows changed default route. But traceroute shows that the packets go through the old route chan1.

Example:

$netstat -nr
Routing tables
Internet:
Destination        Gateway            Flags    Refs    Use     Netif  Expire
default            xxx.xxx.183.54     US       0       8432    em3

$sudo route change default xxx.xxx.144.125
change net default: gateway> xxx.xxx.144.125

$netstat -nr
Routing tables
Internet:
Destination        Gateway            Flags    Refs    Use     Netif  Expire
default            xxx.xxx.144.125    US       2       16450   em3

BUT

$ traceroute 8.8.8.8
traceroute to 8.8.8.8 (8.8.8.8), 64 hops max, 52 byte packets
 1  xxx.xxx.183.53 (xxx.xxx.183.53)  0.527 ms  0.415 ms  0.483 ms

Everything works if I run the following combination:

$sudo route del default

$sleep 10

$sudo route add default xxx.xxx.144.125

svn: Too many arguments to import command

Posted: 15 Dec 2021 06:05 AM PST

I'm having a problem with the --message flag of the svn import command. On some servers it works, but on others it gets confused if the message contains spaces, even if you single- or double-quote the message string, thus:

    svn import -m 'New stuff added' https://my-remote-repo/SVN/repo/path  

When it fails, I get the error:

    svn: Too many arguments to import command  

If I limit the message to one without any spaces, it succeeds every time. Clearly the problem is the command failing to recognise a quoted string, but why?

The difference between whether it succeeds or not seems to come down to the particular OS/shell combination I'm using. The command works on SUSE 10.3 with ksh Version M 93s+ 2008-01-31, but fails on RHEL 5.6 with ksh Version AJM 93t+ 2010-02-02. Or perhaps that's a red herring, and the real problem is something else differing between the environments?
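
One way to take shell quoting out of the equation entirely is to pass the log message from a file; a sketch (the file path is arbitrary):

printf 'New stuff added' > /tmp/msg.txt
svn import -F /tmp/msg.txt . https://my-remote-repo/SVN/repo/path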

How to monitor mysql slow log and send mail to alert?

Posted: 15 Dec 2021 06:05 AM PST

I have enabled the MySQL slow query log on an Ubuntu server. I would like to get an email alert containing the slow SQL whenever a slow query appears, so that I can optimize the SQL. I need a lightweight solution.
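
A minimal sketch of one lightweight approach: a cron job that mails only the part of the slow log added since the previous run (the log path, state file and address are placeholders; it needs a working mail command, e.g. from mailutils):

#!/bin/sh
# mail the newly appended portion of the MySQL slow log, if any
SLOW_LOG=/var/log/mysql/mysql-slow.log
STATE=/var/tmp/slowlog.offset
NEW=$(wc -c < "$SLOW_LOG")
OLD=$(cat "$STATE" 2>/dev/null || echo 0)
if [ "$NEW" -gt "$OLD" ]; then
    tail -c +"$((OLD + 1))" "$SLOW_LOG" | mail -s "MySQL slow queries on $(hostname)" admin@example.com
fi
echo "$NEW" > "$STATE"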
