Saturday, August 14, 2021

Recent Questions - Unix & Linux Stack Exchange


Turn String into Command Unix

Posted: 14 Aug 2021 09:57 AM PDT

I have a variable that contains a string:

LIST='find . -type f -mmin 50'

How do I turn that string into a command? Is it fine to use eval, or is there any substitute for eval? I've read that eval is not recommended.

eval $LIST  

I'm asking because I plan to run the command below:

LIST+=' -delete'
eval $LIST
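
For context, the alternative I've seen suggested is to keep the command in an array instead of a string, roughly like this (untested on my side):

cmd=(find . -type f -mmin 50)   # each word of the command is one array element
cmd+=(-delete)                  # appending another option adds one clean element
"${cmd[@]}"                     # runs the command without eval re-parsing a string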

Tools to manipulate and generate apng with good compression in linux command line

Posted: 14 Aug 2021 09:41 AM PDT

I have recently been working with APNG, but APNGs come out larger than comparable GIFs, and I cannot find a good tool to manipulate APNG. ImageMagick produces a plain PNG for any modification (like -colors 124). To compare performance, I first generated a video-only MP4, then generated a GIF and an APNG with ffmpeg and ImageMagick. In all cases the APNG is larger than the GIF. Also, ffmpeg generates a very poor GIF.

Original mp4, size: 37.5 KiB, resolution: 1044x414 , 213 frames, codec: h264, colour: yuv444p

No | Source  | Command                                     | Output | Size | Quality
1  | gif(6)  | ffmpeg -i                                   | apng   | 5.4M | Poor
2  | gif(5)  | convert                                     | apng   | 5.1M | Unchanged
3  | mp4     | convert                                     | apng   | 4.5M | Unchanged
4  | mp4     | ffmpeg -i                                   | apng   | 4.5M | Unchanged
5  | mp4     | convert                                     | gif    | 4.0M | Unchanged
6  | mp4     | ffmpeg -i                                   | gif    | 3.8M | Poor
7  | gif(6)  | convert -layers optimise                    | gif    | 3.8M | Poor
8  | apng(4) | ffmpeg -i                                   | gif    | 3.8M | Poor
9  | gif(6)  | ffmpeg -i                                   | apng   | 3.8M | Poor
10 | gif(11) | convert                                     | apng   | 2.2M | Slightly diminished
11 | mp4     | convert -dither -colors 64 -layers optimize | gif    | 1.7M | Slightly diminished
12 | gif(11) | ffmpeg -i                                   | apng   | 370K | Slightly diminished
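
For reference, the "convert" and "ffmpeg -i" entries in the table stand for invocations roughly of this shape (file names are placeholders, exact flags from memory and may differ):

ffmpeg -i input.mp4 output.apng                 # ffmpeg's apng encoder
ffmpeg -i input.mp4 output.gif                  # ffmpeg's gif encoder (the rows marked Poor)
convert input.mp4 output.gif                    # ImageMagick, reading the video via its delegate
convert input.gif -layers optimise output.gif   # row 7: frame optimisation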

So, ffmpeg has a very bad GIF encoder, but it generates a smaller APNG for a given GIF. Both tools generate similar APNGs from the MP4 and larger APNGs from their own GIFs.

So what I am looking for is a tool that can achieve good compression, along with modifying, APNG. I may have missed good flags for ffmpeg or ImageMagick.

A question about aliasing

Posted: 14 Aug 2021 09:31 AM PDT

I have alias rm='rm -i' in .bashrc.

Now, if I type rm -i by mistake it will become rm -i -i. Could anything go wrong because the same option appears twice?
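
For concreteness, this is the expansion I mean (the file name is just an example):

alias rm='rm -i'     # from .bashrc
rm -i file.txt       # after alias expansion the shell actually runs: rm -i -i file.txt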

Screenshots corrupted on Intel Iris Xe when using Compiz

Posted: 14 Aug 2021 09:08 AM PDT

I'm using Manjaro Linux Xfce on a Lenovo Thinkpad L13 Yoga Gen2. The video chip is an Intel Iris Xe:

$ inxi -G
Graphics:
  Device-1: Intel TigerLake-LP GT2 [Iris Xe Graphics] driver: i915 v: kernel
  Device-2: Chicony ThinkPad T490 Webcam type: USB driver: uvcvideo
  Device-3: Acer Integrated 5M Camera type: USB driver: uvcvideo
  Display: x11 server: X.Org 1.20.13 driver: loaded: intel
  unloaded: modesetting resolution: 1920x1080~60Hz
  OpenGL: renderer: Mesa Intel Xe Graphics (TGL GT2) v: 4.6 Mesa 21.1.6

In general the video output works without problems, video playback also works fine. However, if I try to take a screenshot, I usually get something which most probably has been displayed on the screen at some point, but isn't what is visible on the screen at the time of taking the screenshot. It looks like some old frame is stuck in some buffer and is then used to constitute the screenshot instead of the current screen's state.

The problem does not appear with Xfwm as the window manager, but it quite reproducibly appears with Compiz. It appears with both the Xfce integrated screenshot utility and the screenshot tool Shutter. I'm using X11 as the display server.

I just tried to use a newer kernel (currently 5.14RC) but the problem persists.

Debian Package Manager broken

Posted: 14 Aug 2021 08:58 AM PDT

I am an Elementary OS (Ubuntu-based) user. For a while now, when I try to install something (sudo apt-get install [...]) I get the "Unmet dependencies" error. I type sudo apt --fix-broken install and I get another error back:

Fixing Sub-process /usr/bin/dpkg returned an error code (1)

This error should be resolved by sudo dpkg --configure -a, but then I get a dependency error:

dpkg: dependency problems prevent configuration of kaccounts-integration: Package signond is not installed. [...]

So I run sudo apt-get install signond and sudo apt-get install kaccounts-integration, and in both cases I get the error:

Fixing Sub-process /usr/bin/dpkg returned an error code (1)

If I do sudo apt-get install -f

Building dependency tree
Reading state information... Done
Correcting dependencies... Done
The following additional packages will be installed:
  signond
The following NEW packages will be installed:
  signond
0 upgraded, 1 newly installed, 0 to remove and 54 not upgraded.
3 not fully installed or removed.
Need to get 0 B/166 kB of archives.
After this operation, 616 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
(Reading database ... 306231 files and directories currently installed.)
Preparing to unpack .../signond_8.60+r699+dbusapi1+pkg3~daily~ubuntu5.1.2.1_amd64.deb ...
Unpacking signond (8.60+r699+dbusapi1+pkg3~daily~ubuntu5.1.2.1) ...
dpkg: error processing archive /var/cache/apt/archives/signond_8.60+r699+dbusapi1+pkg3~daily~ubuntu5.1.2.1_amd64.deb (--unpack):
 trying to overwrite '/usr/share/dbus-1/services/com.google.code.AccountsSSO.SingleSignOn.service', which is also in package gsignond 1.1.0~r509+pkg4~daily~ubuntu5.0.1
Errors were encountered while processing:
 /var/cache/apt/archives/signond_8.60+r699+dbusapi1+pkg3~daily~ubuntu5.1.2.1_amd64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)
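
In case it matters, the overwrite conflict can be inspected like this (a diagnostic sketch; the path is taken from the error above):

dpkg -S /usr/share/dbus-1/services/com.google.code.AccountsSSO.SingleSignOn.service   # shows which installed package currently owns the conflicting file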

How can I change this value (pts/1)?

Posted: 14 Aug 2021 07:34 AM PDT

Image

How can I change pts/1 to another value?

Can I add a user with adduser and pass a parameter of my own to change this?

Possibilities to Append the Command in Variable

Posted: 14 Aug 2021 08:19 AM PDT

I have a variable that contains a find command

LST_FILE=$(find . -type f \( -name '*xml*' -o -name '*log*' \) -mmin 180)

Is it possible to append to the command? I mean like this:

LST_FILE+=$(-delete)  

or probably

DEL=$(-delete)
LST_FILE+=${DEL}

I need to know because I have several find commands to run, each with different options, so I decided to put the command into a variable and plan to append the options appropriate to each condition.
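
A sketch of what I'm trying to achieve, using an array instead of a string so the options can be appended per condition (not working code from my script):

find_cmd=(find . -type f \( -name '*xml*' -o -name '*log*' \) -mmin 180)   # base command
find_cmd+=(-delete)        # appended only when the condition calls for deletion
"${find_cmd[@]}"           # run the assembled command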

Porting Iptables to Nftables firewall with conntrack marks

Posted: 14 Aug 2021 09:50 AM PDT

Hi dear esteemed community,

I'm having a hard time porting my very functional iptables firewall to nftables. There are no issues with the input/output/forward stuff; it's mainly the conntrack marking. What I currently do is the following:

1/ I create three routing tables with the ip command, along with rules and conntrack marks. Each of them has one default route: either my FDDI, my VPN or my 4G connection.

ip route add table maincnx default dev $WAN via 192.168.1.2
ip route add table maincnx 192.168.0.0/24 dev $LAN src 192.168.0.1
ip route add table maincnx 192.168.1.0/24 dev $WAN src 192.168.1.1
ip rule add from 192.168.1.2 table maincnx

[[ $VPN ]] && ip route add table vpnclient default dev $VPNIF via $VPNCLIENTIP
[[ $VPN ]] && ip route add table vpnclient $VPNCLIENTROUTE dev $VPNIF src $VPNCLIENTIP
[[ $VPN ]] && ip route add table vpnclient 192.168.0.0/24 dev $LAN src 192.168.0.1
[[ $VPN ]] && ip route add table vpnclient 192.168.1.0/24 dev $WAN src 192.168.1.1
ip rule add from $VPNCLIENTIP table vpnclient

ip route add table altcnx default dev $WAN2 via 192.168.2.2
ip route add table altcnx 192.168.0.0/24 dev $LAN src 192.168.0.1
ip route add table altcnx 192.168.1.0/24 dev $WAN src 192.168.1.1
ip route add table altcnx 192.168.2.0/24 dev $WAN2 src 192.168.2.1
ip rule add from 192.168.2.2 table altcnx

ip rule add from all fwmark 1 table maincnx
[[ $VPN ]] && ip rule add from all fwmark 2 table vpnclient
ip rule add from all fwmark 3 table altcnx
ip route flush cache

2/ Then, I put some iptables rules together (I left the comments in, in case anyone is still struggling with the iptables version):

$IPTABLES -t mangle -A PREROUTING -j CONNMARK --restore-mark # Restore mark previously set
$IPTABLES -t mangle -A PREROUTING -m mark ! --mark 0 -j ACCEPT # If a mark exists: skip
$IPTABLES -t mangle -A PREROUTING -s 192.168.0.5 -p tcp --sport 50001 -j MARK --set-mark 2 # route through VPN
$IPTABLES -t mangle -A PREROUTING -s 192.168.0.3 -j MARK --set-mark 2
$IPTABLES -t mangle -A PREROUTING -s 192.168.0.4 -j MARK --set-mark 3 # route through 4G
$IPTABLES -t mangle -A POSTROUTING -j CONNMARK --save-mark # save marks to avoid retagging

3/ The associated Postrouting:

$IPTABLES -t nat -A POSTROUTING -o $WAN -j SNAT --to-source 192.168.1.1
$IPTABLES -t nat -A POSTROUTING -o $WAN2 -j SNAT --to-source 192.168.2.1
[[ $VPN ]] && $IPTABLES -t nat -A POSTROUTING -o $VPNIF -j SNAT --to-source $VPNCLIENTIP

PS: $VPN is obviously a variable set to 1 if the VPN is up and running when the script is launched. There are a few other things needed to make this work, like IP rules cleanup and some prerouting/forward rules, but that's not the point here; if you're interested, comment and I'll post them in full.

Topology: the gateway has 3 Ethernet interfaces (eth0/1/2) using the IPs 192.168.1.1 (FDDI), 192.168.0.1 (LAN) and 192.168.2.1 (4G); the gateways are 192.168.1.2 for FDDI and 192.168.2.2 for 4G, and the VPN sits on a tun0 device whose IP is around 10.8.0.x.

So basically, when 192.168.0.5 initiates a connection toward TCP port 50001, it is routed through the VPN. 192.168.0.3 always uses the VPN whatever it's trying to connect to, 192.168.0.4 goes through the 4G connection, and all others use routing table 1 and go through the FDDI connection by default.

Question: I'm guessing the ip part of the job stays the same with nftables, but what are the equivalent commands in nftables to have the mangling and postrouting done the same way iptables does it here?
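
My best guess so far at the nftables version of the mangle and SNAT parts is something like the following (untested, priorities written numerically), but I'm not sure it is equivalent:

nft add table ip mangle
nft add chain ip mangle prerouting '{ type filter hook prerouting priority -150; }'
nft add chain ip mangle postrouting '{ type filter hook postrouting priority -150; }'
nft add rule ip mangle prerouting meta mark set ct mark                                  # restore-mark
nft add rule ip mangle prerouting meta mark != 0 accept                                  # if a mark exists: skip
nft add rule ip mangle prerouting ip saddr 192.168.0.5 tcp sport 50001 meta mark set 2   # route through VPN
nft add rule ip mangle prerouting ip saddr 192.168.0.3 meta mark set 2
nft add rule ip mangle prerouting ip saddr 192.168.0.4 meta mark set 3                   # route through 4G
nft add rule ip mangle postrouting ct mark set meta mark                                 # save-mark

nft add table ip nat
nft add chain ip nat postrouting '{ type nat hook postrouting priority 100; }'
nft add rule ip nat postrouting oifname "$WAN" snat to 192.168.1.1
nft add rule ip nat postrouting oifname "$WAN2" snat to 192.168.2.1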

Left Alt+tab and Left Ctrl+Left Shift+tab is not working on my external keyboard

Posted: 14 Aug 2021 06:46 AM PDT

I have been using an EXTERNAL keyboard with my Linux Mint desktop.
Recently Left Alt+Tab (to switch between the currently opened windows) and Left Ctrl+Left Shift+Tab (to go to the previous tab in Chrome) stopped working.

Note: If I use the right (Ctrl, Shift, Alt) keys for the above combinations, they work fine. And the left (Ctrl, Shift, Alt) keys are working in general, just not for the above combinations.

If there is a way to fix it, let me know.
Thanks in advance :D

Linux on USB not showing up on Macbook

Posted: 14 Aug 2021 06:27 AM PDT

I installed Arch Linux on a USB drive and decided to use the systemd-boot bootloader instead of GRUB.

I installed systemd-boot like so:

bootctl install  

The partitions are laid out like so:

sdd1 vFat /mnt/boot
sdd2 swap
sdd3 root /mnt

The sdd1 FAT partition was labeled with type EFI when it was created. The disk label is also what UEFI requires, being gpt in the output of fdisk -l /dev/sdd1; the whole drive, in the sense of fdisk -l /dev/sdd, is also labeled as gpt.

Thus far I am still unable to make the usb show up as an entry in the mac native boot menu.

But something like the Arch live image does show up with an EFI entry; so far I have been booting into that to modify my USB.

Even though I think I have followed everything required of me to make the bootloader partition work in UEFI, I think something is missing before it can show up in the boot menu and work normally.

Generic Fedora Errors

Posted: 14 Aug 2021 05:56 AM PDT

I am running stock Fedora 34 on my computer, and I am getting bombarded with error logs on the system - none of which I am skilled enough to decipher.

I am aware of when the error occurs because my system 'freezes' for a moment before completing a process. Firstly, I went to see if there were any bug threads, but alas I could not find a description fitting the frequent errors I am receiving. Here is the output of my installed Kernel/s:

dnf list installed "kernel-*"
Installed Packages
kernel-core.x86_64             5.11.12-300.fc34   @anaconda
kernel-core.x86_64             5.13.9-200.fc34    @updates
kernel-headers.x86_64          5.13.3-200.fc34    @updates
kernel-modules.x86_64          5.11.12-300.fc34   @anaconda
kernel-modules.x86_64          5.13.9-200.fc34    @updates
kernel-modules-extra.x86_64    5.11.12-300.fc34   @anaconda
kernel-modules-extra.x86_64    5.13.9-200.fc34    @updates

I am using 5.11.12, but am actively looking to upgrade following this guide. Based on the error messages (in the first link), is there any method to decipher the fault, or are these "normal" error logs?

Squid caught in loop/cert error

Posted: 14 Aug 2021 05:48 AM PDT

My goal is to have a kind of proxy where the traffic is shrunk by disabling ads, resizing images and things like that.

Now, I've been struggling with Squid for a month, and I'm currently stuck with 2 issues (unknown if they are related). I have imported the .der/.pem file as trusted on both Windows and Android (with the same result in the browsers).

1. This line in the configuration:

tcp_outgoing_address 192.168.0.1  

leaves this in the cache.log:

2021/08/14 13:41:55 kid1| commBind Cannot bind socket FD 14 to 192.168.0.1: (99) Cannot assign requested address

I've also tried with my external IP, with the same result.

In the access.log I'm seeing this:

14/Aug/2021:13:42:55 +0200 12 192.168.0.2 NONE/200 0 CONNECT 192.168.0.129:4129 -ORIGINAL_DST/192.168.0.129 -

This entry repeats itself as long as the browser is open.

Now, if I remove that line from the configuration, another issue arises, putting Squid into some kind of loop and leaving this in the access log:

14/Aug/2021:13:44:28 +0200 59087 192.168.0.129 NONE/200 0 CONNECT 192.168.0.129:4129 - ORIGINAL_DST/192.168.0.129 -

This is a complete loop that continues until I kill Squid; it fills up the access.log file and puts this into the cache.log:

2021/08/14 13:47:21 kid1| WARNING! Your cache is running out of filedescriptors

So it doesn't work with the line, and certainly not without it.

Without the line, nothing happens in the browser; it just keeps waiting for an answer from the server.

  2. When I have the line in the config, the browser says: Did Not Connect: Potential Security Issue / Warning: Potential Security Risk Ahead

when I'm visiting http:

Websites prove their identity via certificates. Firefox does not trust this site because it uses a certificate that is not valid for gr1.se. The certificate is only valid for 192.168.0.129

Error code: SSL_ERROR_BAD_CERT_DOMAIN

I suspect this is because of something with the certs, but I've created them accordingly, so they should work; something must be off in the configuration:

########  LOGGING #########

debug_options ALL,1 33,2 28,9
logfile_rotate 10
debug_options rotate=0
logformat timereadable %tl %6tr %>a %Ss/%03>Hs %<st %rm %ru %un %Sh/%<A %mt
access_log daemon:/var/log/squid/access.log timereadable
cache_log /var/log/squid/cache.log

########  ACL #########

#acl dynamic urlpath_regex cgi-bin \?
acl all src all
acl localnet src  192.168.0.0/24
acl purge method PURGE
acl connect method CONNECT

acl step1 at_step SslBump1
acl step2 at_step SslBump2
acl step3 at_step SslBump3

######  HTTP_ACCESS #########

http_access allow all
#http_access allow manager localhost
#http_access deny manager
#http_access allow localhost

######  HTTP/HTTPS PORTS & ORHER PORTS #########

http_port 4128   # ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=10MB cert=/usr/local/squid/etc/ssl_cert/proxyCA.pem  tls-cafile=/usr/local/squid/etc/ssl_cert/proxyCA.crt capath=/usr/local/squid/etc/rootca/  tls-dh=prime256v1:/usr/local/squid/etc/dhparam.pem options=NO_SSLv3,NO_TLSv1

https_port 4129 intercept ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=10MB cert=/usr/local/squid/etc/ssl_cert/proxyCA.pem tls-cafile=/usr/local/squid/etc/rootca/ca-root-nss.crt capath=/usr/local/squid/etc/rootca/  tls-dh=prime256v1:/usr/local/squid/etc/dhparam.pem options=NO_SSLv3,NO_TLSv1,SINGLE_ECDH_USE

######  SSL / CERT  & FILEPOINTERS #########

sslcrtd_children 5
pid_filename /var/run/squid/squid.pid

icon_directory /usr/share/squid/icons
netdb_filename /var/log/squid/netdb.state
sslcrtd_program /usr/lib/squid/security_file_certgen -s /usr/local/squid/var/logs/ssl_db -M 4MB -b 4096

ssl_bump peek step1
ssl_bump bump all

######  OUTGOING #########

#tls_outgoing_options tls-cafile=/usr/local/squid/etc/rootca/ca-root-nss.crt
tls_outgoing_options capath=/usr/local/squid/etc/rootca/
tls_outgoing_options options=NO_SSLv3,NO_TLSv1

tcp_outgoing_address 185.157.xxx.xxx

######  CACHE #########

#shutdown_lifetime 3 seconds
#cache deny dynamic
cache_effective_user proxy
cache_effective_group proxy
#cache_mgr admin@localhost

cache_store_log none
cache_mem 2048 MB
maximum_object_size_in_memory 8192 KB
#memory_replacement_policy heap GDSF
#cache_replacement_policy heap LFUDA
minimum_object_size 0 KB
maximum_object_size 16 MB
cache_dir aufs /cache 10000 16 256
#offline_mode off
cache_swap_low 90
cache_swap_high 95
#cache allow all

######  REFRESH #########

# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp:    1440  20%  10080
refresh_pattern ^gopher:  1440  0%  1440
#refresh_pattern -i (/cgi-bin/|\?) 0  0%  0
#refresh_pattern .    0  20%  4320

######  Other #########

visible_hostname Socks_V
dns_v4_first on

forwarded_for delete
via off
httpd_suppress_version_string on
uri_whitespace strip

icp_access allow all

The version of Squid (compiled on the machine):

Squid Cache: Version 4.13
Service Name: squid
Ubuntu linux

This binary uses OpenSSL 1.1.1f  31 Mar 2020. For legal restrictions on distribution see https://www.openssl.org/source/license.html

configure options:  '--build=x86_64-linux-gnu' '--prefix=/usr' '--includedir=${prefix}/include' '--mandir=${prefix}/share/man' '--infodir=${prefix}/share/info' '--sysconfdir=/etc' '--localstatedir=/var' '--disable-silent-rules' '--libdir=${prefix}/lib/x86_64-linux-gnu' '--runstatedir=/run' '--disable-maintainer-mode' '--disable-dependency-tracking' 'BUILDCXXFLAGS=-g -O2 -fdebug-prefix-map=/home/hellfire/build/squid/squid-4.13=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-z,now ' 'BUILDCXX=g++' '--with-build-environment=default' '--enable-build-info=Ubuntu linux' '--datadir=/usr/share/squid' '--sysconfdir=/etc/squid' '--libexecdir=/usr/lib/squid' '--mandir=/usr/share/man' '--enable-inline' '--disable-arch-native' '--enable-async-io=8' '--enable-storeio=ufs,aufs,diskd,rock' '--enable-removal-policies=lru,heap' '--enable-delay-pools' '--enable-cache-digests' '--enable-icap-client' '--enable-follow-x-forwarded-for' '--enable-auth-basic=DB,fake,getpwnam,LDAP,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB' '--enable-auth-digest=file,LDAP' '--enable-auth-negotiate=kerberos,wrapper' '--enable-auth-ntlm=fake,SMB_LM' '--enable-external-acl-helpers=file_userip,kerberos_ldap_group,LDAP_group,session,SQL_session,time_quota,unix_group,wbinfo_group' '--enable-security-cert-validators=fake' '--enable-storeid-rewrite-helpers=file' '--enable-url-rewrite-helpers=fake' '--enable-eui' '--enable-esi' '--enable-icmp' '--enable-zph-qos' '--enable-ecap' '--disable-translation' '--with-swapdir=/var/spool/squid' '--with-logdir=/var/log/squid' '--with-pidfile=/run/squid.pid' '--with-filedescriptors=65536' '--with-large-files' '--with-default-user=proxy' '--enable-linux-netfilter' '--with-systemd' '--with-openssl' '--enable-ssl-crtd' 'build_alias=x86_64-linux-gnu' 'CFLAGS=-g -O2 -fdebug-prefix-map=/home/hellfire/build/squid/squid-4.13=. -fstack-protector-strong -Wformat -Werror=format-security -Wall' 'LDFLAGS=-Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-z,now ' 'CPPFLAGS=-Wdate-time -D_FORTIFY_SOURCE=2' 'CXXFLAGS=-g -O2 -fdebug-prefix-map=/home/hellfire/build/squid/squid-4.13=. -fstack-protector-strong -Wformat -Werror=format-security'

Output from running squid -k parse:

2021/08/14 14:41:04| Startup: Initializing Authentication Schemes ...
2021/08/14 14:41:04| Startup: Initialized Authentication Scheme 'basic'
2021/08/14 14:41:04| Startup: Initialized Authentication Scheme 'digest'
2021/08/14 14:41:04| Startup: Initialized Authentication Scheme 'negotiate'
2021/08/14 14:41:04| Startup: Initialized Authentication Scheme 'ntlm'
2021/08/14 14:41:04| Startup: Initialized Authentication.
2021/08/14 14:41:04| Processing Configuration File: /etc/squid/squid.conf (depth 0)
2021/08/14 14:41:04| Processing: debug_options ALL,1 33,2 28,9
2021/08/14 14:41:04| Processing: logfile_rotate 10
2021/08/14 14:41:04| Processing: debug_options rotate=0
2021/08/14 14:41:04| Processing: logformat timereadable %tl %6tr %>a %Ss/%03>Hs %<st %rm %ru %un %Sh/%<A %mt
2021/08/14 14:41:04| Processing: access_log daemon:/var/log/squid/access.log timereadable
2021/08/14 14:41:04| Processing: cache_log /var/log/squid/cache.log
2021/08/14 14:41:04| Processing: acl all src all
2021/08/14 14:41:04| WARNING: (B) '::/0' is a subnetwork of (A) '::/0'
2021/08/14 14:41:04| WARNING: because of this '::/0' is ignored to keep splay tree searching predictable
2021/08/14 14:41:04| WARNING: You should probably remove '::/0' from the ACL named 'all'
2021/08/14 14:41:04| Processing: acl localnet src  192.168.0.0/24
2021/08/14 14:41:04| Processing: acl purge method PURGE
2021/08/14 14:41:04| Processing: acl connect method CONNECT
2021/08/14 14:41:04| Processing: acl step1 at_step SslBump1
2021/08/14 14:41:04| Processing: acl step2 at_step SslBump2
2021/08/14 14:41:04| Processing: acl step3 at_step SslBump3
2021/08/14 14:41:04| Processing: http_access allow all
2021/08/14 14:41:04| Processing: http_port 4128
2021/08/14 14:41:04| Processing: https_port 4129 intercept ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=10MB cert=/usr/local/squid/etc/ssl_cert/proxyCA.pem tls-cafile=/usr/local/squid/etc/rootca/ca-root-nss.crt capath=/usr/local/squid/etc/rootca/  tls-dh=prime256v1:/usr/local/squid/etc/dhparam.pem options=NO_SSLv3,NO_TLSv1,SINGLE_ECDH_USE
2021/08/14 14:41:04| Starting Authentication on port [::]:4129
2021/08/14 14:41:04| Disabling Authentication on port [::]:4129 (interception enabled)
2021/08/14 14:41:09| Processing: sslcrtd_children 5
2021/08/14 14:41:09| Processing: pid_filename /var/run/squid/squid.pid
2021/08/14 14:41:09| Processing: icon_directory /usr/share/squid/icons
2021/08/14 14:41:09| Processing: netdb_filename /var/log/squid/netdb.state
2021/08/14 14:41:09| Processing: sslcrtd_program /usr/lib/squid/security_file_certgen -s /usr/local/squid/var/logs/ssl_db -M 4MB -b 4096
2021/08/14 14:41:09| Processing: ssl_bump peek step1
2021/08/14 14:41:09| Processing: ssl_bump bump all
2021/08/14 14:41:09| Processing: tls_outgoing_options capath=/usr/local/squid/etc/rootca/
2021/08/14 14:41:09| Processing: tls_outgoing_options options=NO_SSLv3,NO_TLSv1
2021/08/14 14:41:09| Processing: tcp_outgoing_address 192.168.0.1
2021/08/14 14:41:09| Processing: cache_effective_user proxy
2021/08/14 14:41:09| Processing: cache_effective_group proxy
2021/08/14 14:41:09| Processing: cache_store_log none
2021/08/14 14:41:09| Processing: cache_mem 2048 MB
2021/08/14 14:41:09| Processing: maximum_object_size_in_memory 8192 KB
2021/08/14 14:41:09| Processing: minimum_object_size 0 KB
2021/08/14 14:41:09| Processing: maximum_object_size 16 MB
2021/08/14 14:41:09| Processing: cache_dir aufs /cache 10000 16 256
2021/08/14 14:41:09| Processing: cache_swap_low 90
2021/08/14 14:41:09| Processing: cache_swap_high 95
2021/08/14 14:41:09| Processing: refresh_pattern ^ftp:    1440  20%  10080
2021/08/14 14:41:09| Processing: refresh_pattern ^gopher:  1440  0%  1440
2021/08/14 14:41:09| Processing: visible_hostname Socks_V
2021/08/14 14:41:09| Processing: dns_v4_first on
2021/08/14 14:41:09| Processing: forwarded_for delete
2021/08/14 14:41:09| Processing: via off
2021/08/14 14:41:09| Processing: httpd_suppress_version_string on
2021/08/14 14:41:09| Processing: uri_whitespace strip
2021/08/14 14:41:09| Processing: icp_access allow all
2021/08/14 14:41:09| WARNING: HTTP requires the use of Via
2021/08/14 14:41:09| Initializing https:// proxy context
2021/08/14 14:41:09| Initializing https_port [::]:4129 TLS contexts
2021/08/14 14:41:09| Using certificate in /usr/local/squid/etc/ssl_cert/proxyCA.pem
2021/08/14 14:41:09| Using certificate chain in /usr/local/squid/etc/ssl_cert/proxyCA.pem
2021/08/14 14:41:09| Adding issuer CA: /C=se/ST=n/O=Internet Widgits Pty Ltd/CN=proxy.local
2021/08/14 14:41:09| Using key in /usr/local/squid/etc/ssl_cert/proxyCA.pem

Running on virtualbox, Ubuntu 20.04

Wow, that was a lot of information, and to be honest, I hardly understand half of it.

Does anybody recognize these errors/issues?

Kernel error code -2 when running /init?

Posted: 14 Aug 2021 05:32 AM PDT

So I'm trying to create a Linux distribution that uses as much Suckless software as possible, and here is the /init script in the initramfs:

#!/bin/mksh

/bin/ubase-box mount -t devtmpfs  devtmpfs  /dev
/bin/ubase-box mount -t proc      proc      /proc
/bin/ubase-box mount -t sysfs     sysfs     /sys
/bin/ubase-box mount -t tmpfs     tmpfs     /tmp
/bin/mksh

(/bin/mksh is a valid executable shell and ubase-box is basically suckless BusyBox). However, when trying to run this, the Linux kernel gives me error code -2. Help?

qdisc netem is adding too much delay

Posted: 14 Aug 2021 05:27 AM PDT

I'm running the following command in order to simulate delay + jitter on a veth pair (Mininet).

sudo tc qdisc add dev h1-eth0 root netem delay 100ms 5ms  

When only specifying a 100ms delay (without the 5ms in my example), everything works as expected, as a ping between the interfaces shows that there is indeed a 100ms delay.

64 bytes from 10.0.0.2: icmp_seq=17 ttl=64 time=100 ms
64 bytes from 10.0.0.2: icmp_seq=18 ttl=64 time=100 ms
64 bytes from 10.0.0.2: icmp_seq=19 ttl=64 time=100 ms
64 bytes from 10.0.0.2: icmp_seq=20 ttl=64 time=100 ms
64 bytes from 10.0.0.2: icmp_seq=21 ttl=64 time=100 ms
64 bytes from 10.0.0.2: icmp_seq=22 ttl=64 time=100 ms
64 bytes from 10.0.0.2: icmp_seq=23 ttl=64 time=100 ms

However, when adding the jitter parameter, there is indeed, most of the time, a variation of about 5ms around 100ms, but I'm also getting huge delays of around 4000ms:

64 bytes from 10.0.0.2: icmp_seq=64 ttl=64 time=104 ms
64 bytes from 10.0.0.2: icmp_seq=65 ttl=64 time=104 ms
64 bytes from 10.0.0.2: icmp_seq=63 ttl=64 time=4391 ms
64 bytes from 10.0.0.2: icmp_seq=68 ttl=64 time=101 ms
64 bytes from 10.0.0.2: icmp_seq=70 ttl=64 time=101 ms
64 bytes from 10.0.0.2: icmp_seq=66 ttl=64 time=4393 ms
64 bytes from 10.0.0.2: icmp_seq=71 ttl=64 time=105 ms
64 bytes from 10.0.0.2: icmp_seq=67 ttl=64 time=4393 ms
64 bytes from 10.0.0.2: icmp_seq=72 ttl=64 time=103 ms
64 bytes from 10.0.0.2: icmp_seq=73 ttl=64 time=100 ms
64 bytes from 10.0.0.2: icmp_seq=69 ttl=64 time=4390 ms
64 bytes from 10.0.0.2: icmp_seq=78 ttl=64 time=102 ms
64 bytes from 10.0.0.2: icmp_seq=74 ttl=64 time=4392 ms
64 bytes from 10.0.0.2: icmp_seq=75 ttl=64 time=4393 ms
64 bytes from 10.0.0.2: icmp_seq=80 ttl=64 time=102 ms
64 bytes from 10.0.0.2: icmp_seq=76 ttl=64 time=4393 ms
64 bytes from 10.0.0.2: icmp_seq=77 ttl=64 time=4390 ms

EDIT: I just checked whether this issue also occurs on a real interface; it does.

Am I missing something or is there indeed an issue?

How to drop all unnecessary UDP traffic on INPUT chain?

Posted: 14 Aug 2021 06:31 AM PDT

About 2 weeks ago, I started running a Tor relay, which itself operates solely on TCP.

So, I would like to drop all UDP packets, which are unneeded.

But I don't know, what is actually needed (if anything).

My firewall looks like this after about 1 day of uptime. Please don't comment or answer about my TCP protection rules; I am just doing some personal research:

iptables -L -v --line-numbers

Chain INPUT (policy DROP 17862 packets, 1945K bytes)
num   pkts bytes target     prot opt in     out     source               destination
1        0     0 DROP       icmp --  any    any     anywhere             anywhere             u32 ! "0x4&0x3fff=0x0" /* ICMP fragmented packets */
2        0     0 DROP       icmp --  any    any     anywhere             anywhere             length 1492:65535 /* ICMP oversized unfragmented packets */
3        1  1500 DROP       tcp  --  any    any     anywhere             anywhere             tcp flags:FIN,SYN,RST,PSH,ACK,URG/NONE /* NULL scan */
4        0     0 DROP       tcp  --  any    any     anywhere             anywhere             tcp flags:FIN,SYN,RST,PSH,ACK,URG/FIN,SYN,RST,PSH,ACK,URG /* Xmas scan */
5        0     0 DROP       tcp  --  any    any     anywhere             anywhere             tcp flags:FIN,SYN,RST,PSH,ACK,URG/FIN,PSH,URG /* stealth scan */
6        0     0 DROP       tcp  --  any    any     anywhere             anywhere             tcp flags:FIN,SYN,RST,PSH,ACK,URG/FIN,SYN,RST,ACK,URG /* pscan 1 */
7        0     0 DROP       tcp  --  any    any     anywhere             anywhere             tcp flags:FIN,SYN/FIN,SYN /* pscan 2 */
8        0     0 DROP       tcp  --  any    any     anywhere             anywhere             tcp flags:FIN,RST/FIN,RST /* pscan 3 */
9        2   104 DROP       tcp  --  any    any     anywhere             anywhere             tcp flags:SYN,RST/SYN,RST /* SYN-RST scan */
10       0     0 DROP       tcp  --  any    any     anywhere             anywhere             tcp flags:ACK,URG/URG /* URG scan */
11       0     0 DROP       tcp  --  any    any     anywhere             anywhere             tcp flags:FIN,SYN,RST,PSH,ACK,URG/FIN,SYN /* SYN-FIN scan */
12       0     0 DROP       tcp  --  any    any     anywhere             anywhere             tcp flags:FIN,SYN,RST,PSH,ACK,URG/FIN,PSH,URG /* nmap Xmas scan */
13       0     0 DROP       tcp  --  any    any     anywhere             anywhere             tcp flags:FIN,SYN,RST,PSH,ACK,URG/FIN /* FIN scan */
14       0     0 DROP       tcp  --  any    any     anywhere             anywhere             tcp flags:FIN,SYN,RST,PSH,ACK,URG/FIN,SYN,PSH,URG /* nmap-id scan */
15       0     0 DROP       all  -f  any    any     anywhere             anywhere             /* fragmented packets */
16    5049 1668K DROP       all  --  any    any     anywhere             anywhere             ctstate INVALID /* invalid packets */
17    1358  795K REJECT     tcp  --  any    any     anywhere             anywhere             ctstate NEW tcp flags:!FIN,SYN,RST,ACK/SYN /* new non-syn packets */ reject-with tcp-reset
18      52  2600 ACCEPT     all  --  lo     any     anywhere             anywhere             /* loopback: compulsory */
19    2588  303K ACCEPT     icmp --  any    any     anywhere             anywhere             icmp echo-request limit: avg 2/sec burst 5 /* ICMP: ping only */
20   15482  932K ACCEPT     tcp  --  any    any     anywhere             anywhere             ctstate NEW,ESTABLISHED tcp dpt:57329 /* SSH: global obfuscated */
21     97M   54G ACCEPT     tcp  --  any    any     anywhere             anywhere             tcp dpt:9001 /* Tor: OR */
22   54303 4010K ACCEPT     tcp  --  any    any     anywhere             anywhere             tcp dpt:9030 /* Tor: Dir */
23     95M   93G ACCEPT     all  --  any    any     anywhere             anywhere             ctstate RELATED,ESTABLISHED /* Tor: traffic */

Chain FORWARD (policy DROP 0 packets, 0 bytes)
num   pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 182M packets, 160G bytes)
num   pkts bytes target     prot opt in     out     source               destination

What comes to mind for me is possibly DHCP packets, but I do not know much about this, so... (see the sketch after the edits below).

I don't personally do networking, so I'd appreciate if anyone could help with this. Thanks.


Edits:

  • #1: My server is a DHCP client.

  • #2: I just changed ICMP policy to REJECT (reject-with icmp-admin-prohibited), and it did not affect my server getting a static lease (upon reboot).
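
What I have in mind, assuming DHCP really is the only UDP the relay needs, is a single explicit accept and letting the INPUT chain's DROP policy eat the rest (untested):

iptables -A INPUT -p udp --sport 67 --dport 68 -m comment --comment "DHCP client replies" -j ACCEPT
# all other UDP then falls through to the INPUT policy DROP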

Connecting to a hidden Wi-Fi network Arch Linux

Posted: 14 Aug 2021 05:57 AM PDT

I installed Arch Linux 8.1. I'm trying to connect to the network via Wi-Fi with the iwd package according to the manual; the problem is that my network is hidden and is not detected in the list from station wlan1 get-networks. Even if I connect directly with the correct SSID and key, the network is still not found. How can I find a hidden network through iwd, or through another tool?
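
For reference, what I expected to work (assuming my iwd is recent enough to offer this subcommand; the SSID below is a placeholder) was:

iwctl station wlan1 connect-hidden "MyHiddenSSID"   # connect to a hidden network by name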

Can't connect USB devices (old laptop)

Posted: 14 Aug 2021 10:18 AM PDT

Hello fellow Linux users,

I have run into an issue. I bought a secondhand HP Pavilion dr6235nr with a Core 2 Duo and a Centrino network card. I'm looking to solve 3 things. I know what you're going to say: why Kali if you're not an expert? Well, I'm not an amateur Linux user, because I've been using it and learning for the past 5 years. Kali was also what I had downloaded as an ISO for making VirtualBox work on my Mac. I ultimately need to fix a laptop that went into a kernel panic because of "Microsoft certified brands"; thus Manjaro couldn't have WoeUSB and I needed something that could support it.

1) In the past I've tried multiple methods of connecting to Wi-Fi, as I don't have a functional LAN port. The ip command didn't work, and ifconfig and iw are not recognized commands either. I'd like to set up a DE but can't connect to networks, presumably due to non-free hardware, leading me to my second issue.

2) I've used 5 brand new flash drives I bought from Walmart, all of different brands (SanDisk, onn, PNY, Samsung and Sabrent); none of them could be mounted because they "don't exist" (they were all tried individually, so I tried mounting each of them as sdb). I know it isn't a hardware issue, because otherwise I wouldn't have been able to boot into a Kali installer; I believe there is a software issue with Debian-based distributions. I know a desktop can't be set up over the internet in my configuration (I'm actually trying to install GNOME 2.62 instead of MATE, so I have to use a USB drive to compile it), but if I can fix the drive error I should be able to fix the Wi-Fi issue by downloading firmware. I could not use the root account to do this, leading me to my final question.

put-name-here@localhost~$ sudo mount /dev/sdb /mnt/usb-drive   [sudo] password for put-name-here:  mount: /mnt/usb-drive: special device /dev/sdb does not exist.  
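
(For what it's worth, the checks I would run next, to see whether the kernel registers the stick at all and under which name, are:)

lsblk                 # list all block devices the kernel currently knows about
dmesg | tail -n 20    # look for messages from plugging in the USB stick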

3) The root password is unknown to me, since the installation prompt only asked for my own. I know Kali's root password is usually toor, but it doesn't work, nor does my user password. What is Kali's root password (current stable version as of 8/13/21)? I had to attempt to mount the USB devices with sudo, with no luck.

rsync with backup option as default

Posted: 14 Aug 2021 07:51 AM PDT

Would it make sense to use rsync with the --backup option by default, or do people customarily transfer files without the option?

I am also using the command

rsync "${oaggr[@]}" "$srcdir" "$dstdir"  

The oaggr array contains --log-file="$logfl". But I also want the usual terminal output to be stored in an additional file besides the log file.
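
A sketch of what I mean for the second part, assuming piping through tee is acceptable (the $termlog name is made up):

rsync "${oaggr[@]}" "$srcdir" "$dstdir" 2>&1 | tee "$termlog"   # keep --log-file's log and also save the terminal output
# note: the pipeline's exit status is tee's unless "set -o pipefail" is in effect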

LVM root partition only uses half the volume size

Posted: 14 Aug 2021 07:56 AM PDT

I have an Ubuntu Server 20.04 with an encrypted 50GB LVM root partition, and I just realized the filesystem itself only shows 25GB.

The install was default (apart from the encryption bit) and I don't understand why it didn't use all the space for the root partition?

How do I expand the root filesystem?

PV                     VG        Fmt  Attr PSize  PFree
/dev/mapper/dm_crypt-0 ubuntu-vg lvm2 a--  48.48g <24.24g

VG        #PV #LV #SN Attr   VSize  VFree
ubuntu-vg   1   1   0 wz--n- 48.48g <24.24g

LV        VG        Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
ubuntu-lv ubuntu-vg -wi-ao---- 24.24g
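
A sketch of the usual way to grow the LV into the free space, assuming the default ext4 root filesystem (please double-check before running):

sudo lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv   # grow the logical volume into the free extents
sudo resize2fs /dev/ubuntu-vg/ubuntu-lv               # grow the ext4 filesystem to match the new LV size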

Partition table on another device

Posted: 14 Aug 2021 07:11 AM PDT

I'm curious if it's possible to store a disk's partition table on a different device and load it from GRUB.

Edit: Let's say I have a disk formatted like this: a disk with 3 partitions and ~12 GB unallocated at the end.

Normally, after loading the MBR (disk 1) and starting up, the system sees only the 3 partitions and the unallocated space,

but I'm wondering whether a different scenario is possible. Let's say I have a partition table for this disk which looks like this:

a disk with 3 partitions and ~454 GB unallocated at the beginning. What I want to do here is load this alternative partition table from another drive (which will also have GRUB installed on it) instead of the one that is on the disk (without replacing it), and start the OS from it.

How can I reassign a keyboard key on Debian Linux?

Posted: 14 Aug 2021 06:54 AM PDT

I am using Kali Linux with the Xfce desktop installed. I want to change the functionality of the circled key in the image. Even though that key shows "", for some reason it types "<".

[image: keyboard with the key in question circled]

As a programmer, I use the right arrow key very often, so I want to assign the circled key to the right arrow key, because in the current layout the right arrow key is far to reach, which slows down my typing.

So I will have two right arrow keys. I tried to create a shortcut with the keyboard settings, but it says "Right is already assigned. Do you want to use < instead?". Since I don't use the circled key for the "<" character, I want to assign it to Right so I can use it with my thumb easily.

This is an MSI laptop; MSI has the SteelSeries Engine 3 app for macOS and Windows, but not for Linux.
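
One direction I'm considering is xmodmap, assuming I can find the key's keycode with xev (the 94 below is only a guess for the extra <> key):

xev | grep keycode                # press the circled key to see its real keycode
xmodmap -e 'keycode 94 = Right'   # map that keycode to the Right arrow keysym (94 is an assumption)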

Trying to disable the touchpad at wakeup

Posted: 14 Aug 2021 09:46 AM PDT

I am using a Thinkpad L13 Yoga and had the problem that the trackpoint stopped working after wakeup. So I followed the following hint (which basically reloads the psmouse module at wakeup): https://askubuntu.com/a/1159960/270792

After putting the file in place the trackpoint stopped failing at wakeup; however, the touchpad was now activated. I would like to keep the touchpad deactivated, since I sometimes touch it with my palms unintentionally.

So I tried to disable the touchpad at wakeup. Here is what my /lib/systemd/system-sleep/trackpoint-fix script currently looks like:

#!/bin/bash

case $1/$2 in
  pre/*)
    echo "Going to $2..."
    # Place your pre suspend commands here, or `exit 0` if no pre suspend action required
    modprobe -r psmouse
    ;;
  post/*)
    echo "Waking up from $2..."
    # Place your post suspend (resume) commands here, or `exit 0` if no post suspend action required
    sleep 2
    echo "Will now modprobe psmouse..."
    modprobe psmouse
    sleep 2
    echo "Will now disable the touchpad..."
    DISPLAY=:0 xinput disable 'Elan Touchpad'
    sleep 2
    echo "Will now show touchpad state..."
    DISPLAY=:0 xinput list-props 'Elan Touchpad' | grep 'Device Enabled'
    ;;
esac

This is what I find in my logs:

Mai 24 15:13:42 ThinkpadL13Yoga systemd-sleep[2919]: Going to suspend...
Mai 24 15:13:42 ThinkpadL13Yoga systemd-sleep[2916]: Suspending system...
Mai 24 15:13:50 ThinkpadL13Yoga systemd-sleep[2916]: System resumed.
Mai 24 15:13:50 ThinkpadL13Yoga systemd-sleep[3073]: Waking up from suspend...
Mai 24 15:13:52 ThinkpadL13Yoga systemd-sleep[3073]: Will now modprobe psmouse...
Mai 24 15:13:54 ThinkpadL13Yoga systemd-sleep[3073]: Will now disable the touchpad...
Mai 24 15:13:56 ThinkpadL13Yoga systemd-sleep[3073]: Will now show touchpad state...
Mai 24 15:13:56 ThinkpadL13Yoga systemd-sleep[3326]:         Device Enabled (184):        0

So, looking at the last line, it seems like the touchpad device has been disabled successfully. However, the touchpad is still active. If I check the state of the touchpad inside the X session after wakeup, it tells me that the device is indeed enabled:

$ DISPLAY=:0 xinput list-props 'Elan Touchpad' | grep 'Device Enabled'
    Device Enabled (184):   1

I absolutely don't understand how the touchpad gets enabled again and would like to keep it disabled. Possibly reloading psmouse isn't a suitable solution and there is a better approach to keeping the trackpoint working after wakeup.

Is it possible to parse a command's arguments automatically?

Posted: 14 Aug 2021 09:18 AM PDT

I would like to modify the git clone command -- such that it uses a local cache -- by creating a wrapper that does the following:

  1. If a repository doesn't exist in the cache, clone it.
  2. Copy it to the desired location.

But how do I parse the git clone command-line arguments to get the value of <repository>? It seems trivial, but I can't find a good solution.

It seems like this is due to the lack of structure in command-line arguments: some arguments are switches, some are followed by a value, and so on. In git's case <repository> can be followed by an optional <directory> argument, so I can't always go by the last argument. It would have been great if CLI arguments were more structured, like a dictionary.
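
The naive wrapper I can come up with looks roughly like this; it already breaks for options that take a separate value, which is exactly the problem (the path of the real git binary is assumed):

#!/bin/bash
# naive sketch: take the first non-option word after "clone" as <repository>
if [ "$1" = clone ]; then
    repo=
    for arg in "${@:2}"; do
        case $arg in
            -*) continue ;;            # skip anything that looks like a switch
            *)  repo=$arg; break ;;    # first bare word is assumed to be <repository>
        esac
    done
    echo "repository argument appears to be: $repo" >&2
fi
exec /usr/bin/git "$@"                 # hand everything to the real git unchanged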

Is there a way to at least specify the syntax given in the docs, so that I can get the repository argument automatically with tools like getopts?

Note: I use multiple tools -- Jenkins, Buildout, etc. -- that download git repositories automatically using the git command, so I thought a wrapper would be the best solution.

There are some git specific solutions worth checking out like local git server, URL rewrites etc., as well.

hiDPI scaling fix for Davinci Resolve 16 in Gnome

Posted: 14 Aug 2021 08:03 AM PDT

I asked a similar question about hiDPI scaling here. This question regards fixing scaling on a specific application, the other question is about a universal hiDPI mode.

I am running the Ubuntu derivative Pop!_OS 19.04 with GNOME. Edit: I have found this issue arises in CentOS, Fedora, and Debian as well.

I built a .deb package for DaVinci Resolve 16 with MakeResolveDeb and it seems to work fine, but it does not scale properly.

Scaling looks similar to this

This is a known (unfixed) issue, as seen here

The solution runscaled doesn't seem to work.
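
One thing I have wondered about, but not verified, is whether Resolve honours the standard Qt scaling environment variables, something along these lines (install path assumed):

QT_SCALE_FACTOR=2 /opt/resolve/bin/resolve   # untested guess; 2 would double the UI size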

Is there a known way to solve the scaling issue for this specific application?

How to negate a gitignore pattern containing spaces?

Posted: 14 Aug 2021 06:01 AM PDT

My .gitignore starts by excluding everything, and then negating everything that I want to include, to avoid lots of noise in git status:

*
!awesome/
!calibre/
[…]

However, after installing Visual Studio Code I can't seem to negate the match for the directory "Code - OSS". I've tried the following lines:

!Code - OSS/
!Code\ -\ OSS/
!'Code - OSS/'
!"Code - OSS/"
!Code*

With all of those lines at the bottom of my .gitignore git status still doesn't list this directory as available for inclusion.


The output of git check-ignore --verbose Code\ -\ OSS with each of those lines is also strange:

.config/.gitignore:22:!Code - OSS/      Code - OSS
.config/.gitignore:22:!Code\ -\ OSS/    Code - OSS
.config/.gitignore:1:*  Code - OSS
.config/.gitignore:1:*  Code - OSS
.config/.gitignore:22:!Code*    Code - OSS

Can systemd preserve its state across reboots?

Posted: 14 Aug 2021 09:06 AM PDT

I'm working on setting up a Linux server that will have dozens of daemons controlled by systemd. The daemons are grouped into targets so that they can be brought up and down in groups, but the system administrators can manually control individual services. I am looking for a way to preserve the state (which services are activated) through a reboot. The idea is that people debugging, testing, and developing on the server can reboot it if needed and have the system come up in the same configuration as it was before the reboot.

systemd's snapshot functionality seems ideal for this, but as far as I can tell you can't write a snapshot to disk for use later.

My initial plan was to create a symlink from multi-user.target.wants/ to a service called bootingup.service. Every target the system administrator activates would then rewrite bootingup.service.d/bootingup.conf to launch the target that was just activated. This would mean that on boot the system would activate the most recently launched target, but it wouldn't remember any services that were individually activated/deactivated.

Is there a way to make systemd remember the state of all services across a reboot?

Dual boot windows 7 and kali linux

Posted: 14 Aug 2021 10:07 AM PDT

Once I had Windows 7 on an internal HDD and wanted to install BackTrack 5 on another internal HDD in the same PC. When the installation completed, I rebooted and found only BackTrack 5. I booted a live ISO of the BackTrack 5 operating system and executed the following commands:

sudo apt-get install software-properties-common
sudo add-apt-repository ppa:yannubuntu/boot-repair
sudo apt-get update
sudo apt-get install -y boot-repair && boot-repair

And that solved the problem: boot-repair was installed and ran successfully, and I was able to choose which operating system to launch at power up.

A few days ago, I decided to upgrade to the successor, Kali Linux, so I downloaded the ISO, burned it to a disc and installed it to the HDD that was hosting BackTrack 5. Similarly, when I rebooted I was only presented with Kali, without Windows 7. I tried to follow the same method as earlier to dual boot, but I ran into an error message:

Unable to find package boot-repair. (I don't remember it literally)  

I searched a lot and found a boot-repair ISO, which is Lubuntu with boot-repair, GParted and other programs. I burned it to a disc and launched it, and boot-repair came up. It ran and displayed that it had repaired the system and the PC would be ready after a reboot.

When I rebooted, I was presented with black screen with error message:

No operating system found.  

Please help.

How can I see automount points in Linux?

Posted: 14 Aug 2021 06:43 AM PDT

We use autofs at work and I'm having trouble remembering some mount points. With autofs, you can only see currently or recently mounted volumes on a particular machine. How can I see the rest?
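
A minimal sketch of where I would look, assuming a standard autofs setup where the master map lists the map for each mount point:

cat /etc/auto.master   # each line names a mount point and the map file that serves it
cat /etc/auto.misc     # example map file; the real names come from auto.master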

iptables drop length and TTL condition doesn't work

Posted: 14 Aug 2021 07:07 AM PDT

With the iptables utility on a Linux host I need to create a mini firewall. I need to drop all incoming connections with a packet length greater than 722 AND a TTL greater than 22. I need exactly AND: drop only if both conditions are TRUE.

sudo iptables -N LOGDROP
sudo iptables -A OUTPUT -m ttl --ttl-gt 22 -j LOGDROP
sudo iptables -A INPUT -m ttl --ttl-gt 22 -j LOGDROP
sudo iptables -A LOGDROP -m length --length 722:65535 -j DROP
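
I also considered expressing the AND with both matches in a single rule, though I have not tested whether it changes the result:

sudo iptables -A INPUT -m ttl --ttl-gt 22 -m length --length 722:65535 -j DROP   # both matches must be true for the same packet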

The IP address of the host with the firewall is 10.6.7.9.

I did 4 tests from this host, trying to ping another host:

ping -s 10000 -t 250 10.6.7.10   // fail (TTL AND LENGTH are both over the limits)
ping -s 100 -t 200 10.6.7.10     // success (only TTL is over)
ping -s 10 -t 10 10.6.7.10       // success (both are within the limits)
ping -s 10000 -t 10 10.6.7.10    // fail, BUT SHOULD SUCCEED (only the length is over)

Why does the last ping not work, and how can I fix it?
