Sunday, March 21, 2021

Recent Questions - Server Fault



Moving BigQuery datasets between data locations

Posted: 21 Mar 2021 10:09 PM PDT

I need to move some BigQuery datasets from one data location to another without losing any in-flight data. I understand GCP supports copies between datasets, but this requires creating multiple datasets. For example, to change the data location of dataset "A", we need to create an empty dataset "B" (in the intended data location) and copy dataset "A" to "B". Once the copy is done, we need to delete "A", create a new empty dataset "A" (in the intended data location), copy "B" to "A", and then delete "B". Is there a better way of doing this?
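If the goal is just to land the data in a new region, a hedged sketch using the BigQuery Data Transfer Service's cross-region dataset copy may avoid some of the manual steps; project, dataset, and display names below are placeholders:

```shell
# Create the target dataset in the new location (names are placeholders).
bq mk --dataset --location=EU my-project:A_eu

# Schedule a cross-region dataset copy into it via the Data Transfer Service.
bq mk --transfer_config \
    --project_id=my-project \
    --data_source=cross_region_copy \
    --target_dataset=A_eu \
    --display_name="Copy dataset A to EU" \
    --params='{"source_dataset_id":"A","source_project_id":"my-project"}'
```

Note this does not remove the rename step: a dataset's location is immutable, so if the dataset must keep the name "A", deleting and recreating "A" in the new location is still unavoidable.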

Route untagged vlan to a tagged vlan with nftables

Posted: 21 Mar 2021 08:51 PM PDT

I have played around a lot with nftables, but I have been stuck on this problem for an entire day. I have a Wi-Fi SSID that gets tagged VLAN 20. This part works, and I can see that dnsmasq is assigning IP addresses from this range:

#VLAN 20
dhcp-option=VLAN20,6,192.168.1.1
dhcp-option=VLAN20,3,192.168.20.1
dhcp-range=VLAN20,192.168.20.10,192.168.20.200,255.255.255.0,60m

I connected a spare android phone to this SSID, and it gets assigned a DHCP address from the pool. On the router, I can connect to this device, so basic connectivity is good. I opened sshd on this phone on port 50022, and I can connect to that port as well.

root@router:/etc/systemd/network# ping 192.168.20.184
PING 192.168.20.184 (192.168.20.184) 56(84) bytes of data.
64 bytes from 192.168.20.184: icmp_seq=1 ttl=64 time=192 ms
64 bytes from 192.168.20.184: icmp_seq=2 ttl=64 time=114 ms
^C
--- 192.168.20.184 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 2ms
rtt min/avg/max/mdev = 114.063/152.834/191.605/38.771 ms
root@router:/etc/systemd/network# nc -z -v 192.168.20.184 50022
192.168.20.184: inverse host lookup failed: Unknown host
(UNKNOWN) [192.168.20.184] 50022 (?) open

How do I extend this with nftables so that all hosts in my untagged VLAN (all my trusted computers) can connect to this VLAN? My plan is to segment VLAN 20 so that the IoT devices in it cannot reach my home network, while my phone and other computers can reach any device in it. My current configs are a mess because of all the experiments I did, but I shoved this in with the hope that it would make the VLAN wide open (spoiler alert: it didn't):

define iot0_if = "vlan20@lan0"

table bridge filter {
    chain input {
        type filter hook input priority 0; policy drop;
        vlan id 20 accept
    }
    chain forward {
        type filter hook forward priority 0; policy drop;
        iifname $iot0_if accept
        oifname $iot0_if accept
    }
    chain output {
        type filter hook output priority 200; policy accept;
    }
}

For completeness, this is my systemd-networkd config:

root@router:/etc/systemd/network# cat iot0.netdev
[NetDev]
Name=vlan20
Kind=vlan

[VLAN]
Id=20
root@router:/etc/systemd/network# cat iot0.network
[Match]
Name=vlan20

[Network]
Description="VLAN 20: IOT (Unsecured, 2.4Ghz, no wan access)"
Address=192.168.20.1/24
DNS=192.168.1.1
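Since 192.168.20.0/24 is routed on vlan20, a routed (layer-3) ruleset may be simpler than bridge-hook filtering. A minimal sketch, with the interface names as assumptions (lan0 as the trusted untagged interface): allow established/related replies in both directions, and allow new connections only when they originate from the trusted side.

```
table inet filter {
    chain forward {
        type filter hook forward priority 0; policy drop;
        # replies to connections initiated from either side
        ct state established,related accept
        # new connections: trusted LAN -> IoT VLAN only
        iifname "lan0" oifname "vlan20" accept
    }
}
```

With this shape, IoT devices cannot open connections toward the home network, but any trusted host can reach any IoT device.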

How to run a batch file on my website's host machine

Posted: 21 Mar 2021 08:02 PM PDT

I have a batch file on my website's host computer, and I want a button on the website that executes the batch file on the host so that I can start services remotely.

-Meepso

Openstack Swift fallocate_reserve option not working

Posted: 21 Mar 2021 07:42 PM PDT

I'm using "fallocate_reserve = 5368709120" as an option in the account, container, and object configs, to make sure 5 GB of reserve space remains on the disks and the disks do not get filled up.

However, while uploading larger objects, it does not limit writes or move over to another disk with free space; it keeps filling the same disk past the reserve to 100%, even though the 5 GB reserve option is set. Can you help or guide me, please?

Thank you.

Unexpected instance count behavior with rolling updates for production deployment

Posted: 21 Mar 2021 07:20 PM PDT

We are using the rolling update function for production deployments. The expected instance-count behavior is: 1 -> 2 -> 1. We confirmed this behavior until Jan 2021. However, since Feb 2021 the instance count changes like this: 1 -> 0 -> 1. Could you please advise us on the cause? We would like to carry out production deployments without stopping our production service. Please advise us on the necessary procedures or settings for rolling updates.
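The question does not name the platform, but if this is a Kubernetes Deployment, 1 -> 0 -> 1 is the terminate-first pattern, and the surge settings control it. A hedged sketch of the relevant fragment:

```yaml
# Hedged sketch, assuming a Kubernetes Deployment. With these settings a
# rolling update creates the new pod before terminating the old one
# (1 -> 2 -> 1). maxUnavailable: 1 with maxSurge: 0 gives 1 -> 0 -> 1.
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
```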

Migrate from IBM Directory Server 8 to MS Active Directory (Windows Server 2019)

Posted: 21 Mar 2021 07:19 PM PDT

We are in a project to migrate users and passwords from IBM Directory Server v8 to Microsoft Active Directory. The use of IBM Directory Integrator (LDAPSync) is referenced in official IBM documentation. I wanted to know if someone has had a similar case, and what the recommendation would be to carry out the migration while keeping the users' passwords (which are encrypted at the application layer)? Thanks a lot!

Are "apache2-dev" and "libtool" dangerous to leave installed on the system?

Posted: 21 Mar 2021 05:23 PM PDT

I have installed apache2-dev and libtool on my system to compile an Apache module. I was wondering if I should remove all 94 installed packages from my system or if I can leave them there.

Are they in any way harmful in the wrong hands, or can they be exploited by a person outside the SSH terminal?

Full package list:

Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  autoconf automake autopoint autotools-dev binutils binutils-common binutils-x86-64-linux-gnu
  build-essential cpp cpp-8 debhelper dh-autoreconf dh-strip-nondeterminism dirmngr dpkg-dev dwz fakeroot
  g++ g++-8 gcc gcc-8 gettext git-man gnupg gnupg-l10n gnupg-utils gpg gpg-agent gpg-wks-client
  gpg-wks-server gpgconf gpgsm groff-base intltool-debian libalgorithm-diff-perl libalgorithm-diff-xs-perl
  libalgorithm-merge-perl libapr1-dev libaprutil1-dev libarchive-cpio-perl libarchive-zip-perl libasan5
  libassuan0 libatomic1 libbinutils libc-dev-bin libc6-dev libcc1-0 libcroco3 libcurl3-gnutls libdpkg-perl
  liberror-perl libexpat1-dev libfakeroot libfile-fcntllock-perl libfile-stripnondeterminism-perl
  libgcc-8-dev libgomp1 libisl19 libitm1 libksba8 libldap2-dev liblsan0 libltdl-dev libltdl7
  libmail-sendmail-perl libmpc3 libmpfr6 libmpx2 libnpth0 libpipeline1 libquadmath0 libsctp-dev libsctp1
  libsigsegv2 libstdc++-8-dev libsys-hostname-long-perl libtsan0 libubsan1 libuchardet0 linux-libc-dev m4
  make man-db manpages manpages-dev patch pinentry-curses po-debconf uuid-dev
Suggested packages:
  autoconf-archive gnu-standards autoconf-doc binutils-doc cpp-doc gcc-8-locales dh-make dbus-user-session
  pinentry-gnome3 tor debian-keyring g++-multilib g++-8-multilib gcc-8-doc libstdc++6-8-dbg gcc-multilib
  flex bison gdb gcc-doc gcc-8-multilib libgcc1-dbg libgomp1-dbg libitm1-dbg libatomic1-dbg libasan5-dbg
  liblsan0-dbg libtsan0-dbg libubsan1-dbg libmpx2-dbg libquadmath0-dbg gettext-doc libasprintf-dev
  libgettextpo-dev git-daemon-run | git-daemon-sysvinit git-doc git-el git-email git-gui gitk gitweb
  git-cvs git-mediawiki git-svn parcimonie xloadimage scdaemon groff glibc-doc bzr libtool-doc
  lksctp-tools libstdc++-8-doc gfortran | fortran95-compiler gcj-jdk m4-doc make-doc www-browser ed
  diffutils-doc pinentry-doc libmail-box-perl
The following NEW packages will be installed:
  apache2-dev autoconf automake autopoint autotools-dev binutils binutils-common binutils-x86-64-linux-gnu
  build-essential cpp cpp-8 debhelper dh-autoreconf dh-strip-nondeterminism dirmngr dpkg-dev dwz fakeroot
  g++ g++-8 gcc gcc-8 gettext git git-man gnupg gnupg-l10n gnupg-utils gpg gpg-agent gpg-wks-client
  gpg-wks-server gpgconf gpgsm groff-base intltool-debian libalgorithm-diff-perl libalgorithm-diff-xs-perl
  libalgorithm-merge-perl libapr1-dev libaprutil1-dev libarchive-cpio-perl libarchive-zip-perl libasan5
  libassuan0 libatomic1 libbinutils libc-dev-bin libc6-dev libcc1-0 libcroco3 libcurl3-gnutls libdpkg-perl
  liberror-perl libexpat1-dev libfakeroot libfile-fcntllock-perl libfile-stripnondeterminism-perl
  libgcc-8-dev libgomp1 libisl19 libitm1 libksba8 libldap2-dev liblsan0 libltdl-dev libltdl7
  libmail-sendmail-perl libmpc3 libmpfr6 libmpx2 libnpth0 libpipeline1 libquadmath0 libsctp-dev libsctp1
  libsigsegv2 libstdc++-8-dev libsys-hostname-long-perl libtool libtsan0 libubsan1 libuchardet0
  linux-libc-dev m4 make man-db manpages manpages-dev patch pinentry-curses po-debconf uuid-dev
0 upgraded, 93 newly installed, 0 to remove and 4 not upgraded.
Need to get 75.9 MB of archives.
After this operation, 283 MB of additional disk space will be used.

SPF Out of Range

Posted: 21 Mar 2021 04:51 PM PDT

I've got my domain set up to use Gmail, and I consistently get a specific error when I email a specific person.

550 SPF: 209.85.***.*** is not allowed to send mail from example.com

When I test my record using this tool: https://toolbox.googleapps.com/apps/checkmx/

I get SPF ranges that do not include the IP address above, but do include many others.

And... I am lost. Any ideas?

Hosting VPS, should I buy a public IP range?

Posted: 21 Mar 2021 04:34 PM PDT

I want to start my own VPS hosting service. I noticed that the big companies in this space use public IPs for the clients' VPSs (the client directly accesses the VPS using a public IP assigned to that VPS).

Is this mandatory? IPv4 is expensive, even a /29. If not, is there a way to provide the simplest possible access to the VPS for the client (like a bastion/SSH gateway, or something similar)?

Having an issue applying CU23 to Exchange 2013

Posted: 21 Mar 2021 10:28 PM PDT

I am new to this forum. I need to update my Exchange 2013 because of the recent Microsoft vulnerability, and also because it needs to be updated for the migration to Office 365. My Exchange is 2013, version 15, build 1395.4. I have a strange issue: I download CU23 to the Exchange server, extract the files into a separate folder, and run setup.exe from the extracted folder. What happens next is puzzling. I get a message that it is being initialized, and then I get the screen attached below. I am not sure why, but this is what I keep getting: Next is greyed out and I can't do anything with the options I am presented. Some things I tried:

-I ran it using admin credentials

-I opened it as an administrator

-I have domain admin, schema admin and exchange organization admin permissions on the user that is opening this file

-I do have the latest prerequisites, as I get asked to repair or uninstall them when I try installing

This upgrade needs to happen for the migration project and for security reasons. Any help would be appreciated. Thanks.

[screenshot of the greyed-out setup screen]

Nginx redirect URLs with dynamic parameters

Posted: 21 Mar 2021 09:55 PM PDT

Customers on the internet need to access a web server that is reachable only over VPN; the domain is internal.domain, and I have no control over it. Internally, the main login page is caas.internal.domain:6643. To connect from the internet, users log in at login.external.domain:9943, which points to the nginx that routes the traffic to caas.internal.domain:6643 through the VPN. The response traffic from caas.internal.domain is redirected to login.external.domain with no problem, and all internal.domain URLs in HTML, JSON, and JS are replaced with external.domain. The only issue is the internal URL below, with query-string parameters, which is not translated.

https://caas.internal.domain:6643/caas/propConfig/myApp.json?callback=jQuery1234&excludeApp=Network+Center&_=5678

should be replaced with

https://login.external.domain:9943/caas/propConfig/myApp.json?callback=jQuery1234&excludeApp=Network+Center&_=5678

with 1234 and 5678 dynamic.

this is my config:

server {
    listen          9943 ssl;
    listen          443 ssl;
    server_name     external.domain;

    ssl_certificate     /etc/nginx/ssl/mytest-web.crt;
    ssl_certificate_key /etc/nginx/ssl/mytest-web.key;

    location / {
        resolver 8.8.8.8;
        proxy_set_header Accept-Encoding "";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        client_max_body_size 1g;
        proxy_pass https://internal.domain:6643$request_uri;

        proxy_redirect https://internal.domain:6643 https://external.domain:9943;

        #sub_filter_types text/css text/xml text/javascript application/json;
        sub_filter_types *;
        sub_filter_once off;

        sub_filter "internal.domain:6643" "external.domain:9943";
    }
}

How can I get the internal URLs replaced with the external URLs while keeping the same query-string variables?

Hope that makes sense

Many thanks

Incomplete axfrdns response

Posted: 21 Mar 2021 04:45 PM PDT

I'm trying to setup djbdns (tinydns, dnscache and axfrdns) in a private network. There is a secondary server running BIND that should be able to transfer the zone managed by tinydns (through axfrdns).

I get this error message in the axfrdns logs when I request a zone transfer:

axfrdns: fatal: unable to read data.cdb: format error  

But in fact, the data.cdb file is the one used by tinydns, and it has been generated by tinydns-data: no visible problem with it.

If I do the transfer request manually with dig -t AXFR my_zone.com @172.16.2.200, I get a response, but only with the first ~20 entries from the data file (in the same order: SOA, NS, MX, A, ...), ended with the following message:

;; communications error to 172.16.2.200#53: end of file  

Obviously, this is not the end of the file.

The full response (names have been slightly changed, for privacy purpose):

$ dig -t AXFR my_zone.com @172.16.2.200

; <<>> DiG 9.11.5-P4-5.1+deb10u2-Debian <<>> -t AXFR my_zone.com @172.16.2.200
;; global options: +cmd
my_zone.com.           2560    IN      SOA     srv-10.my_zone.com. hostmaster.my_zone.com. 1603886087 16384 2048 1048576 2560
my_zone.com.           259200  IN      NS      srv-10.my_zone.com.
srv-10.my_zone.com.    259200  IN      A       172.16.2.199
my_zone.com.           259200  IN      NS      vm01.my_zone.com.
vm01.my_zone.com.      259200  IN      A       172.16.2.201
my_zone.com.           259200  IN      NS      vm02.my_zone.com.
vm02.my_zone.com.      259200  IN      A       172.16.2.202
my_zone.com.           86400   IN      MX      0 my_zone.mail.protection.outlook.com.
aaa.my_zone.com.       86400   IN      A       78.40.125.202
bbb.my_zone.com.       86400   IN      A       193.186.8.180
ccc.my_zone.com.       86400   IN      A       78.40.125.106
ddd.my_zone.com.       86400   IN      A       78.40.125.202
eee.my_zone.com.       86400   IN      A       193.186.8.34
fff.my_zone.com.       86400   IN      A       35.204.32.245
ggg.my_zone.com.       86400   IN      A       192.168.10.81
hhh.my_zone.com.       86400   IN      A       13.69.156.71
iii.my_zone.com.       86400   IN      A       13.69.156.71
jjj.my_zone.com.       86400   IN      A       13.69.156.71
my_zone.com.           86400   IN      A       172.16.2.172
lll.my_zone.com.       86400   IN      A       172.16.2.186
mmm.my_zone.com.       86400   IN      A       172.16.2.187
nnn.my_zone.com.       86400   IN      A       172.16.2.170
ooo.my_zone.com.       86400   IN      A       172.16.2.171
;; communications error to 172.16.2.200#53: end of file

I tcpdumped the communication between the axfrdns client (172.16.2.170) and the server (172.16.2.200): I can't see a clear problem in it, except that at some point the server closes the connection (packet #22).

[tcpdump screenshot]

I thought it could be something in the data file, but swapping the entry where it stops responding (in the above example, ooo.my_zone.com) with the one after it (say ppp.my_zone.com) doesn't help: it now stops at ppp.my_zone.com (i.e. at the same position, not the same entry).

Each query response packet in the tcpdump contains multiple DNS responses.

How can I transfer my whole zone (~340 entries) with axfrdns? Or how to debug further?

Difference between pip install cudatoolkit=9.0 and downloading the installer from the CUDA website

Posted: 21 Mar 2021 10:03 PM PDT

I have a simple point of confusion: what is the difference between pip install cudatoolkit=9.0 and downloading the run file from https://developer.nvidia.com/cuda-90-download-archive and installing CUDA 9.0? Can anyone help?

Nginx Sorry, the page you are looking for is currently unavailable. Please try again later

Posted: 21 Mar 2021 05:05 PM PDT

I have an nginx server (1.10) and I am trying to access a certain location block for Let's Encrypt, but am having no luck. I have ModSecurity and PageSpeed both included in my compiled nginx.

OS is Ubuntu 16.04 LTS

The error message I get is:

An error occurred.
Sorry, the page you are looking for is currently unavailable.
Please try again later.

If you are the system administrator of this resource then you should check the error log for details.

Faithfully yours, nginx.

Here is my nginx location block:

location ~* /\.well-known {
    root /work-disk/www;
    try_files $uri $uri/ @other_node;  # @other_node is another location block that passes requests to my second server, which I cannot remove
    # access_log off;
    pagespeed off;
    ModSecurityEnabled off;
}

I have so far checked /var/log/nginx/error.log and /var/log/messages, both of which show nothing pointing to the cause: the former only logs process messages, and the latter is blank.

EDIT**

Just gave allow all; a try and same error.

How to debug why I'm losing connection to my Docker container when the VPN is up?

Posted: 21 Mar 2021 07:06 PM PDT

I'm using Docker 18.06.1-ce on Ubuntu 16.04, and I have a container exposing port 8012. When I do

curl "http://localhost:8012/"  

I get response

Error: Requested URL / not found  

But when the VPN is up (openfortivpn), after a long wait I get

curl: (56) Recv failure: Connection reset by peer  

When I use a wrong port number, I instantly get an error:

curl "http://localhost:8011/"
curl: (7) Failed to connect to localhost port 8011: Connection refused

How to debug and fix this? Obviously localhost pings, and traceroute to it is one hop.
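A hedged debugging sketch (the container name is a placeholder): a common cause of this symptom is the VPN pushing routes that overlap Docker's bridge subnet (172.17.0.0/16 by default), so comparing the two is a reasonable first step:

```shell
# Do any VPN-added routes cover the Docker subnet?
ip route

# What subnet is Docker's default bridge actually using?
docker network inspect bridge --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'

# Is port 8012 actually published, and is something still listening on it
# while the VPN is up? ("mycontainer" is a placeholder name.)
docker port mycontainer
sudo ss -ltnp | grep 8012
```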

postfix-Postfwd rate limit

Posted: 21 Mar 2021 06:00 PM PDT

I have 4000 mails in total for an outgoing recipient, and I want to send 2 mails every minute, so I have configured postfwd.

#/etc/postfwd.cf
id=RATE01 ; action=rate($$recipient_domain/1/60/450 4.7.1 Message delivery request rate limit exceeded)

and

smtpd_recipient_restrictions =
        permit_mynetworks,
        check_policy_service inet:127.0.0.1:10040,
        permit_sasl_authenticated,
        127.0.0.1:10040_time_limit = 3600,
        reject_unauth_destination

After 2 mails, all remaining mails are rejected and the mailq is empty. My requirement is to send 2 mails, hold the rest for 1 minute, then send again.

Enable and disable scheduled task with PowerShell and variable

Posted: 21 Mar 2021 04:01 PM PDT

If I enable and disable a scheduled task with PowerShell like this, it works:

Enable-ScheduledTask -TaskName "Name of my task"
Disable-ScheduledTask -TaskName "Name of my task"

But if I try this with a variable, I get an error.

$TaskName = "Name of my task"

Enable-ScheduledTask -TaskName "${TaskName}"
Disable-ScheduledTask -TaskName "${TaskName}"

How could I do this with the variable?

Edit:

This error occurs.

Enable-ScheduledTask : Das System kann die angegebene Datei nicht finden. (The system cannot find the specified file.)
+ Enable-ScheduledTask -TaskName "${TaskName}"
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : ObjectNotFound: (PS_ScheduledTask:Root/Microsoft/...S_ScheduledTask) [Enable-ScheduledTask], CimException
    + FullyQualifiedErrorId : HRESULT 0x80070002,Enable-ScheduledTask

Data Transfer Pauses on LSI 9271 RAID Controller

Posted: 21 Mar 2021 05:07 PM PDT

I have a server equipped with an LSI 9271-8i RAID controller, with 4 x 4 TB organized as RAID-5 and 1 x 8 TB as JBOD (which the controller calls RAID-0).

When I copy larger amounts of data (~1 TB), I observe the following: for the first few gigabytes the transfer speed is fine and limited by disk or network speed, usually ~100 MB/s. But after a while, the transfer completely pauses for approx. 20-30 seconds, then continues with the next approx. 1 GB. I copy a lot of files of between 10 MB and 500 MB each, and during the pause robocopy stays on one file and continues to the next after the pause. That way the overall transfer rate drops to ~20 MB/s.

During the pause, browsing the drives' files is not possible, and in one case I received a controller reset error message ("Controller encountered a fatal error and was reset"). Accessing controller data with the CLI tool is also not possible during the pause (the result is displayed once the pause is over).

I could observe this behaviour when copying

  • gigabit network to RAID-5 volume
  • gigabit network to JBOD volume
  • JBOD to RAID-5
  • RAID-5 to JBOD

Nothing going on looks suspicious to me: temperatures (disks, BBU) are within the valid range; the controller temperature seems a bit high, but is also within spec. No checks are running on the RAID, no rebuild in progress.

Any guesses?

Before I replace the controller, I want to try optimizing the thermal situation. Does this behaviour sound like a possible thermal issue?

I find it strange that the first 20-30 GB copy fine, and the pauses do not occur before that. If I leave the server alone for a while and retry, again a few GB copy fine. The only naive explanation I have is that the controller gets too hot. Why the controller and not the disks? The RAID-5 disks are 7200 rpm and stacked very closely, while the JBOD disk is a single 5400 rpm drive with a lot of air around it. It would be strange if both showed the same overheating symptoms.

"renaming" a drive in a ZFS pool?

Posted: 21 Mar 2021 08:05 PM PDT

So I'm playing around with ZFS on what will eventually be a file server, and I managed to get myself into this state:

  pool: storage
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://zfsonlinux.org/msg/ZFS-8000-9P
  scan: scrub in progress since Wed Oct 18 17:55:08 2017
    272G scanned out of 665G at 512M/s, 0h13m to go
    8K repaired, 40.92% done
config:

        NAME                                      STATE     READ WRITE CKSUM
        storage                                   DEGRADED     0     0     0
          raidz2-0                                DEGRADED     0     0     0
            ata-Hitachi_HUA723020ALA641_YFH0ZXHA  ONLINE       0     0     0
            ata-Hitachi_HUA723020ALA641_YFH2808B  ONLINE       0     0     0
            ata-Hitachi_HUA723020ALA641_YGGLAUJA  ONLINE       0     0     0
            ata-Hitachi_HUA723020ALA641_YGGN6B7A  ONLINE       0     0     0
            sdf                                   OFFLINE      0     0     2  (repairing)

errors: No known data errors

I got here by building a 5-drive raidz2 pool, then intentionally corrupting one of the disks, then removing/replacing it. I think at some point I added the disk using the sdX label instead of what I wanted, which is the "by-id" label.

The array works in this state, but if it were production I'd much rather have the naming consistent, and at some level I'm really just trying to understand it better.

What are possible ways to "rename" this last device? (Other than completely destroying the pool!)
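One well-known approach, sketched here under the assumption that the pool can briefly be taken offline (and after the in-progress scrub finishes): export the pool and re-import it while scanning /dev/disk/by-id, which makes ZFS record the persistent by-id names for all vdevs.

```shell
# Export the pool, then re-import it using persistent device names.
# Wait for the scrub/resilver to finish first.
zpool export storage
zpool import -d /dev/disk/by-id storage

# sdf should now appear under its ata-... by-id name.
zpool status storage
```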

Express.js fails to serve all static files on heroku

Posted: 21 Mar 2021 10:03 PM PDT

Hi there, I have a little problem with Node.js and Express on Heroku. I have a Node.js server that loads my index.html page and other resources:

process.env.PWD = process.cwd();
var express = require("express");
var fs = require("fs-extra");
var app = express();
app.use(express.static(process.env.PWD + "/Anos"));
app.use(process.env.PWD + "/Applications", express.static(process.env.PWD + "/Anos/Applications"));

app.get(process.env.PWD + "/Anos", function(req, res) {
});
console.log("hello bitche");
var server = app.listen(process.env.PORT || 8080, function() {
    console.log("Project Alexander listening on port 8080!");
});
global.io = require("socket.io").listen(server);
var login = require("./local_modules/login.js");
var saveFiles = require("./local_modules/fileFolderSave.js");
global.io.sockets.on("connection", function(client) {
    login(client, global.io, fs, process.env.PWD);
    saveFiles(client, global.io, fs, process.env.PWD);
});

This works fine on localhost, but when I deploy it to Heroku from Dropbox it only loads certain things, like the CSS and JS files, all of which are in subdirectories. The things that aren't loaded are the icon.png files from the folders in the Applications folder. Directory structure:

Anos/
  index.html
  css/
    Taskbar/
    Desktop/
    Infopane/
  js/
    Taskbar/
    Desktop/
    Infopane/
  Applications/
    App1/
      icon.png
    App2/ etc..
      icon.png
  userData/
  users/
  images/

The app is here (link); type the username and click login, then check the console for 404 errors. Let me know if you need anything else. Any ideas? What can I change to get rid of these errors and load everything properly? Thanks in advance!

ERRORS: [screenshot of the 404 errors]

Exim can't send email (unable to set gid or uid forcing real = effective)

Posted: 21 Mar 2021 08:05 PM PDT

I have a Debian Jessie machine with Exim4. I can't send email from PHP; the error given is: unable to set gid=33 or uid=0 (euid=0): forcing real = effective

I've tried searching for this problem, but nothing helps.

One suggested solution was to add trusted_user to the Exim configuration, but that doesn't work either.

Can someone help me?

Hardware/BIOS - IBM x3200 m2

Posted: 21 Mar 2021 04:01 PM PDT

When I turn on my machine, the display shows no output.

The fans are running fine.

The LEDs indicate no error; only two LEDs are blinking, the "Mini-BMC heartbeat LED" and the "Standby power LED". This article describes the error LEDs:

http://www.manualslib.com/manual/419746/Ibm-436854u-System-X3200-M2-5u-Mini-Tower-Server.html?page=170

There are three jumpers (JP1, JP3, JP6), which are in their default locations:

http://www.manualslib.com/manual/419746/Ibm-436854u-System-X3200-M2-5u-Mini-Tower-Server.html?page=31

Further analysis suggests that I have corrupted the motherboard BIOS, and it needs to be re-flashed with a new BIOS.

Could anyone please assist me in fixing this, and let me know if more information is required.

Thanks, Teja

Extract portion of text file between blank lines via batch

Posted: 21 Mar 2021 09:02 PM PDT

I need to be able to extract a portion of a text file that occurs between two blank lines. The text file looks something like this...

This is line 01 of the text file.
This is line 02 of the text file.
This is line 03 of the text file.

This is line 05 of the text file.
This is line 06 of the text file.
This is line 07 of the text file.
     > VALUE TO SEARCH <
This is line 09 of the text file.
This is line 10 of the text file.

This is line 12 of the text file.
This is line 13 of the text file.

So, I can search for and locate "> VALUE TO SEARCH <" within the text file, but then I need to grab everything up to the preceding blank line and everything down to the trailing blank line of that one section. Does that make sense? The number of lines per section varies, but there is always a single blank line between sections.

Can this be done via batch file? If so, how?
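The question asks for a batch file, but if a POSIX awk is available on the machine (e.g. via GnuWin32, Cygwin, or WSL), paragraph mode does exactly this; the file name and sample content below are illustrative:

```shell
# Sample input; in practice this would be your real text file.
printf 'line 01\nline 02\n\nline 05\n> VALUE TO SEARCH <\nline 09\n\nline 12\n' > section_demo.txt

# RS= (empty) puts awk in paragraph mode: records are blank-line-delimited
# sections, so printing the matching record prints the whole section.
awk -v RS= '/> VALUE TO SEARCH </' section_demo.txt
# prints:
# line 05
# > VALUE TO SEARCH <
# line 09
```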

Thanks in advance!

mysqltuner script to optimize my mysql database

Posted: 21 Mar 2021 05:05 PM PDT

I have a dedicated server running a PHP e-shop.

The server is built on Debian 7.8 64-bit, and has 12 cores and 64 GB RAM. I am trying to use the mysqltuner script to optimize my MySQL database.

The current query_cache_size is set to 256 MB. I am not sure whether I should keep increasing this number as suggested or not.

Also: A) Could you tell me why the query cache prunes per day (111209) are so high, and how can I reduce this? B) Why is the table cache hit rate only 56%, and how can I achieve a better cache hit rate?

MySQLTuner 1.4.0 - Major Hayden

Bug reports, feature requests, and downloads at http://mysqltuner.com/

Run with '--help' for additional options and output filtering

[OK] Logged in using credentials from debian maintenance account.

[OK] Currently running supported MySQL version 5.5.43-0+deb7u1-log

[OK] Operating on 64-bit architecture

-------- Storage Engine Statistics -------------------------------------------

[--] Status: +ARCHIVE +BLACKHOLE +CSV -FEDERATED +InnoDB +MRG_MYISAM

[--] Data in MyISAM tables: 34M (Tables: 157)

[--] Data in InnoDB tables: 422M (Tables: 4)

[--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)

[!!] Total fragmented tables: 14

-------- Security Recommendations -------------------------------------------

[OK] All database users have passwords assigned

-------- Performance Metrics -------------------------------------------------

[--] Up for: 12h 49m 55s (2M q [59.876 qps], 138K conn, TX: 6B, RX: 476M)

[--] Reads / Writes: 76% / 24%

[--] Total buffers: 32.3G global + 2.8M per thread (600 max threads)

[OK] Maximum possible memory usage: 33.9G (53% of installed RAM)

[OK] Slow queries: 0% (0/2M)

[OK] Highest usage of available connections: 1% (9/600)

[OK] Key buffer size / total MyISAM indexes: 16.0M/36.1M

[OK] Key buffer hit rate: 100.0% (1B cached / 17K reads)

[OK] Query cache efficiency: 70.2% (1M cached / 2M selects)

[!!] Query cache prunes per day: 111209

[OK] Sorts requiring temporary tables: 0% (223 temp sorts / 70K sorts)

[!!] Joins performed without indexes: 5298

[OK] Temporary tables created on disk: 18% (13K on disk / 72K total)

[OK] Thread cache hit rate: 99% (9 created / 138K connections)

[OK] Table cache hit rate: 56% (239 open / 423 opened)

[OK] Open file limit used: 0% (403/1M)

[OK] Table locks acquired immediately: 99% (1M immediate / 1M locks)

[OK] InnoDB buffer pool / data size: 32.0G/422.7M

[OK] InnoDB log waits: 0

-------- Recommendations -----------------------------------------------------

General recommendations:

Run OPTIMIZE TABLE to defragment tables for better performance
MySQL started within last 24 hours - recommendations may be inaccurate
Increasing the query_cache size over 128M may reduce performance
Adjust your join queries to always utilize indexes

Variables to adjust:

query_cache_size (> 256M) [see warning above]
join_buffer_size (> 256.0K, or always use indexes with joins)
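For reference, a hedged sketch of where those variables live in the server configuration; the values are illustrative starting points (note mysqltuner's own warning that a query cache over 128M may reduce performance), not tuned recommendations for this workload:

```ini
# /etc/mysql/conf.d/tuning.cnf -- illustrative values only
[mysqld]
query_cache_size = 256M   # mysqltuner warns that going above 128M may hurt
join_buffer_size = 1M     # better: add indexes to the joined columns instead
table_open_cache = 1024   # aimed at raising the 56% table cache hit rate
```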

SSH Client sporadically hangs for few seconds

Posted: 21 Mar 2021 07:06 PM PDT

On Ubuntu 14.04 LTS, when I use the SSH client it hangs sporadically for a few seconds every minute or so. No input is lost, just delayed.

SSH -V

OpenSSH_6.6.1p1 Ubuntu-2ubuntu2, OpenSSL 1.0.1f 6 Jan 2014  

.ssh/config

Host *    ServerAliveInterval 240  

/etc/ssh/ssh_config

Host *  SendEnv LANG LC_*  HashKnownHosts yes  GSSAPIAuthentication yes  GSSAPIDelegateCredentials no  

The same machine with Ubuntu 12.04 LTS and the same configuration works fine; another machine works fine with the same server as well.

SSH -V

OpenSSH_5.9p1 Debian-5ubuntu1.4, OpenSSL 1.0.1f 6 Jan 2014  

How do I change the root path in nginx

Posted: 21 Mar 2021 08:00 PM PDT

I have installed nginx on my Ubuntu 12.04, and I am not able to change the root directory for nginx.

This is the default nginx configuration file:

server {
        listen 80 default_server;
        listen [::]:80 default_server ipv6only=on;

        #root /usr/share/nginx/html;
        root /var/www
        index index.html index.htm;

        server_name localhost;

        location / {
                try_files $uri $uri/ =404;
        }
}

I tried changing root to /var/www/, but somehow it doesn't change. I restarted nginx, but the root directory stays the same.
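One thing worth checking: if the edited file fails to parse, nginx may keep running with its old configuration, and `nginx -t` will report the syntax error before a restart. For comparison, a minimal syntactically valid block; note that every directive, including `root`, ends with a semicolon:

```nginx
server {
    listen 80 default_server;

    root /var/www;              # each directive terminated with ';'
    index index.html index.htm;
    server_name localhost;

    location / {
        try_files $uri $uri/ =404;
    }
}
```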

SQL Server Failover Cluster on Azure (HA Strategy)

Posted: 21 Mar 2021 06:00 PM PDT

In the context of SQL Server 2012 Standard Edition, is it possible to make Azure Blob storage available to a SQL failover cluster directly as a shared SCSI resource on a Windows cluster? An Azure blob is effectively shared storage, and a newly added disk on an Azure VM does show up as a SCSI disk before it is converted into a volume (GPT).

Does SQL Server support writing to such a shared blob storage, directly, from two clustered instances?

Spamassassin filter mail with attachment

Posted: 21 Mar 2021 09:02 PM PDT

I would like to add a SpamAssassin rule based on whether a message coming from a particular sender has an attachment. Is there a way to filter messages from a certain sender so that mails without an attachment get a low score, but mails with attachments get labeled as spam?
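A hedged sketch of such a rule set for local.cf: the sender address is a placeholder, the score is arbitrary, and the `mimeheader` keyword requires the stock MIMEHeader plugin to be loaded:

```
# Hedged sketch; sender address and score are placeholders.
loadplugin Mail::SpamAssassin::Plugin::MIMEHeader

header     __FROM_WATCHED  From =~ /sender\@example\.com/i
mimeheader __HAS_ATTACH    Content-Disposition =~ /attachment/i

meta       WATCHED_SENDER_ATTACH  (__FROM_WATCHED && __HAS_ATTACH)
score      WATCHED_SENDER_ATTACH  5.0
describe   WATCHED_SENDER_ATTACH  Mail from watched sender with an attachment
```

The double-underscore subrules score nothing on their own; only the combined `meta` rule adds points, so mails from that sender without an attachment are unaffected.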

How to undo DROP Table?

Posted: 21 Mar 2021 06:22 PM PDT

I accidentally dropped all my tables. Can I restore them? I don't have a backup copy.
