Sunday, May 23, 2021

Recent Questions - Server Fault

What would be the best approach to upgrade docker from v1.13.1 to v19.03.11?

Posted: 23 May 2021 10:44 PM PDT

We use Docker as the container runtime in Kubernetes. We are currently on K8s v1.19.7 but still on the older Docker v1.13.1; somehow we never upgraded Docker along with Kubernetes. Now I am in a situation where I have to upgrade to docker-ce v19.03.11, which is listed as a dependency of K8s v1.19.7.

Can anyone help me and suggest a good approach? Can I upgrade directly to v19.03.11, or do I have to follow a certain upgrade path?

Yes, I will be doing a POC first and then implementing it in our actual environment.
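
For reference, a direct jump is generally done node by node rather than through intermediate Docker versions; a minimal per-node sketch, assuming CentOS/RHEL nodes managed with yum (the old 1.13.1 package usually comes from the distro repo, the new docker-ce from Docker's repo; <node> is a placeholder):

kubectl drain <node> --ignore-daemonsets --delete-local-data   # move workloads off the node
systemctl stop kubelet
yum remove -y docker docker-common                             # the old distro packages
yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce-19.03.11 docker-ce-cli-19.03.11 containerd.io
systemctl enable --now docker
systemctl start kubelet
kubectl uncordon <node>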

CentOS 7, HA PostgreSQL 12, Patroni with etcd v3.4

Posted: 23 May 2021 10:37 PM PDT

I followed this document, but I don't know how to enable the v2 API so that Patroni can work with etcd. Can anyone help? https://computingforgeeks.com/setup-etcd-cluster-on-centos-debian-ubuntu/
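
For what it's worth, etcd 3.4 disables the v2 API by default, so it has to be turned back on explicitly; a minimal sketch, with file locations assumed to follow the linked tutorial:

# either as a flag on the etcd command line in the systemd unit:
ExecStart=/usr/local/bin/etcd --enable-v2=true ...

# or as an environment variable (e.g. in /etc/etcd/etcd.conf):
ETCD_ENABLE_V2="true"

# then reload and restart:
systemctl daemon-reload && systemctl restart etcd

Alternatively, recent Patroni releases can talk to the v3 API directly via an etcd3 section in patroni.yml, which avoids the flag altogether.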

Error message BAD_GATEWAY on app engine

Posted: 23 May 2021 09:21 PM PDT

I'm receiving a 502 BAD_GATEWAY error on my App Engine app. Can anyone help me resolve this issue and indicate what I am doing wrong?

How to add Jenkins agent to Jenkins master via docker-compose for CI/CD

Posted: 23 May 2021 08:50 PM PDT

I'm a newbie in DevOps and trying to build a CI/CD deployment using Jenkins, but I got totally stuck when I started trying to write my own docker-compose file.

My goal is to build and test an app (from GitHub) on a Jenkins agent using a pipeline. I use docker-compose to build and run the master and agent Docker images. As far as I know, an SSH key is required to add a new node to Jenkins, but is there a way to add the SSH key via the compose file (in addition, generating the SSH key also asks for a passphrase)? Or is this only possible through the Jenkins GUI?

Also, is it possible to add Jenkins plugins (such as SSH Agent) via docker-compose to connect the agent to the master?

#docker-compose.yaml
---
version: "3.9"
services:
  master:
    image: jenkins/jenkins:lts
    container_name: jenkins_master
    user: core
    ports:
      - 8080:8080
      - 50000:50000
    volumes:
      - jenkinsdata:/var/jenkins_home/
    networks:
      - network
    restart: always
  agent:
    image: jenkins/ssh-agent:latest
    container_name: jenkins_agent
    env_file:
      - jenkins_agent.env
    depends_on:
      - master
    ports:
      - 22:22
    networks:
      - network
    restart: always
#  nexus:
#    image: sonatype/nexus3
#    container_name: nexus3
#    depends_on:
#      - agent
#      - master
#    ports:
#      - 8081:8081
#    networks:
#      - network
#    restart: always

volumes:
  jenkinsdata:
    external: true

networks:
  network:
    driver: bridge
#jenkins_agent.env
JENKINS_MASTER="http://localhost:8080"
JENKINS_NAME="agent"
JENKINS_USER=jenkins
JENKINS_PASS=jenkins
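
For what it's worth, the jenkins/ssh-agent image reads the agent's public key from the JENKINS_AGENT_SSH_PUBKEY environment variable, and ssh-keygen can generate a key pair with an empty passphrase, so no GUI interaction is needed; a sketch (file names are illustrative):

# generate a key pair with an empty passphrase
ssh-keygen -t rsa -b 4096 -N "" -f ./jenkins_agent_key

# hand the public key to the agent container, e.g. in jenkins_agent.env:
JENKINS_AGENT_SSH_PUBKEY="<contents of jenkins_agent_key.pub>"

Plugins are usually baked into a custom master image rather than added via compose; recent jenkins/jenkins:lts images ship jenkins-plugin-cli for exactly that:

FROM jenkins/jenkins:lts
RUN jenkins-plugin-cli --plugins ssh-agent ssh-slaves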

Exception Value: relation "django_session" does not exist

Posted: 23 May 2021 08:41 PM PDT

I found a Django project and failed to get it running in a Docker container. Here is what I did:

  1. git clone https://github.com/NAL-i5K/django-blast.git
  2. In requirements.txt the following dependency had to be updated:
    • psycopg2==2.8.6

I have the following Dockerfile:

FROM python:2
ENV PYTHONUNBUFFERED=1
RUN apt-get update && apt-get install -y postgresql-client
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
RUN mkdir -p /var/log/django
RUN mkdir -p /var/log/i5k

For docker-compose.yml I use:

version: "3"    services:    db:      image: postgres      volumes:        - ./data/db:/var/lib/postgresql/data        - ./scripts/install-extensions.sql:/docker-entrypoint-initdb.d/install-extensions.sql        environment:        - POSTGRES_DB=postgres        - POSTGRES_USER=postgres        - POSTGRES_PASSWORD=postgres      web:      build: .      command: python manage.py runserver 0.0.0.0:8000      volumes:        - .:/code      ports:        - "8000:8000"      depends_on:        - db      links:        - db      
$ cat scripts/install-extensions.sql
CREATE EXTENSION hstore;

I had to change:

$ vim i5k/settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'postgres',
        'USER': 'postgres',
        'PASSWORD': 'postgres',
        'HOST': 'db',
        'PORT': '5432',
    }
}

Next, I ran docker-compose up --build and opened http://localhost:8000/admin/ in the browser, which caused:

Environment:

Request Method: GET
Request URL: http://localhost:8000/admin/

Django Version: 1.8.12
Python Version: 2.7.18
Installed Applications:
('django.contrib.auth',
 'django.contrib.contenttypes',
 'django.contrib.sessions',
 'django.contrib.sites',
 'django.contrib.messages',
 'django.contrib.staticfiles',
 'django.contrib.postgres',
 'axes',
 'rest_framework',
 'rest_framework_swagger',
 'pipeline',
 'app',
 'blast',
 'migrate_account',
 'suit',
 'filebrowser',
 'django.contrib.admin',
 'django.contrib.admindocs',
 'social.apps.django_app.default',
 'captcha',
 'dashboard',
 'proxy',
 'hmmer',
 'clustal',
 'webapollo_sso',
 'drupal_sso')
Installed Middleware:
('django.middleware.common.CommonMiddleware',
 'django.contrib.sessions.middleware.SessionMiddleware',
 'django.middleware.csrf.CsrfViewMiddleware',
 'django.contrib.auth.middleware.AuthenticationMiddleware',
 'django.contrib.messages.middleware.MessageMiddleware',
 'axes.middleware.FailedLoginMiddleware',
 'app.middleware.SocialAuthExceptionMiddleware')

Traceback:
File "/usr/local/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
  132.                     response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python2.7/site-packages/django/contrib/admin/sites.py" in wrapper
  254.                 return self.admin_view(view, cacheable)(*args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/django/utils/decorators.py" in _wrapped_view
  110.                     response = view_func(request, *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/django/views/decorators/cache.py" in _wrapped_view_func
  57.         response = view_func(request, *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/django/contrib/admin/sites.py" in inner
  222.             if not self.has_permission(request):
File "/usr/local/lib/python2.7/site-packages/django/contrib/admin/sites.py" in has_permission
  162.         return request.user.is_active and request.user.is_staff
File "/usr/local/lib/python2.7/site-packages/django/utils/functional.py" in inner
  225.             self._setup()
File "/usr/local/lib/python2.7/site-packages/django/utils/functional.py" in _setup
  376.         self._wrapped = self._setupfunc()
File "/usr/local/lib/python2.7/site-packages/django/contrib/auth/middleware.py" in <lambda>
  22.         request.user = SimpleLazyObject(lambda: get_user(request))
File "/usr/local/lib/python2.7/site-packages/django/contrib/auth/middleware.py" in get_user
  10.         request._cached_user = auth.get_user(request)
File "/usr/local/lib/python2.7/site-packages/django/contrib/auth/__init__.py" in get_user
  167.         user_id = _get_user_session_key(request)
File "/usr/local/lib/python2.7/site-packages/django/contrib/auth/__init__.py" in _get_user_session_key
  59.     return get_user_model()._meta.pk.to_python(request.session[SESSION_KEY])
File "/usr/local/lib/python2.7/site-packages/django/contrib/sessions/backends/base.py" in __getitem__
  48.         return self._session[key]
File "/usr/local/lib/python2.7/site-packages/django/contrib/sessions/backends/base.py" in _get_session
  181.                 self._session_cache = self.load()
File "/usr/local/lib/python2.7/site-packages/django/contrib/sessions/backends/db.py" in load
  21.                 expire_date__gt=timezone.now()
File "/usr/local/lib/python2.7/site-packages/django/db/models/manager.py" in manager_method
  127.                 return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/django/db/models/query.py" in get
  328.         num = len(clone)
File "/usr/local/lib/python2.7/site-packages/django/db/models/query.py" in __len__
  144.         self._fetch_all()
File "/usr/local/lib/python2.7/site-packages/django/db/models/query.py" in _fetch_all
  965.             self._result_cache = list(self.iterator())
File "/usr/local/lib/python2.7/site-packages/django/db/models/query.py" in iterator
  238.         results = compiler.execute_sql()
File "/usr/local/lib/python2.7/site-packages/django/db/models/sql/compiler.py" in execute_sql
  840.             cursor.execute(sql, params)
File "/usr/local/lib/python2.7/site-packages/django/db/backends/utils.py" in execute
  79.             return super(CursorDebugWrapper, self).execute(sql, params)
File "/usr/local/lib/python2.7/site-packages/django/db/backends/utils.py" in execute
  64.                 return self.cursor.execute(sql, params)
File "/usr/local/lib/python2.7/site-packages/django/db/utils.py" in __exit__
  98.                 six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/usr/local/lib/python2.7/site-packages/django/db/backends/utils.py" in execute
  64.                 return self.cursor.execute(sql, params)

Exception Type: ProgrammingError at /admin/
Exception Value: relation "django_session" does not exist
LINE 1: ...ession_data", "django_session"."expire_date" FROM "django_se...

What did I miss?

Thank you in advance
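
For what it's worth, a missing django_session relation usually just means the initial migrations never ran against the fresh postgres container; a minimal sketch, assuming the compose stack above:

# create the django_session table (and the rest of the schema) in the db container
docker-compose run web python manage.py migrate

# this Django 1.8 project may still rely on the older syncdb-style setup:
docker-compose run web python manage.py syncdb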

Apache HTTP Server is running without an executable! How is this possible?

Posted: 23 May 2021 08:08 PM PDT

When I logged in to an EC2 instance created by someone else to host Apache HTTP Server, I cannot run any of the common Apache commands, but Apache is running:

$ ps -aux | grep apache
jimble   22250  0.0  0.0  12944   864 pts/0    S+   02:28   0:00 grep --color=auto apache

How is this possible? Where is the Apache executable?

$ apache2
The program 'apache2' is currently not installed. You can install it by typing:
sudo apt install apache2-bin

$ httpd
No command 'httpd' found, did you mean:
 Command 'http' from package 'httpie' (universe)
 Command 'xttpd' from package 'xtide' (universe)
httpd: command not found

$ cat /etc/os-release
NAME="Ubuntu"
VERSION="16.04.4 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.4 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
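
A few checks that can reveal what is actually serving HTTP, sketched for a typical Ubuntu box (<pid> is whatever the first command reports):

sudo ss -tlnp | grep ':80'                 # which process owns port 80
sudo lsof -i :80                           # the same information via lsof
ps aux | grep -E 'nginx|httpd|apache'      # the server may not be Apache at all
ls -l /proc/<pid>/exe                      # resolve the binary behind the PID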

How to secure rsyslog logging into MySQL

Posted: 23 May 2021 10:14 PM PDT

I'm looking for a way to encrypt the traffic between our hosts and the logging host in our Debian universe. rsyslog uses the ommysql module and the server is already configured to accept the users' requests only by SSL (GRANT USAGE ON *.* TO testssl@loghost REQUIRE SSL;).

I already tried to create a my.cnf for rsyslog; I provide it to the ommysql module via the parameter MySQLConfig.File=... in /etc/rsyslog.d/mysql.conf.

The content of the my.cnf:

[client]

ssl = 1

(I first tried ssl-mode=REQUIRED, but that failed completely; apparently the client library on my current Debian buster still doesn't support this option.)

Is there anything else I don't see?
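
In case it is useful, a sketch of a fuller [client] section that also pins the CA and asks the client to verify the server certificate; the paths are placeholders for your own PKI:

[client]
ssl = 1
ssl-ca   = /etc/rsyslog.d/tls/ca.pem
ssl-cert = /etc/rsyslog.d/tls/client-cert.pem
ssl-key  = /etc/rsyslog.d/tls/client-key.pem
ssl-verify-server-cert = 1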

How to connect to an ESXi server for a list of VMs by command line with vmware.exe?

Posted: 23 May 2021 08:53 PM PDT

I know I can connect directly to an ESXi VM from the command line with vmware.exe and vmplayer.exe, using -H HOST -U "root" -P "P@55W0RD" "[datastore1] VM_001\vm001.vmx",

but not to a server: in VMware Workstation, "Connect to Server" (File/Connect_to_server; Ctrl+L) asks for an address and credentials.

Q. How can I bypass the point-and-click (Ctrl+L) dialog and connect from the command line?
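
A possible alternative, sketched: the vmrun utility bundled with VMware Workstation can query an ESXi host non-interactively (host and credentials below mirror the question and are placeholders):

vmrun -T esx -h https://HOST/sdk -u root -p "P@55W0RD" listRegisteredVM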

Need help blocking access to the ODBC Data Source Administrator for normal users in Windows AD via Group Policy

Posted: 23 May 2021 05:41 PM PDT

Users should not be able to add or modify DSNs in the ODBC Data Source Administrator on our Windows clients, so we need to block access to the ODBC Data Source Administrator for normal users in Windows AD via Group Policy.

Please share your suggestions for doing this.

Got django.db.utils.OperationalError: could not connect to server: Connection refused

Posted: 23 May 2021 04:52 PM PDT

I found a Django project and failed to get it running in a Docker container. Here is what I did:

  1. git clone https://github.com/NAL-i5K/django-blast.git
  2. In requirements.txt the following dependency had to be updated:
    • psycopg2==2.8.6

I have the following Dockerfile:

FROM python:2
ENV PYTHONUNBUFFERED=1
RUN apt-get update && apt-get install -y postgresql-client
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
RUN mkdir -p /var/log/django
RUN mkdir -p /var/log/i5k

For docker-compose.yml I use:

version: "3"    services:    db:      image: postgres      volumes:        - ./data/db:/var/lib/postgresql/data        - ./scripts/install-extensions.sql:/docker-entrypoint-initdb.d/install-extensions.sql        environment:        - POSTGRES_DB=postgres        - POSTGRES_USER=postgres        - POSTGRES_PASSWORD=postgres      web:      build: .      command: python manage.py runserver 0.0.0.0:8000      volumes:        - .:/code      ports:        - "8000:8000"      depends_on:        - db      links:        - db      
$ cat scripts/install-extensions.sql
CREATE EXTENSION hstore;

I had to change:

$ vim i5k/settings_prod.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'postgres',
        'USER': 'postgres',
        'PASSWORD': 'postgres',
        'HOST': 'db',
        'PORT': '5432',
    }
}

Next, I ran docker-compose up --build

web_1  | Performing system checks...
web_1  |
web_1  | System check identified no issues (0 silenced).
web_1  | Unhandled exception in thread started by <function wrapper at 0x7f8a9733a6d0>
web_1  | Traceback (most recent call last):
web_1  |   File "/usr/local/lib/python2.7/site-packages/django/utils/autoreload.py", line 229, in wrapper
web_1  |     fn(*args, **kwargs)
web_1  |   File "/usr/local/lib/python2.7/site-packages/django/core/management/commands/runserver.py", line 116, in inner_run
web_1  |     self.check_migrations()
web_1  |   File "/usr/local/lib/python2.7/site-packages/django/core/management/commands/runserver.py", line 168, in check_migrations
web_1  |     executor = MigrationExecutor(connections[DEFAULT_DB_ALIAS])
web_1  |   File "/usr/local/lib/python2.7/site-packages/django/db/migrations/executor.py", line 19, in __init__
web_1  |     self.loader = MigrationLoader(self.connection)
web_1  |   File "/usr/local/lib/python2.7/site-packages/django/db/migrations/loader.py", line 47, in __init__
web_1  |     self.build_graph()
web_1  |   File "/usr/local/lib/python2.7/site-packages/django/db/migrations/loader.py", line 191, in build_graph
web_1  |     self.applied_migrations = recorder.applied_migrations()
web_1  |   File "/usr/local/lib/python2.7/site-packages/django/db/migrations/recorder.py", line 59, in applied_migrations
web_1  |     self.ensure_schema()
web_1  |   File "/usr/local/lib/python2.7/site-packages/django/db/migrations/recorder.py", line 49, in ensure_schema
web_1  |     if self.Migration._meta.db_table in self.connection.introspection.table_names(self.connection.cursor()):
web_1  |   File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 162, in cursor
web_1  |     cursor = self.make_debug_cursor(self._cursor())
web_1  |   File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 135, in _cursor
web_1  |     self.ensure_connection()
web_1  |   File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 130, in ensure_connection
web_1  |     self.connect()
web_1  |   File "/usr/local/lib/python2.7/site-packages/django/db/utils.py", line 98, in __exit__
web_1  |     six.reraise(dj_exc_type, dj_exc_value, traceback)
web_1  |   File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 130, in ensure_connection
web_1  |     self.connect()
web_1  |   File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 119, in connect
web_1  |     self.connection = self.get_new_connection(conn_params)
web_1  |   File "/usr/local/lib/python2.7/site-packages/django/db/backends/postgresql_psycopg2/base.py", line 176, in get_new_connection
web_1  |     connection = Database.connect(**conn_params)
web_1  |   File "/usr/local/lib/python2.7/site-packages/psycopg2/__init__.py", line 127, in connect
web_1  |     conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
web_1  | django.db.utils.OperationalError: could not connect to server: Connection refused
web_1  |    Is the server running on host "localhost" (127.0.0.1) and accepting
web_1  |    TCP/IP connections on port 5432?
web_1  | could not connect to server: Cannot assign requested address
web_1  |    Is the server running on host "localhost" (::1) and accepting
web_1  |    TCP/IP connections on port 5432?

What did I miss?

Thank you in advance
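
For what it's worth, the traceback shows the app connecting to "localhost" even though settings_prod.py says 'HOST': 'db', which suggests a different settings module is being loaded; a quick check, assuming the compose stack above:

# show which database settings the running configuration actually uses
docker-compose run web python manage.py diffsettings | grep -A5 -i databases

# if manage.py defaults to i5k.settings, either edit that file instead, or
# point Django at the prod module in docker-compose.yml:
#   environment:
#     - DJANGO_SETTINGS_MODULE=i5k.settings_prod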

Requests to WAN IP are served by LAN interface on OpenWrt

Posted: 23 May 2021 03:44 PM PDT

Here's the situation. I have OpenWrt installation with multiple zones:

  • WAN - let it be 1.1.1.1
  • LAN0 - 192.168.0.0/24
  • LAN1 - 192.168.1.0/24

After forwarding some ports from WAN to LAN0, I can reach them from another host on the Internet (e.g. 2.2.2.2), but I can't reach the ports from LAN1.
After some research I discovered that, for some reason, any packets I send from LAN0/1 to 1.1.1.1 are served by the corresponding LAN interface rather than by WAN; e.g., when I try to connect to 1.1.1.1:80 from LAN1, the packets are not forwarded to LAN0 but to the router itself (it opens its web interface).

Yes, I could duplicate every forwarding rule, but I really want to avoid that, as there are already 10 of them.
Is there any way to properly configure the firewall (maybe raw iptables, but without DNAT, which only accepts one interface as an argument?) or to make packets from LAN0/1 to the public IP be received by the WAN interface?
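
What is being described is usually solved with NAT reflection (hairpin NAT) rather than by re-routing; a sketch, using the addresses from the question and an assumed internal target 192.168.0.10:80:

# OpenWrt's firewall can generate the reflection rules per redirect:
# in /etc/config/firewall, inside the existing 'redirect' section
option reflection '1'

# the raw iptables equivalent for one forwarded port and one LAN:
iptables -t nat -A PREROUTING  -s 192.168.0.0/24 -d 1.1.1.1 -p tcp --dport 80 -j DNAT --to-destination 192.168.0.10
iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -d 192.168.0.10 -p tcp --dport 80 -j MASQUERADE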

Thank you in advance

IP Configuration for a 10G Direct Link via NIC?

Posted: 23 May 2021 03:57 PM PDT

I'm interested in setting up my 10G NIC, but am not 100% sure how to (OS: Manjaro KDE).

The driver is detected:

[manjaro manjaro]# inxi -n
Network:   Device-1: MYRICOM Myri-10G Dual-Protocol NIC driver: myri10ge
           IF: enp6s0 state: down mac: 00:60:dd:45:7c:7c
           Device-2: MYRICOM Myri-10G Dual-Protocol NIC driver: myri10ge

, but no IP address is assigned:

[manjaro manjaro]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp10s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 24:4b:fe:df:6c:28 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.223/24 brd 192.168.1.255 scope global noprefixroute enp10s0
       valid_lft forever preferred_lft forever
    inet6 fe80::c010:ba07:bfc1:8235/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: enp6s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 9000 qdisc mq state DOWN group default qlen 1000
    link/ether 00:60:dd:45:7c:7c brd ff:ff:ff:ff:ff:ff
4: enp7s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 9000 qdisc mq state DOWN group default qlen 1000
    link/ether 00:60:dd:45:7c:7d brd ff:ff:ff:ff:ff:ff
5: wlp9s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether f6:8a:64:4a:5b:29 brd ff:ff:ff:ff:ff:ff permaddr 34:cf:f6:e3:e3:4c

How can I assign an IP address and complete the direct 10G connection between one computer and the other?

Should I use the ip utility or netplan, and can you show how you would configure it?
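
A minimal sketch for a point-to-point 10G link using the ip utility; the 10.10.10.0/24 addresses are arbitrary (pick any unused subnet), and <iface> stands for the peer's 10G interface:

# on this machine:
sudo ip link set enp6s0 up
sudo ip addr add 10.10.10.1/24 dev enp6s0

# on the other computer:
sudo ip link set <iface> up
sudo ip addr add 10.10.10.2/24 dev <iface>

# test:
ping 10.10.10.2

To make it persistent on Manjaro KDE, NetworkManager (rather than netplan, which is Ubuntu's tool) would be the usual route, e.g. nmcli con add type ethernet ifname enp6s0 con-name 10g ip4 10.10.10.1/24.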

Strangely enough, the myri10ge driver package can't be found on the other Manjaro computer though.

System Specs:

System:    Host: DawnSkyFoundry Kernel: 5.10.36-2-MANJARO x86_64 bits: 64 compiler: gcc v: 10.2.0
           Desktop: N/A Distro: Manjaro Linux base: Arch Linux
Machine:   Type: Server System: Dell product: PowerEdge R7425 v: N/A serial: <superuser required>
           Mobo: Dell model: 02MJ3T v: X30 serial: <superuser required> UEFI: Dell v: 1.15.0
           date: 09/11/2020
CPU:       Info: 2x 32-Core (4-Die) model: AMD EPYC 7601 bits: 64 type: MT MCP MCM SMP arch: Zen
           rev: 2 cache: L2: 32 MiB
           flags: avx avx2 lm nx pae sse sse2 sse3 sse4_1 sse4_2 sse4a ssse3 svm bogomips: 560967
           Speed: 1198 MHz min/max: 1200/2200 MHz boost: enabled Core speeds (MHz): 1: 1198 2: 1197
           3: 2687 4: 2664 5: 2693 6: 2682 7: 1197 8: 2693 9: 2675 10: 2687 11: 2685 12: 2656
           13: 2684 14: 2680 15: 2682 16: 2085 17: 2686 18: 2603 19: 2692 20: 2691 21: 2476 22: 1197
           23: 2690 24: 2690 25: 2688 26: 2693 27: 2689 28: 2687 29: 2692 30: 2693 31: 2686 32: 2692
           33: 2692 34: 2691 35: 2233 36: 2711 37: 2690 38: 2683 39: 2692 40: 2680 41: 2689 42: 2598
           43: 2691 44: 2675 45: 2692 46: 2081 47: 2735 48: 2692 49: 2689 50: 2684 51: 2693 52: 2680
           53: 2685 54: 2694 55: 2673 56: 2663 57: 2688 58: 2689 59: 2624 60: 2689 61: 2695 62: 1198
           63: 2606 64: 2694 65: 2715 66: 2691 67: 2693 68: 2665 69: 2685 70: 2685 71: 2273 72: 2694
           73: 2678 74: 2692 75: 2692 76: 2691 77: 2622 78: 1197 79: 2694 80: 2692 81: 2688 82: 2691
           83: 2687 84: 2694 85: 2677 86: 2688 87: 2689 88: 2694 89: 2694 90: 1198 91: 1198 92: 2688
           93: 2662 94: 2287 95: 2693 96: 2692 97: 2631 98: 2690 99: 2687 100: 1197 101: 2686
           102: 2677 103: 2686 104: 2629 105: 2040 106: 2692 107: 2692 108: 2687 109: 2694 110: 2662
           111: 2684 112: 2633 113: 2691 114: 2692 115: 2693 116: 1850 117: 2691 118: 1197 119: 2692
           120: 2687 121: 2671 122: 2692 123: 2675 124: 2706 125: 2673 126: 2667 127: 2686 128: 2627
Graphics:  Device-1: Matrox Systems Integrated Matrox G200eW3 Graphics driver: mgag200 v: kernel
           bus-ID: 03:00.0
           Display: server: X.Org 1.20.11 driver: loaded: modesetting resolution: 1600x900~60Hz
           OpenGL: renderer: llvmpipe (LLVM 11.1.0 256 bits) v: 4.5 Mesa 21.0.3 direct render: Yes
Audio:     Message: No device data found.
           Sound Server-1: JACK v: 0.125.0 running: no
           Sound Server-2: PulseAudio v: 14.2 running: yes
           Sound Server-3: PipeWire v: 0.3.28 running: yes
Network:   Device-1: Intel I350 Gigabit Network vendor: Dell 4P I350-t rNDC driver: igb v: kernel
           port: N/A bus-ID: 01:00.0
           IF: eno1 state: down mac: b8:ca:3a:64:a4:b8
           Device-2: Intel I350 Gigabit Network vendor: Dell 4P I350-t rNDC driver: igb v: kernel
           port: N/A bus-ID: 01:00.1
           IF: eno2 state: up speed: 1000 Mbps duplex: full mac: b8:ca:3a:64:a4:b9
           Device-3: Intel I350 Gigabit Network vendor: Dell 4P I350-t rNDC driver: igb v: kernel
           port: N/A bus-ID: 01:00.2
           IF: eno3 state: down mac: b8:ca:3a:64:a4:ba
           Device-4: Intel I350 Gigabit Network vendor: Dell 4P I350-t rNDC driver: igb v: kernel
           port: N/A bus-ID: 01:00.3
           IF: eno4 state: down mac: b8:ca:3a:64:a4:bb
           Device-5: MYRICOM Myri-10G Dual-Protocol NIC driver: N/A port: N/A bus-ID: 44:00.0
Drives:    Local Storage: total: 92.6 TiB used: 8.6 TiB (9.3%)
           ID-1: /dev/nvme0n1 vendor: Western Digital model: WDS100T2B0C-00PXH0 size: 931.51 GiB
           temp: 29.9 C
Partition: ID-1: / size: 512 GiB used: 16.75 GiB (3.3%) fs: btrfs dev: /dev/nvme0n1p1
           ID-2: /home size: 512 GiB used: 16.75 GiB (3.3%) fs: btrfs dev: /dev/nvme0n1p1
Swap:      ID-1: swap-1 type: partition size: 419.01 GiB used: 5.1 GiB (1.2%) dev: /dev/nvme0n1p2
Sensors:   System Temperatures: cpu: 40.5 C mobo: 0 C
           Fan Speeds (RPM): N/A
Info:      Processes: 3927 Uptime: 1d 2h 36m Memory: 503.63 GiB used: 65.46 GiB (13.0%)
           Init: systemd Compilers: gcc: 10.2.0 Packages: 1224 Shell: Bash v: 5.1.8 inxi: 3.3.04

Why can't I arping the direct broadcast of the LAN to populate the arp table?

Posted: 23 May 2021 06:22 PM PDT

I'm trying to write a simple network discovery for my linux 2.6 router.

I'm testing arping, which is built into busybox. I can't work out why sending a single request to the directed broadcast is not enough.

root@router:# arping -h
BusyBox v1.32.1 (2021-03-26 15:21:46 CET) multi-call binary.

Usage: arping [-fqbDUA] [-c CNT] [-w TIMEOUT] [-I IFACE] [-s SRC_IP] DST_IP

Send ARP requests/replies

        -f              Quit on first ARP reply
        -q              Quiet
        -b              Keep broadcasting, don't go unicast
        -D              Exit with 1 if DST_IP replies
        -U              Unsolicited ARP mode, update your neighbors
        -A              ARP answer mode, update your neighbors
        -c N            Stop after sending N ARP requests
        -w TIMEOUT      Seconds to wait for ARP reply
        -I IFACE        Interface to use (default eth0)
        -s SRC_IP       Sender IP address
        DST_IP          Target IP address

So at this point I try:

root@router:# arping -c1 -w1 -I br1 -s 10.10.11.5 10.10.11.255
ARPING 10.10.11.255 from 10.10.11.5 br1
Sent 1 probe(s) (0 broadcast(s))
Received 0 response(s) (0 request(s), 0 broadcast(s))

What am I missing here? I would expect all the devices within the LAN to answer the ARP request, but that doesn't seem to happen. The only alternative I'm left with is to send one arping per possible IP, but that is extremely memory-consuming considering the small device.

So, in a nutshell: how can I make a single arping command ask a full subnet to respond, so that my ARP table can be considered a reliable source of information when it comes to network mapping?
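
For what it's worth, an ARP request asks "who has IP X?", and no host owns the broadcast address, so a single probe of 10.10.11.255 cannot make the whole subnet answer; a per-address sweep is the usual fallback. A small sketch, assuming a /24 on br1 and busybox ash:

for i in $(seq 1 254); do
    arping -q -c1 -w1 -I br1 10.10.11.$i &
done
wait

# the answers land in the kernel's neighbor table:
ip neigh show dev br1     # or: cat /proc/net/arp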

Thanks!

Deploying on hosts with Ansible based on a YAML file

Posted: 23 May 2021 06:22 PM PDT

Developers are going to provide a YAML file with hosts in a particular order (every deployment can differ, depending on needs), and each field in the YAML file will carry instructions, for example which yum packages to install. I'm going to take this information and run Ansible against every host with the specific flags given in the YAML file. What is the best practice for iterating through the YAML file? Should I execute ansible-playbook against every field, or should I use the lookup function in Ansible?
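
A sketch of consuming such a file inside a single playbook run, assuming a structure like deploy.yml -> [{host: web1, packages: [httpd]}, ...] (the file name and keys are illustrative):

- hosts: localhost
  gather_facts: false
  tasks:
    - name: load the developers' deployment file
      set_fact:
        deploy_plan: "{{ lookup('file', 'deploy.yml') | from_yaml }}"

    - name: build an in-memory inventory in the given order
      add_host:
        name: "{{ item.host }}"
        groups: deploy_targets
        packages: "{{ item.packages }}"
      loop: "{{ deploy_plan }}"

- hosts: deploy_targets
  serial: 1                  # keep the file's ordering
  tasks:
    - name: install the requested packages
      yum:
        name: "{{ packages }}"
        state: present

This keeps one ansible-playbook invocation per deployment instead of one per field.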

Jenkins server throws 403 while accessing REST API or using Jenkins Java client to create job

Posted: 23 May 2021 05:02 PM PDT

I am trying to create a job on Jenkins using the Java client (https://github.com/jenkinsci/java-client-api) by calling .createJob(String jobName, String configXml). However, the Jenkins server throws a 403 Forbidden error.

Sample Code :

HttpClientBuilder builder = HttpClientBuilder.create();
JenkinsHttpClient client = new JenkinsHttpClient(uri, builder, "XXX", "XXX");
JenkinsServer jenkins = new JenkinsServer(client);
String sourceXML = readFile("src/main/resources/config.xml");
System.out.println(String.format("Installed Jenkins Version >> %s", jenkins.getVersion().getLiteralVersion())); // works and gives the correct result
jenkins.createJob("test-nov1", sourceXML);

The error I am getting:

Exception in thread "main" org.apache.http.client.HttpResponseException: status code: 403, reason phrase: Forbidden
    at com.offbytwo.jenkins.client.validator.HttpResponseValidator.validateResponse(HttpResponseValidator.java:11)
    at com.offbytwo.jenkins.client.JenkinsHttpClient.post_xml(JenkinsHttpClient.java:375)
    at com.offbytwo.jenkins.JenkinsServer.createJob(JenkinsServer.java:389)
    at com.offbytwo.jenkins.JenkinsServer.createJob(JenkinsServer.java:359)
    at com.hcl.OffByTwoJenkins.main(OffByTwoJenkins.java:31)

Jenkins server security: when I select "Any user can do anything", job creation is successful. However, when I select "Logged-in users can do anything", I get the above error, even though I am sending the correct user and password for a user with all permissions to create jobs (I am able to create a job using the Jenkins web UI). What permission or setting change is required to achieve this?
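
A likely culprit, for what it's worth: GETs such as getVersion() succeed while the createJob POST fails, which is the classic signature of Jenkins' CSRF protection (crumbs). The java-client-api has a createJob overload with a crumb flag, and authenticating with an API token instead of the password also commonly resolves it; a sketch (the token value is a placeholder):

JenkinsHttpClient client = new JenkinsHttpClient(uri, "XXX", "<API token from the user's configure page>");
JenkinsServer jenkins = new JenkinsServer(client);
jenkins.createJob("test-nov1", sourceXML, true);  // crumbFlag=true requests and sends a CSRF crumb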

Thanks

missing '=' etcd when defining service file

Posted: 23 May 2021 10:30 PM PDT

I'm struggling while following Kelsey Hightower's "Kubernetes the Hard Way" tutorial. I've gone off script, because I'm trying to bootstrap k8s on a local server.

I've got to the point where I'm bootstrapping etcd; however, when I'm creating the service I'm getting an error:

Failed to start etcd.service: Unit is not loaded properly: Bad message.
See system logs and 'systemctl status etcd.service' for details.

Checking the logs, I get:

Jun 21 20:16:49 controller-0 systemd[1]: [/etc/systemd/system/etcd.service:9] Missing '='.
Jun 21 20:16:49 controller-0 systemd[1]: [/etc/systemd/system/etcd.service:9] Missing '='.
Jun 21 20:17:25 controller-0 systemd[1]: [/etc/systemd/system/etcd.service:9] Missing '='.

Here's the etcd.service file:

[Unit]
Description=etcd service
Documentation=https://github.com/coreos/etcd

[Service]
User=etcd
Type=notify
ExecStart=/usr/local/bin/etcd \\
 --name ${ETCD_NAME} \\
 --data-dir /var/lib/etcd \\
 --initial-advertise-peer-urls http://${ETCD_HOST_IP}:2380 \\
 --listen-peer-urls http://${ETCD_HOST_IP}:2380 \\
 --listen-client-urls http://${ETCD_HOST_IP}:2379,http://127.0.0.1:2379 \\
 --advertise-client-urls http://${ETCD_HOST_IP}:2379 \\
 --initial-cluster-token etcd-cluster-1 \\
 --initial-cluster etcd-1=http://192.168.0.7:2380 \\
 --initial-cluster-state new \\
 --heartbeat-interval 1000 \\
 --election-timeout 5000
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
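
For what it's worth, the doubled backslashes are the likely cause: the tutorial's \\ is escaping for a shell heredoc, so the file on disk must contain single backslashes; otherwise systemd treats each continuation as a new, malformed directive, which matches the "Missing '='" on line 9. The ${ETCD_NAME}/${ETCD_HOST_IP} variables also have to come from somewhere; a sketch of the relevant part (the EnvironmentFile path is illustrative):

[Service]
EnvironmentFile=/etc/etcd/etcd.env
ExecStart=/usr/local/bin/etcd \
 --name ${ETCD_NAME} \
 --data-dir /var/lib/etcd \
 --initial-advertise-peer-urls http://${ETCD_HOST_IP}:2380 \
 ...

followed by systemctl daemon-reload before retrying the start.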

Linux: recover data from xfs

Posted: 23 May 2021 09:07 PM PDT

I have a broken XFS filesystem on one of my HDDs. I ran xfs_repair, which was not able to find a secondary superblock to repair the filesystem. Therefore, I am not able to mount the HDD/partition.

I tried to make a backup to an NTFS HDD via ddrescue to an image file. Unfortunately, I discovered that my target drive is 4 KiB smaller than the source drive, which is why I was not able to complete the backup. ddrescue showed that there were actually no bad blocks or sectors on my HDD, which lets me assume that my data is still there but I cannot access it.

I am doing this from a live Ubuntu stick, because I was not able to see/mount the HDD from Windows, even with some tools for this use case (mounting XFS in Windows).

Is there any way to access/recover my data from the incomplete image or directly from my HDD?

Edit: my output from xfs_repair /dev/sdc1

Phase 1 - find and verify superblock...
couldn't verify primary superblock - not enough secondary superblocks with matching geometry !!!

attempting to find secondary superblock...

[then plenty of these lines]

found candidate secondary superblock...
unable to verify superblock, continuing...

[then it finishes with this]

Sorry, could not find valid secondary superblock
Exiting now.
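
One possible salvage path, sketched: image the partition to a file on any filesystem with enough free space (avoiding the too-small target disk), then run file carving against the image; mount points are placeholders:

ddrescue /dev/sdc1 /mnt/bigdisk/sdc1.img /mnt/bigdisk/sdc1.map
photorec /mnt/bigdisk/sdc1.img        # file carving, from the testdisk package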

Non-domain joined clients unable to query DNS

Posted: 23 May 2021 05:02 PM PDT

I recently added a domain controller with DNS to our domain on a Windows Server 2016 Standard box. I changed the DNS Server from the scope options in DHCP to point to the new domain controller. On our Windows workstations joined to the domain everything works fine, and I confirmed that their DNS server was pointing to the new domain controller. They're able to resolve local and external DNS names.

Non-domain joined clients on the network don't seem to be able to resolve any DNS names. For example, on my iPhone the DNS server is pointing to the new domain controller with DNS, but I'm unable to resolve any internal or external DNS name. I can ping the DNS server from the client. If I change the DNS server back to the old DNS server everything works fine.

Again, Windows workstations joined to the domain are behaving exactly as they should, but non-domain-joined clients can't resolve any DNS names.

How could I go about debugging the issue?
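
A few checks worth running, sketched (<dc-ip> is the new DC's address); non-domain clients failing while domain members work often points at recursion or forwarder settings rather than the zone data:

nslookup serverfault.com <dc-ip>      # from a failing client, ask the new server directly

# on the DNS server itself, in PowerShell:
Get-DnsServerRecursion                # recursion must be enabled for external names
Get-DnsServerForwarder                # are forwarders (or root hints) configured?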

Nginx reverse proxy with dynamic port forwarding

Posted: 23 May 2021 10:02 PM PDT

I'm setting up a reverse proxy on Nginx. I need it to listen to multiple ports. I then would like to hit the exact same port on the backend server. Like this: http://frontendserver:9000 -> http://backendserver:9000.

Here's what I thought would work

## server configuration
server {

    listen 9000 ;
    listen 9001 ;
    listen 9002 ;
    listen 9003 ;
    listen 9004 ;
    listen 9005 ;
    listen 9006 ;
    listen 9007 ;
    listen 9008 ;
    listen 9009 ;

    server_name frontendserver;

    if ($http_x_forwarded_proto = '') {
        set $http_x_forwarded_proto  $scheme;
    }

    location / {
            proxy_read_timeout  900;
            proxy_pass_header   Server;
            proxy_cookie_path ~*^/.* /;
            proxy_pass         http://backendserver:$server_port/;
            proxy_set_header    X-Forwarded-Port  $server_port;
            proxy_set_header    X-Forwarded-Proto $http_x_forwarded_proto;
            proxy_set_header    Host              $http_host;
            proxy_set_header    X-Forwarded-For   $proxy_add_x_forwarded_for;
    }
}

But it gives me a 502 Bad Gateway error. Any clues why this is, or is there another way of doing this that would work as described above?

If I change:

proxy_pass         http://backendserver:$server_port/;  

to

proxy_pass         http://backendserver:9000/;  

it works just fine, but that of course defeats the purpose...
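
A likely explanation, for what it's worth: once proxy_pass contains a variable ($server_port), nginx no longer resolves "backendserver" at configuration load time; it resolves the name at request time and needs a resolver directive (which does not consult /etc/hosts), otherwise the request fails with 502. A sketch of the missing piece, with the DNS address as a placeholder:

server {
    resolver 192.168.0.1;          # a DNS server that can resolve backendserver

    location / {
        proxy_pass http://backendserver:$server_port;
    }
}

Alternatively, an upstream block whose name matches the hostname keeps resolution at startup and avoids the runtime DNS lookup entirely.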

How to connect to a vm on esxi by command line?

Posted: 23 May 2021 05:29 PM PDT

I want to connect to a vm running on an ESXi host by command line.

With VMware Workstation, I can use this command to start and view a local VM:

vmware.exe -X -q <path>\MyVM.vmx  

With ESXi, I managed to connect to the host:

VpxClient.exe -i -s <adress> -u <user> -p <password>  

But how can I connect directly to a VM running on that host?
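
A sketch of one non-interactive route, using the vmrun utility that ships with VMware Workstation (address, credentials and the datastore path mirror the question and are placeholders):

vmrun -T esx -h https://<adress>/sdk -u <user> -p <password> start "[datastore1] MyVM/MyVM.vmx"

This powers the VM on; actually viewing its console still goes through a client (vSphere Client or VMware Remote Console) rather than vmware.exe.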

zimbra export messages in tar.gz by ID

Posted: 23 May 2021 08:09 PM PDT

I need to delete old messages from a Zimbra account.

With the command:

zmmailbox -z -m mail@domain.com s -t message -l 999 "before:1/1/14" |awk '{ if (NR!=1) {print}}'| grep mess | awk '{ print $2 "," }' | tr -d '\n'  

I can retrieve the message IDs, and I can delete a message by ID:

zmmailbox -z -m mail@domain.com deleteMessage $ID  

But between these two commands, I would like to save the messages in a .tar.gz.
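
A sketch using Zimbra's REST export through zmmailbox, which can dump everything matching the same search as a tgz in one step (the output path is illustrative):

zmmailbox -z -m mail@domain.com getRestURL "//?fmt=tgz&query=before:1/1/14" > /backup/mail-before-2014.tgz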

pure-ftpd setup of pure-authd on Ubuntu (debian)

Posted: 23 May 2021 10:02 PM PDT

I am trying to set up pure-ftpd on Ubuntu 12.04 and have it work with pure-authd. I have created a user and group and gotten the authd daemon running. I also have a script ready that will do the custom auth for the daemon. However, I can't see how to get pure-ftpd to use the authd authentication method. I see that the config options are set in /etc/pure-ftpd/conf as individual files whose contents are the values of the settings. I have searched extensively and have not found out how to get pure-ftpd to use authd instead of one of the other auth options. Could anyone point me to what flag or setting to use to get this to work?


UPDATE

Here is what I have done so far to get things much further than before

  • Created an ftp user and an ftp group

sudo useradd -s /bin/bash -M -G ftpupload ftpupload

  • Ensured the following files were in /etc/pure-ftpd/conf
    • CreateHomeDir - contents: "yes"
    • ExtAuth - contents: [the path to the auth script]
  • Made sure ftp user group could read/write to /var/run/pure-ftpd to enable it to make socket and pid file
  • Created symlink from /etc/pure-ftpd/conf/ExtAuth to /etc/pure-ftpd/auth/ExtAuth

sudo ln -s /etc/pure-ftpd/conf/ExtAuth /etc/pure-ftpd/auth/ExtAuth

  • Removed other symlinks in /etc/pure-ftpd/auth
  • Sample pure-authd call:

sudo pure-authd -p /var/run/pure-ftpd/pure-authd.pid -u 1012 -g 1013 -s /var/run/pure-ftpd/pure-ftpd.sock -r /usr/bin/auth_script.sh

  • Sample /etc/init.d/pure-ftpd restart call

sudo /etc/init.d/pure-ftpd restart

Restarting ftp server: Running: /usr/sbin/pure-ftpd -l extauth:/var/run/pure-ftpd/pure-ftpd.sock -j -d -p 30000:35000 -E -u 1000 -O clf:/var/log/pure-ftpd/transfer.log -8 UTF-8 -B
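
For what it's worth, the restart output above shows the wiring working: on Debian/Ubuntu the pure-ftpd-wrapper turns the contents of /etc/pure-ftpd/conf/ExtAuth into the -l extauth:<socket> option. A sketch of the two pieces that have to agree (paths as in the question):

# ExtAuth must contain the authd socket path:
echo '/var/run/pure-ftpd/pure-ftpd.sock' | sudo tee /etc/pure-ftpd/conf/ExtAuth

# pure-authd must create its socket at that same path before the restart:
sudo pure-authd -s /var/run/pure-ftpd/pure-ftpd.sock -r /usr/bin/auth_script.sh &
sudo /etc/init.d/pure-ftpd restart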

Synology NAS - rsync messing up versioning / deduplication

Posted: 23 May 2021 04:04 PM PDT

Is it true that the default rsync implementation in Synology DSM 4.3 is not able to handle "vast" amounts of data and could mess up versioning/deduplication? Could any of the variables (see detailed info below) make this so much more difficult?

Edit: I'm looking for nothing more than an answer as to whether the above claims are nonsense or could be true.

Detailed info:

At work, we've got a Synology NAS running at the office. This NAS is used by a few designers, who work directly from it. They have running projects which consist of high-resolution stock photos, large PSDs, PDFs and what not. We have a folder of approx. 430GB which only contains the currently running projects. This folder is supposed to be backed up to a datacenter weekly over our internet connection.

All of our IT is handled by a third party, which claims that our backup is reaching a size ("100GB+") at which the default rsync implementation in DSM (4.3) is unable to handle the vast amount of data going to the online backup (on one of their machines in their datacenter). They say the backup amounts to about 10TB of data because rsync has problems with "versioning / de-duplication" (retention: 30 days) and goes haywire.

Because of this, they suggest using a "professional online backup service", which cranks up our costs per GB to the online backup significantly.

Large lag on mysql replication (Relay_Log_Pos and Exec_Master_Log_Pos does not increase)

Posted: 23 May 2021 09:07 PM PDT

Today my two slaves (one MySQL 5.1 and the second MariaDB 5.5; the master is MySQL 5.1) started lagging. Similar situations are quite common, with lag rising to even 10000 seconds, because the slaves have a worse hardware configuration than the master, but now I'm quite stressed. The lag on both servers is still rising, and at this point it has reached 25K seconds behind the master. So I started investigating what is going wrong. Going through the MySQL logs on master and slave gave me nothing. The servers are on CentOS 5 and MariaDB is on CentOS 6.

This is output from MariaDB slave status:

MariaDB [(none)]> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: masterserevr
                  Master_User: slaveuser
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysqld-bin.006778
          Read_Master_Log_Pos: 401041447
               Relay_Log_File: relay-bin.020343
                Relay_Log_Pos: 14867924
        Relay_Master_Log_File: mysqld-bin.006777
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB: ses,phar
           Replicate_Do_Table:
       Replicate_Ignore_Table: portal.aaa_jm_tmp,portal.newsletter
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 14867639
              Relay_Log_Space: 1474785535
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 26484
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 1
1 row in set (0.00 sec)

From a few outputs I noticed that Relay_Log_Pos and Exec_Master_Log_Pos do not increase. I tried to restart the slave processes, but that changed nothing and the lag still increases. The next step was to see on which query replication has stopped.

Using mysqlbinlog

  mysqlbinlog relay-bin.020343 > /root/RelayLogQueries1.txt  

In RelayLogQueries1.txt I found position 14867924:

# at 14867924
#130927 10:03:21 server id 1  end_log_pos 14867709      Query   thread_id=160780134     exec_time=3     error_code=0
SET TIMESTAMP=1380269001/*!*/;
/*!\C utf8 *//*!*/;
SET @@session.character_set_client=33,@@session.collation_connection=33,@@session.collation_server=9/*!*/;
BEGIN
/*!*/;
# at 14867994
# at 14868101
# at 14868669
# at 14869417
# at 14869873
# at 14870663
# at 14871697
# at 14872055
# at 14872845
# at 14873747
# at 14874591
# at 14875387
# at 14876265
# at 14877039
# at 14877985
# at 14878299
# at 14879091
# at 14879853
# at 14880255
# at 14881029
.
.
.
# at 117398235
# at 117399219
# at 117400203
# at 117401191
# at 117402179
# at 117403167
# at 117403969
# at 117404957
# at 117405945
# at 117406933
# at 117407921
# at 117408909
# at 117409897
# at 117410885
# at 117411873
# at 117412861
# at 117413849
# at 117414837
# at 117415785
# at 117416797
# at 117417839
# at 117418595
# at 117419585
#130927 10:03:21 server id 1  end_log_pos 14867816      Table_map: `test`.`pac_list` mapped to number 216570427
#130927 10:03:21 server id 1  end_log_pos 14868384      Update_rows: table id 216570427
#130927 10:03:21 server id 1  end_log_pos 14869132      Update_rows: table id 216570427
#130927 10:03:21 server id 1  end_log_pos 14869588      Update_rows: table id 216570427
#130927 10:03:21 server id 1  end_log_pos 14870378      Update_rows: table id 216570427
#130927 10:03:21 server id 1  end_log_pos 14871412      Update_rows: table id 216570427
#130927 10:03:21 server id 1  end_log_pos 14871770      Update_rows: table id 216570427
#130927 10:03:21 server id 1  end_log_pos 14872560      Update_rows: table id 216570427
#130927 10:03:21 server id 1  end_log_pos 14873462      Update_rows: table id 216570427
.
.
.

Now I'm confused, because first I have no idea how to interpret this log (is it OK or wrong), and second I don't know how to fix this.

Sometimes, when I get replication errors, this trick was helpful:

  SET GLOBAL SQL_SLAVE_SKIP_COUNTER=1; START SLAVE;  

But now I have no errors, and both the IO and SQL slave processes are running.

Could setting SQL_SLAVE_SKIP_COUNTER=1 bring replication back?

What can I do to diagnose this problem further and fix it without setting up the replica from scratch? (That last scenario is the one I want to avoid.)

EDIT: The lag started when one of the developers accidentally copied one of the tables, pac_list (200MB with 600000 records), and named the copy test.pac_list (it has a dot in the table name). He wanted to create the copy in the database test, but he did something wrong and created the table test.pac_list in the same database as the original table. After he found out his mistake, he dropped the table test.pac_list and created the table pac_list in a new database. Could this be the reason for such a big lag?
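
Two quick checks, sketched, that usually show whether the SQL thread is stuck on one huge transaction (which the copy/drop of a 600000-row table would produce, since row-based Update_rows events replay row by row):

-- on the slave: the replication SQL thread shows up as "system user"
SHOW PROCESSLIST;

-- long-running row operations and lock waits are visible here
SHOW ENGINE INNODB STATUS\G

Note that SQL_SLAVE_SKIP_COUNTER only skips events and is meant for errors, not for lag, so it would more likely lose data than help here.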

Tuning Garbage Collection in Apache Tomcat

Posted: 23 May 2021 06:07 PM PDT

I have the following parameters in tomcat6.conf

JAVA_OPTS="-server -Xmx6144m -Xms3072m -XX:+UseConcMarkSweepGC -XX:MaxGCPauseMillis=999 -XX:ReservedCodeCacheSize=128m -XX:MaxPermSize=256m -Djava.awt.headless=true -Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Djava.rmi.server.hostname= -Djava.rmi.server.useLocalHostname=false"   

but at peak time I regularly see the following:

ERROR memory-watcher - used 87.73865761212004%, max 6403194880 reaping at priority CRITICAL  

Is there any parameter I can use to tune Tomcat performance or GC?
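
Before changing collector settings, it usually pays to capture what the collector is actually doing; a sketch of standard HotSpot flags for the CMS collector already configured above (the log path is illustrative):

JAVA_OPTS="$JAVA_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/tomcat6/gc.log -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70"

The last two flags make CMS start concurrent collections at a fixed old-generation occupancy (70% here) instead of its adaptive estimate, which often smooths out peak-time memory pressure.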

Packets not entering FORWARD chain

Posted: 23 May 2021 06:07 PM PDT

First of all, this is not an everyday routing issue. The setup is fairly complex, so let me lay it out first.

I have a router with, let's keep it simple, 3 interfaces: eth0, eth1, eth2. eth2 is used for PPPoE. eth0 & eth1 have the clients.

Okay, so far so good, all basic. Now here comes the tricky part: I create a bunch of macvlan interfaces on top of eth0 and eth1; the naming scheme is:

g1eth0 : g1 for gate1, eth0 indicates on what physical interface its laying on  

I have one of these for every uplink I provide, let's say 3: 1 PPPoE and 2 VPNs. These are then merged into bridges named after the gate.

So far we got these interfaces:

<iface>:<description>
eth0   : our 1st subnet is here
eth1   : our 2nd subnet is here
eth2   : our pppoe is hooked here
ppp0   : our pppoe uplink
tap0   : our vpn1 uplink
tap1   : our vpn2 uplink
g1eth0 : advertised gate over uplink1 on clients in eth0
g1eth1 : advertised gate over uplink1 on clients in eth1
g2eth0 : advertised gate over uplink2 on clients in eth0
g3eth1 : advertised gate over uplink3 on clients in eth1
gate1  : bridge containing g1eth0 and g1eth1
gate2  : bridge containing g2eth0
gate3  : bridge containing g3eth1

As I said, a bunch of interfaces... Notice that an uplink can be advertised over several physical interfaces; that's why we have the bridges.

Alright now lets take a look at the routing rules:

32763:  from all fwmark 0x3 lookup 202
32764:  from all fwmark 0x2 lookup 201
32765:  from all fwmark 0x1 lookup 200

Okay, this is not so spectacular; obviously, it only checks what fwmark a packet has and pushes it to the corresponding table.

The routing tables:

200: default via 1.2.3.4 dev ppp0 src 4.3.2.1

201: default via 5.6.7.8 dev tap0 src 8.7.6.5

202: default via 9.10.11.12 dev tap1 src 12.11.10.9

Okay, the IPs are just to fill the gaps; you should be familiar with the syntax ;)

Right now we have the routing tables, routing rules and the interfaces - but we're still missing the packet marking, so this is done in iptables:

iptables -t mangle -A PREROUTING -i gate1 -s 10.0.0.0/16 -j MARK --set-xmark 0x1/0xffffffff
iptables -t mangle -A PREROUTING -i gate2 -s 10.0.0.0/16 -j MARK --set-xmark 0x2/0xffffffff
iptables -t mangle -A PREROUTING -i gate3 -s 10.0.0.0/16 -j MARK --set-xmark 0x3/0xffffffff

For explanation: we mark all packets coming in on our bridges with the right value for the routing rules.

Now I also had to do some tweaks to arp_announce and arp_ignore so that the right MAC is advertised for the g*eth* interfaces. This post is getting rather full, so I will skip describing that; both are set to 2.

The filter:FORWARD chain is empty for now; it just logs the packets it gets.

Now NAT'ing: iptables -t nat -A POSTROUTING -s 10.0.0.0/16 -j MASQUERADE.

All default policies for iptables are ACCEPT.

tcpdump shows that the incoming packets are directed to the right MAC according to the g*eth* interfaces.

mangle:PREROUTING counters for the rules increment as they should.

ip_forward verified to be 1.

filter:FORWARD counters are NOT incrementing.

I have LOG rules in every chain, but the packets seem to vanish once past mangle:PREROUTING.

Any ideas why?

Addition I: I placed a TRACE rule in PREROUTING as the comment suggested me, ironically it doesn't show any of the pings my clients are running.

Addition II: After some playing around with the rules, tracing, promisc, ... I noticed that I see the data coming in on ethX but not on gateX. It seems the bridge interface is just dropping it; no wonder the kernel can't get it into FORWARD.

Why does my bridge-interface do this?

bridge name     bridge id               STP enabled     interfaces
gate1           8000.dead000200b5       no              g1eth0
                                                        g1eth1
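
A couple of checks, sketched, for packets that disappear between a bridge port and the routing path:

cat /proc/sys/net/bridge/bridge-nf-call-iptables   # bridged frames only traverse iptables when this is 1
brctl showstp gate1                                # a port left 'disabled' or 'blocking' silently drops frames
ebtables -t broute -L                              # BROUTE rules can divert frames away from bridging/routing

Note that macvlan interfaces do their own MAC filtering before the bridge ever sees a frame, so frames visible on ethX can legitimately never reach gateX if the destination MAC doesn't match the macvlan's address.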

large number of InnoDB tables plus SHOW TABLE STATUS

Posted: 23 May 2021 08:09 PM PDT

We've got several hundred InnoDB tables in a database, and we use phpMyAdmin to manage them. Unfortunately, phpMyAdmin does a SHOW TABLE STATUS query whenever the list of tables is shown, and this seems to dig into each InnoDB table to get an approximate row count.

This seems to lock up the entire database, which subsequently means all other queries to this (busy) database queue up until the database hits the max number of users.

  1. Can SHOW TABLE STATUS be sped up in a reasonable manner?
  2. Can phpMyAdmin be easily modified to not do a full SHOW TABLE STATUS query, or at least not lock the entire database at once for it?
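
Regarding (1), a commonly cited mitigation, sketched: InnoDB re-samples index statistics on every metadata query such as SHOW TABLE STATUS unless told otherwise:

-- disable statistics re-sampling on metadata queries (MySQL 5.x)
SET GLOBAL innodb_stats_on_metadata = 0;

-- or persist it in my.cnf under [mysqld]:
-- innodb_stats_on_metadata = 0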

SIOCSIFFLAGS: Resource or Device Busy

Posted: 23 May 2021 04:04 PM PDT

I have a Dell PowerEdge 2350 with dual NICs built into the motherboard. eth0 works fine. Setting up an IP on eth1 results in the error: "SIOCSIFFLAGS: Resource or Device Busy".

I have two identical 2350s and get the same error on eth1 for both servers. The server OS is CentOS.

Help greatly appreciated.
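
Two quick checks, sketched; with this error the kernel log usually names the real cause (missing firmware, an IRQ conflict, or another driver claiming the port):

dmesg | tail -n 20           # look at the messages logged right after ifup eth1
lspci | grep -i ethernet     # confirm both onboard NICs are detected on the bus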

AppCmd backup for IIS7 gives access denied error (hresult:80070005)

Posted: 23 May 2021 08:50 PM PDT

I have a script I have been using on another Windows 2008 to delete the IIS7 backup of configs and create a fresh one:

SET DEST=C:\Backup\Web\IIS7
SET BACKUPNAME=IIS7-CONFIGS
%windir%\system32\inetsrv\appcmd.exe delete backup "%BACKUPNAME%"
%windir%\system32\inetsrv\appcmd.exe add backup "%BACKUPNAME%"
robocopy %windir%\system32\inetsrv\backup "%DEST%" /MIR /R:6 /W:10 /ZB

But on a new Windows 2008 server, I get an access-denied error on the delete:

ERROR ( hresult:80070005, message:Command execution failed.  Access is denied.   )  

I have UAC turned off and pretty much copied all the settings from the old server (including user role being an admin). What am I missing?
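
A quick check, sketched: appcmd writes under %windir%\system32\inetsrv, so the token actually running the script needs to be elevated and the backup folder writable:

:: shows whether the current token is elevated
whoami /groups | findstr /i "High Mandatory"
:: compare the backup folder's ACLs against the old server
icacls %windir%\system32\inetsrv\backup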

What methods are available for updating a non-Internet-connected VMWare ESXi host?

Posted: 23 May 2021 09:34 PM PDT

I have a stand-alone installation of VMWare vSphere Essentials, with a vCenter Server and 3 ESXi 4.0 host servers. The environment is intended to remain as a stand-alone network, with the exception that I can "float" a workstation or server between the 'Net and the VMWare network for patches and maintenance.

With other installations, where the Internet is available, I've used the vSphere Host Update utility to connect to VMWare and then apply the patches to the ESXi hosts.

My problem is that this utility does not seem to function if it cannot connect to both VMware and the ESXi host at the same time: the scan-for-patches function will not scan the server without first connecting to VMware's site to sync its repository. Even if I sync it, disconnect from the 'Net, and connect to the VMware network, it still won't scan hosts for required patches -- it prompts for syncing with VMware, and if you click No to syncing, the scan does not occur.

Does anyone know of other options for updating the ESXi hosts in some automated fashion? I believe I can manually pull down required patches and apply them, but this will not scale well, and in the future I'm sure I'll want something a bit more scalable.
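
One offline route that existed for vSphere 4, sketched: download the patch bundles as ZIPs on the Internet-connected machine, move them across, and apply them per host with the vihostupdate utility from the vSphere CLI (the bundle name is a placeholder):

vihostupdate.pl --server <esxi-host> --username root --install --bundle ESXi400-201006001.zip

Put the host in maintenance mode first and reboot afterwards; a small script looping over the hosts makes this reasonably scalable.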
