Thursday, March 31, 2022

Latest OSCHINA Community Zone Articles


When Business Concurrency Falls Short, the Data Warehouse's CN Can Help

Posted: 30 Mar 2022 06:52 PM PDT

Abstract: CN is short for Coordinator Node. It is the component of DWS that interacts most closely with users and is also a very important internal component: it provides the external application interface, optimizes the global execution plan, distributes execution plans to Datanodes, and aggregates and processes execution results. This article is shared from the Huawei Cloud community post 《CN与业务并发度的关系-业务并发度不够?CN来帮忙》, by 闻鲜生. 1. What does the CN do? CN is short for...

A Complete Guide to Nebula Graph Index Principles and Usage

Posted: 30 Mar 2022 10:46 PM PDT

> This article was first published on the **[Nebula Graph Community WeChat account](https://nebula-website-cn.oss-cn-hangzhou.aliyuncs.com/nebula-blog/WeChatOffical.png)** ![A Complete Guide to Nebula Graph Index Principles and Usage](https://www-cdn.nebula-graph.com.cn/nebula-blog/use-better-know-something-about-nebula-graph-index.jpg) `index not foun...

TiDB at Ctrip | Optimization Practices for a Real-Time Tagging Platform

Posted: 29 Mar 2022 07:28 PM PDT

Business challenges: In its international business, Ctrip faces many markets, complex and diverse products and operations, many distribution channels, and high traffic-acquisition costs, so the business and products need more fine-grained management and optimization to meet marketing and operational needs, lower overall costs, and improve operational efficiency and conversion rates. To this end, Ctrip developed a dynamic real-time tagging platform for its international business (hereafter, CDP). Ctrip's data comes from a wide range of sources and takes many forms...

TiDB HTAP Meets an NEV Maker: Real-Time Data Analytics in a Direct-Sales Model

Posted: 31 Mar 2022 02:40 AM PDT

Whether on the stock market or the car market, new energy vehicles have long taken center stage. Behind each hit new model is the combined product and technical strength of the EV startups, plus a clever mix of digital marketing and direct sales. As early as 2021, the EV startups had largely completed their "primitive accumulation" of sales. According to each brand's official figures, the top three startups, represented by NIO, XPeng, and Li Auto, each sold more than 90,000 vehicles for the year, getting ever closer to...

TDengine Helps Domestic Chips Power the "MengXin Solver" for Round-the-Clock Geological Hazard Monitoring

Posted: 28 Mar 2022 02:30 AM PDT

Editor's note: TDengine serves as the core database of MengXin's deformation-safety monitoring and solving system, helping MengXin solve the huge challenges of efficiently recording stations' raw data and of storing and using massive data such as the solved deformation data. This article shares concrete experience from this project on database selection, setup, and real-world results. Company profile: Wuhan MengXin Technology is a company specializing in highly integrated chip design and high-performance indoor and outdoor positioning...

Demystifying the Implementation of the Snapshot Feature in a Distributed Database

Posted: 27 Mar 2022 06:54 PM PDT

Usually, after a mis-operation on a database, we need to roll the database back to a previous version. A common method is to use logs for database recovery. Although powerful and effective, this method is costly in time and other resources. The database table snapshot feature, by contrast, can create snapshots of tables at certain points in time, protect the data at the snapshot point from modification, and quickly restore the snapshot-point data when needed, thereby achieving efficient...

Feature Update! A Detailed Look at DistSQL's Cluster Governance Capabilities

Posted: 25 Mar 2022 11:13 PM PDT

Jiang Longtao, middleware R&D engineer at SphereEx and Apache ShardingSphere Committer, is mainly responsible for innovation and development of DistSQL and security-related features. Lan Chengxiang, middleware R&D engineer at SphereEx and Apache ShardingSphere Committer, currently focuses on the design and development of DistSQL. Background: Since the Apache ShardingSphere 5.0.0-Beta release, DistSQL has quickly won over its users...

Recent Questions - Stack Overflow



Google Analytics: Disparity between pageviews per user from Realtime vs Final reports?

Posted: 31 Mar 2022 07:38 AM PDT

Our article was posted on Twitter by an account with a large following, and in the Realtime overview we were getting 5 pageviews per user.

Here is a screenshot I took: https://i.ibb.co/DCFNj5v/Inked-Weird-analytics-LI.jpg

And this happened all day, fluctuating between 3x and 5x as many pageviews as users for that article.

At first I thought it was a viewbot trying to exaggerate the views, but today I looked at the final analytics for that page from yesterday, and it shows around 2100 pageviews from 2000 users. So that article now has 1.13 pageviews per user, which seems like a much more normal value.

So why is there such a big discrepancy between the Realtime and the final analytics? Is this a normal thing that happens?

Why is my getter returning null even though the object it is supposed to return exists?

Posted: 31 Mar 2022 07:38 AM PDT

My problem is simple, but I can't find what is wrong.

For context, I'm trying to make a "game", so I can exercise a little. Using Java 17 and Eclipse IDE.

I thought that storing the player in the class that controls the game would be a nice idea, so I created a private variable for the player. Since it is private, I made a simple getter:

public Player getPlayer() {
    return this.player;
}

The problem is: for some reason, this method does not recognize the variable and simply returns null. I used a System.out.println() to check whether it was a problem with the instantiation of the player:

public class GameManager {

    private GamePanel panel;
    private Player player;

    private LinkedList<GameObjects> gameObjects;

    public GameManager(Game game) {

        this.panel = game.gameWindow.panel;
        this.panel.setManager(this);
        this.gameObjects = new LinkedList<>();
        LevelManager.createLvl(gameObjects);
        player = new Player(0, 0, 1 * GamePanel.TILES_SIZE, 1 * GamePanel.TILES_SIZE);
        addGameObject(player);

        System.out.println(player == null);
    }

It outputs false, and besides, I can see the player on the screen.

I've already tried making the variable public and calling it directly, or using this., but the results weren't any different.

Of course, I can do it differently, but I want to know what the problem was. I've done this so many times and can't see any difference between my previous code and this one. Am I stupid, or is it a specific feature of Java that I don't know about?

How to properly implement Google Adsense with Rails

Posted: 31 Mar 2022 07:38 AM PDT

In my application.html.erb,

<html>
  <head>
    <script data-ad-client="ca-pub-64xxxxxxxxxxx" async src="https://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js"></script>
  </head>
  <body>
    <%= yield %>
  </body>
</html>

but this way, all pages contain the Adsense script.

I want to exclude Adsense on some pages.

So I tried this:

<html>
  <head>
    <% if @hasAdsense == true %>
    <script data-ad-client="ca-pub-64xxxxxxxxxxx" async src="https://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js"></script>
    <% end %>
  </head>
  <body>
    <%= yield %>
  </body>
</html>

I don't know whether this is the proper way to do it. How should I implement Google Adsense with Rails?

Generic Constraint Contravariant Incompatibility in TypeScript

Posted: 31 Mar 2022 07:38 AM PDT

I'm trying to model tagged unions as data.

First I create a Tagged utility type, to represent anything intersected with the tag field:

type Tagged<T, With extends PropertyKey> = T & { _tag: With };  

Then I create a representation of a given type.

class TypeRep<T = any> {
  T!: T;
  constructor(readonly x: (value: T) => void) {}
}

Our representation of fields extends the TypeRep like so:

class FieldRep<
  Key extends PropertyKey = PropertyKey,
  Value extends TypeRep = TypeRep,
> extends TypeRep<Record<Key, Value["T"]>> {}

Records:

class RecordRep<FieldEncoders extends FieldRep[]>
  extends TypeRep<UnionToIntersection<FieldEncoders[number]["T"]>>
{}

And finally, our tagged union type representation:

class TaggedUnionRep<
  Tag extends PropertyKey = PropertyKey,
  FieldReps extends FieldRep[] = FieldRep[],
> extends TypeRep<Tagged<RecordRep<FieldReps>["T"], Tag>> {}

This is all well and good it seems... except that we cannot assign a narrow TaggedUnionRep instance to the widened type :/

declare const a: TaggedUnionRep<"A", []>;
const x: TaggedUnionRep<PropertyKey, FieldRep[]> = a;

Sure enough, we get the following contravariance error beneath x:

Type 'TaggedUnionRep<"A", []>' is not assignable to type 'TaggedUnionRep<PropertyKey, FieldRep<PropertyKey, TypeRep<any>>[]>'.
  Types of property 'x' are incompatible.
    Type '(value: { _tag: "A"; }) => void' is not assignable to type '(value: Tagged<Record<PropertyKey, any>, PropertyKey>) => void'.
      Types of parameters 'value' and 'value' are incompatible.
        Type 'Tagged<Record<PropertyKey, any>, PropertyKey>' is not assignable to type '{ _tag: "A"; }'.
          Types of property '_tag' are incompatible.
            Type 'PropertyKey' is not assignable to type '"A"'.
              Type 'string' is not assignable to type '"A"'.(2322)

I'd greatly appreciate any tips on the best approach to constraining FieldReps (ideally not by widening Tag to any).

Thank you!

U-Boot variables don't have a default value

Posted: 31 Mar 2022 07:38 AM PDT

I created an image using Yocto for the BeagleBone Black with the help of the TI layer. I have noticed that none of the U-Boot variables has a default value. Does anyone know why? Thank you.

'utf-8' codec can't decode byte 0x8d

Posted: 31 Mar 2022 07:38 AM PDT

Since I'm new to Python, I watched a tutorial on the internet. I understood the code and wanted to run it on my own computer, but I encountered this error.

import subprocess
import re

command_output = subprocess.run(["netsh", "wlan", "show", "profiles"], capture_output = True).stdout.decode()
profile_names = (re.findall("All User Profile     : (.*)\r", command_output))
wifi_list = []
if len(profile_names) != 0:
    for name in profile_names:
        wifi_profile = {}
        profile_info = subprocess.run(["netsh", "wlan", "show", "profile", name], capture_output = True).stdout.decode()
        if re.search("Security key           : Absent", profile_info):
            continue
        else:
            wifi_profile["ssid"] = name
            profile_info_pass = subprocess.run(["netsh", "wlan", "show", "profile", name, "key=clear"], capture_output = True).stdout.decode()
            password = re.search("Key Content            : (.*)\r", profile_info_pass)
            if password == None:
                wifi_profile["password"] = None
            else:
                wifi_profile["password"] = password[1]
            wifi_list.append(wifi_profile)

for x in range(len(wifi_list)):
    print(wifi_list[x])

It gives an error like the one below. Although I understand the error a little, I don't know how to fix it. Like I said, I'm fairly new to Python and programming.

File "c:\Users\ASUS\Desktop\wifi_passwords.py", line 27, in <module>
    command_output = subprocess.run(["netsh", "wlan", "show", "profiles"], capture_output = True).stdout.decode()
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8d in position 599: invalid start byte
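One common workaround (my assumption, not from the question: on Windows, netsh writes its output in the console's OEM code page rather than UTF-8, which is why a plain UTF-8 decode can hit a byte like 0x8d) is to decode tolerantly or with an explicit code page. A minimal sketch with fabricated sample bytes:

```python
# Bytes like 0x8d are valid in Windows code pages but invalid as a UTF-8
# start byte, which is exactly what makes .decode() raise UnicodeDecodeError.
raw = b"All User Profile     : Caf\x8d\r\n"  # sample bytes, not real netsh output

# Replace undecodable bytes with U+FFFD instead of raising:
text = raw.decode("utf-8", errors="replace")

# Alternatively, decode with an explicit code page, e.g.
#   subprocess.run([...], capture_output=True).stdout.decode("cp850")
# (the correct code page depends on the system's console settings).
```

`errors="replace"` keeps the profile names readable even when a few bytes are mangled; passing the real console code page recovers them exactly.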

Return statement doesn't stop the method (C#, Unity 2D)

Posted: 31 Mar 2022 07:38 AM PDT

I'm working on a chess game, and I'm trying to make a class that checks whether a position is in check or not. Within that class I have a function that goes through squares within an array to verify whether a square is safe or not. Here's my code:

int x = xPosition;
int y = yPosition;

y += yIncriment;
x += xIncriment;
while (gameScript.PositionOnBoard(x, y) && gameScript.piecesArray[x, y] == null)
{
    y += yIncriment;
    x += xIncriment;
}
if (!gameScript.PositionOnBoard(x, y))
{
    return false;
}
if (gameScript.piecesArray[x, y].GetComponent<PieceManager>().player == ps.player)
{
    return false;
}
if (gameScript.piecesArray[x, y].GetComponent<PieceManager>().name == "queen" || gameScript.piecesArray[x, y].GetComponent<PieceManager>().name == "bishop")
{
    return true;
}
return false;

The problem I'm having right now is that my function does not stop when the position is no longer on the board. The bug occurs when my script tries to get a property of a null object; that's why I used the "return false" to stop the method, but it goes through it anyway.

Pip error when installing spectral-cube (astropy)

Posted: 31 Mar 2022 07:37 AM PDT

I'm trying to install the package spectral-cube from the astropy project using pip (22.0.4). I get a long error which ends with this:

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for casa-formats-io
Failed to build casa-formats-io
ERROR: Could not build wheels for casa-formats-io, which is required to install pyproject.toml-based projects

Even after downloading casa-formats-io from its GitHub repository I still get this error. I've also tried installing it using conda, but the error is the same.

I'm kind of confused as to what I can do. Thanks for the help!

What does this stray else: continue do in my parser code?

Posted: 31 Mar 2022 07:37 AM PDT

I want to refactor this parser function into smaller parts. But there is an else: continue that is essential for the try block to work, and I just can't figure out why it is even there. In the code below I could extract the parse_trkpts() and parse_time() functions successfully, but not parse_elevation(). Instead I get "continue" can be used only within a loop.

def parse_gpx(xml_file: str, df_cols: tuple) -> pd.DataFrame:
    rows: list[dict] = []
    track_values: list = []
    for event, elem in ETree.iterparse(xml_file, events=("start", "end")):
        tag_names = strip_namespaces(elem.tag)
        if event == "start":
            track_values = parse_time(elem, tag_names, track_values)
            parse_trkpts(df_cols, track_values, elem, tag_names)

            if tag_names == "ele":
                if elem.text is not None:
                    track_values.append(
                        (float(elem.text)),
                    )
                else:
                    track_values.append(None)
            else:        # the block in question
                continue # why does this work inside the function but not outside?

            try:
                rows.append(
                    {
                        df_cols[itervar]: track_values[itervar]
                        for itervar, _ in enumerate(df_cols)
                    },
                )
            except IndexError as ie:
                log.error(f"Not enough data in {Path(xml_file).name}, skipping that one... {ie}")

    return pd.DataFrame(rows, columns=df_cols)

To understand my error, I would like to know what this continue is reacting to and why it makes the code work properly even though it seems to be used incorrectly. It makes the try block print the error message only when running into an IndexError. If I use pass instead, or completely omit the else: continue, I get the IndexError message from the try block thousands of times while still (eventually) getting the same dataframe as a result.

For reference, parse_trkpts() looks like this:

def parse_trkpts(df_cols, track_values, elem, tag_names):
    if tag_names == "trkpt":
        track_values.extend(
            [
                float(elem.attrib[df_cols[2]]),
                float(elem.attrib[df_cols[1]]),
            ],
        )

and my attempt at a parse_elevation() looks like this:

def parse_elevation(track_values, elem, tag_names):
    if tag_names == "ele":
        if elem.text is not None:
            track_values.extend(
                [
                    (float(elem.text)),
                ],
            )
        else:
            track_values.extend(
                [
                    None,
                ],
            )
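Not the asker's fix, but the underlying scoping rule can be shown in a minimal sketch (hypothetical data, not the GPX parser): `continue` is resolved lexically at compile time, so it is only legal inside a loop body; moving it into a helper function is a SyntaxError regardless of where that helper is called from. Inside the loop, it skips the rest of the body for the non-matching branch:

```python
# `continue` jumps straight to the next iteration, skipping the append
# below -- the same role the stray `else: continue` plays in parse_gpx,
# where it keeps the try block from running for non-"ele" start events.
rows = []
for tag, text in [("ele", "1.0"), ("trkpt", None), ("ele", "2.0")]:
    if tag == "ele":
        value = float(text)
    else:
        continue  # non-"ele" tags skip the rest of the loop body
    rows.append(value)

print(rows)  # only the "ele" values survive
```

Replacing `continue` with `pass` would fall through to the code after the if/else on every iteration, which matches the flood of IndexError messages described above.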

How can I display sequential numbers starting at 8 and ending at 131072 in a two-dimensional array using the C language

Posted: 31 Mar 2022 07:37 AM PDT

#include <stdio.h>

int main ()
{
    int num, x, y, a[3][5];

    for (x = 0; x <= 2; x++)
    {
        for (y = 0; y <= 4; y++)
        {
            if (a[0][0] == 8)
            {
                num = 4;
                num = num * 2;
            }
        }
        for (x = 0; x <= 2; x++)
        {
            for (y = 0; y <= 4; y++)
            {
                if (a[x][y] == num)
                {
                }
                printf("%d\t", num);
            }
            printf("\n");
        }
    }
}

This is my code, but it does not output the sequential numbers starting from 8 like this:

8 16 32 64 128 256 512 1024 2048 4096 8192 16384 32768 65536 131072  
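The intended logic, sketched here in Python rather than C since the fix is language-independent: keep one running value that starts at 8 and doubles once per cell, instead of reading from an uninitialized array and resetting num.

```python
# Row-major 3x5 grid of successive doublings: 8, 16, ..., 131072 (15 cells).
num = 8
grid = []
for row in range(3):
    line = []
    for col in range(5):
        line.append(num)
        num *= 2  # double once per cell; never reset num inside the loops
    grid.append(line)

for line in grid:
    print(*line, sep="\t")
```

The C version is the same shape: initialize `num = 8` before the two nested loops, `printf("%d\t", num); num *= 2;` in the inner body, and drop the reads of the uninitialized array `a`.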

Django: Calculate and Group inside Django ORM with multiple columns

Posted: 31 Mar 2022 07:38 AM PDT

Good day,

right now I'm trying to improve my knowledge of the Django ORM, but I'm struggling with the task below.

But first, the database looks like this:

class DeathsWorldwide(models.Model):
    causes_name = models.CharField(max_length=50, null=False, blank=False)
    death_numbers = models.PositiveIntegerField(default=0)
    country = models.CharField(max_length=50, null=True, blank=True)
    year = models.PositiveIntegerField(null=False, blank=False, validators=[MinValueValidator(1990), MaxValueValidator(2019)])

causes_name    |    death_numbers    |    country    |    year
Alcohol dis.   |    25430            |    Germany    |    1998
Poisoning      |    4038             |    Germany    |    1998
...
Maternal dis.  |    9452             |    Germany    |    1998
Alcohol dis.   |    21980            |    Germany    |    1999
Poisoning      |    5117             |    Germany    |    1999
...
Maternal dis.  |    8339             |    Germany    |    1999

There is always a block of all diseases for each year, for every country, and so on. The range of years goes from 1990 to 2019.

What I (or rather, let's say the task) want to achieve is a list of all countries with calculated numbers of deaths, like this...

country    |    death_numbers
France     |    78012
Germany    |    70510
Austria    |    38025

...but with one additional feature: for each country, the number of deaths between 1990-1999 must be subtracted from those of 2000-2019. So a full list would actually look something like this:

country    |    death_numbers    |    19xx    |    2xxx
France     |    78012            |    36913   |    114925
Germany    |    70510            |    ...     |    ...
Austria    |    38025            |    ...     |    ...

Is it possible to achieve such a result with only one query?

Thanks for your help and have a great day!
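For what it's worth, one untested sketch of a single-query approach uses conditional aggregation (`Sum` with a `filter` argument, available since Django 2.0). Model and field names are taken from the question; interpreting the result column as the difference between the two ranges is my assumption:

```python
from django.db.models import F, Q, Sum

# Per-country totals split into the two year ranges, computed in one query.
totals = (
    DeathsWorldwide.objects
    .values("country")
    .annotate(
        deaths_19xx=Sum("death_numbers", filter=Q(year__lte=1999)),
        deaths_2xxx=Sum("death_numbers", filter=Q(year__gte=2000)),
    )
    .annotate(death_numbers=F("deaths_2xxx") - F("deaths_19xx"))
    .order_by("-death_numbers")
)
```

This is a sketch against the model shown above, not a verified answer; it needs a configured Django project to run.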

How to remove all entries of a specific ID after a binary variable becomes true in Pandas?

Posted: 31 Mar 2022 07:38 AM PDT

Suppose we have the following already sorted dataset:

ID Dead
01    F
01    F
01    T
01    T
01    T
02    F
02    F
02    F
02    F
02    T
03    T
03    T
03    T
03    T
03    T

We have 3 IDs (01, 02, and 03) and whether the individual is dead (True or False). I want the rows where the individuals are alive plus the initial row where each individual died, which would leave me with the following dataset:

    ID Dead
 0  01    F
 1  01    F
 2  01    T
 5  02    F
 6  02    F
 7  02    F
 8  02    F
 9  02    T
10  03    T

I came up with a solution that involves looping over all rows and appending the ID to a list if they have died previously. Is there a quicker approach?

Edit: It also has to be in order. The data is not "perfect"; for example, we might have the following dataset:

ID Dead
04    F
04    T
04    F
04    F
04    F

And the desired output is:

ID Dead
04    F
04    T
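One possible vectorized approach (a sketch, assuming the Dead column is boolean): keep each row only while the count of earlier True values within its ID group is zero, which retains every alive row plus the first dead row and also handles the out-of-order case.

```python
import pandas as pd

# Small reconstruction of the question's data, with ID 04 covering the
# "not perfect" case where rows after the first death must be dropped.
df = pd.DataFrame({
    "ID":   ["01"] * 5 + ["02"] * 5 + ["04"] * 5,
    "Dead": [False, False, True, True, True,
             False, False, False, False, True,
             False, True, False, False, False],
})

dead = df["Dead"].astype(int)
# Number of True values strictly BEFORE each row within its ID group:
prior_deaths = dead.groupby(df["ID"]).cumsum() - dead

# Keep rows with no earlier death: alive rows plus the first dead row per ID.
result = df[prior_deaths == 0]
```

The groupby/cumsum avoids the Python-level loop, so it scales to large frames; the original index is preserved, matching the desired output above.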

Trouble using random effects with h2o.glm

Posted: 31 Mar 2022 07:37 AM PDT

I would like to use h2o for GLM regression, but with random effects. I haven't managed to make it work yet.

Here is my working example: I define a dataset with a Simpson's paradox: a global increasing trend, but a decreasing trend in each group.

library(tidyverse)
library(ggplot2)
library(h2o)
library(data.table)

global_slope <- 1
global_int <- 1

Npoints_per_group <- 50
N_groups <- 10
pentes <- rnorm(N_groups,-1,.5)

centers_x <- seq(0,10,length = N_groups)
center_y <- global_slope*centers_x + global_int

group_spread <- 2

group_names <- sample(LETTERS,N_groups)

df <- lapply(1:N_groups,function(i){
  x <- seq(centers_x[i]-group_spread/2,centers_x[i]+group_spread/2,length = Npoints_per_group)
  y <- pentes[i]*(x- centers_x[i])+center_y[i]+rnorm(Npoints_per_group)
  data.table(x = x,y = y,ID = group_names[i])
}) %>% rbindlist()

You can recognize something similar to the example on the Wikipedia page for Simpson's paradox:

ggplot(df,aes(x,y,color = as.factor(ID)))+
  geom_point()


The linear regression without random effects sees the increasing trend:

lm(y~x,data = df) %>%
  summary()

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  1.28187    0.13077   9.803   <2e-16 ***
x            0.94147    0.02194  42.917   <2e-16 ***

A standard multilevel regression would look like this:

library(lme4)
library(lmerTest)

lmer( y ~ x + (1+x|ID) ,data = df) %>%
  summary()

And it would properly estimate a decreasing trend:

Fixed effects:
            Estimate Std. Error      df t value Pr(>|t|)
(Intercept)  11.7192     2.6218  8.8220   4.470 0.001634 **
x            -1.0418     0.1959  8.9808  -5.318 0.000486 ***

I create a numerical version of my group variable:

df[,ID2 := as.numeric(as.factor(ID))]  

Now I test with h2o:

library(h2o)
h2o.init()

df2 <- as.h2o(df)
test_glm <- h2o.glm(family = "gaussian",
                    x = "x",
                    y = "y",
                    training_frame = df2,
                    lambda = 0,
                    compute_p_values = TRUE)
test_glm

And it works well, similar to the linear model above:

Coefficients: glm coefficients
      names coefficients std_error   z_value  p_value standardized_coefficients
1 Intercept     1.281868  0.130766  9.802785 0.000000                  5.989232
2         x     0.941473  0.021937 42.916536 0.000000                  3.058444

But when I want to use random effects:

test_glm2 <- h2o.glm(family = "gaussian",
                     x = "x",
                     y = "y",
                     training_frame = df2,
                     random_columns = "ID2",
                     lambda = 0,
                     compute_p_values = TRUE)

I got

Error in .h2o.checkAndUnifyModelParameters(algo = algo, allParams = ALL_PARAMS, : vector of random_columns must be of type numeric, but got character.

Even if I force df2$ID2 <- as.numeric(df2$ID2).

What am I doing wrong? What is the proper way to get something similar to the mixed-effects model from lmer (i.e. random slope and intercept)?

How to compare types while scripting in Abaqus with Python

Posted: 31 Mar 2022 07:38 AM PDT

I'm trying to compare the type of an Abaqus object, but it doesn't work. Any ideas?

>>> m = mdb.models['Model-1']
>>> type(m)
<type 'Model'>
>>> type(m) == Model
NameError: name 'Model' is not defined

I want this type of comparison for many other objects too, not only Model.

I tried:

>>> import Model
ImportError: No module named Model
>>> type(m) == mdb.models.Model
AttributeError: 'Repository' object has no attribute 'Model'
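One workaround, sketched here with a stand-in class (since the real Abaqus Model type being unimportable is exactly the problem): compare the type's name string rather than the type object itself.

```python
# Stand-in for the Abaqus 'Model' class, which scripts cannot import;
# in Abaqus, `m` would come from mdb.models['Model-1'].
class Model:
    pass

m = Model()

# type(m).__name__ is just a string, so no reference to the class object
# is needed for the comparison. Works for any object's type the same way.
is_model = type(m).__name__ == "Model"
```

The same pattern applies to the other Abaqus object types: `type(obj).__name__ == "Part"`, etc. (name strings assumed to match what the interactive session prints).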

"Unexpected character encountered while parsing value: S. Path '', line 0, position 0."

Posted: 31 Mar 2022 07:38 AM PDT

I know there are already a lot of posts about this, but I couldn't find one that fixes my problem. By the way, I already checked whether the encoding is wrong. Also, sometimes the exception is thrown and sometimes not. If it isn't thrown, "amd" is "Nothing" in the debugger.

Here is my code to deserialize the JSON file:

If OpenfilePath IsNot Nothing Then
    Dim myStreamReader As New StreamReader(OpenfilePath)
    Dim amd = JsonConvert.DeserializeObject(Of RootObject())(myStreamReader.ToString) 'That's where the Exception appears
End If

Here is the RootObject class (the class was automatically created by VS; I just added the JsonProperty attributes and changed the names of the public properties):

Public Class RootObject

    Public Class Rootobject
        <JsonProperty(PropertyName:="Artikelstammdaten")>
        Public Property ArticleMasterData() As ArticleMasterData
        <JsonProperty(PropertyName:="Stueckliste")>
        Public Property MaterialCosts() As MaterialCosts
        <JsonProperty(PropertyName:="Arbeitsgaenge")>
        Public Property ManufacturingCosts() As ManufacturingCosts
    End Class

    Public Class ArticleMasterData
        <JsonProperty(PropertyName:="Artikelnummer")>
        Property VPartNumber() As String
        <JsonProperty(PropertyName:="BezeichnungDE")>
        Property DesignationDE() As String
        <JsonProperty(PropertyName:="BezeichnungEN")>
        Property DesignationEN() As String
        <JsonProperty(PropertyName:="Einheit")>
        Property Unit() As String
        <JsonProperty(PropertyName:="Mat_Grp")>
        Property MatGrp() As String
        <JsonProperty(PropertyName:="Kostenart")>
        Property CostType() As Integer
        <JsonProperty(PropertyName:="VertriebstextDE")>
        Property SalesTextDE() As String
        <JsonProperty(PropertyName:="VertriebstextEN")>
        Property SalesTextEN() As String
        <JsonProperty(PropertyName:="Stueckliste")>
        Property MaterialCosts() As String
        <JsonProperty(PropertyName:="Status")>
        Property Status() As String
        <JsonProperty(PropertyName:="Klasse")>
        Property ClassName() As String
        <JsonProperty(PropertyName:="Mantelflaeche")>
        Property Sheathing() As Double
        <JsonProperty(PropertyName:="Gewicht")>
        Property Weight() As Double
        <JsonProperty(PropertyName:="KlasseID")>
        Property ClassID() As String
    End Class

    Public Class MaterialCosts
        <JsonProperty(PropertyName:="Verkaufsartikel")>
        Property VPartNumber() As String
        <JsonProperty(PropertyName:="Position")>
        Property Position() As Integer
        <JsonProperty(PropertyName:="PosArtikel")>
        Property PosVpart() As String
        <JsonProperty(PropertyName:="PosBezeichnung")>
        Property PosDesignation() As String
        <JsonProperty(PropertyName:="PosKostenart")>
        Property PosCostType() As Integer
        <JsonProperty(PropertyName:="Datum")>
        Property FiledDate() As Date
        <JsonProperty(PropertyName:="Material")>
        Property Material() As Double
        <JsonProperty(PropertyName:="GMK")>
        Property GMK() As Double
        <JsonProperty(PropertyName:="Lohn")>
        Property Wage() As Double
        <JsonProperty(PropertyName:="Menge")>
        Property Unit() As Double
        <JsonProperty(PropertyName:="Mengeneinheit")>
        Property UnitOfMeasure() As String
    End Class

    Public Class ManufacturingCosts
        <JsonProperty(PropertyName:="Verkaufsartikel")>
        Property VPartNumber As String
        <JsonProperty(PropertyName:="AGNR")>
        Property AGNR As Integer
        <JsonProperty(PropertyName:="Bereich")>
        Property Area As String
        <JsonProperty(PropertyName:="Lohn")>
        Property Wage As Double
        <JsonProperty(PropertyName:="Kostenstelle")>
        Property CostType As Integer
        <JsonProperty(PropertyName:="Zeit")>
        Property Time As Double
        <JsonProperty(PropertyName:="ARBPLATZ")>
        Property Workplace As String
    End Class

End Class

And my JSON file:

{
  "Artikelstammdaten": [
    {"Artikel": ["VAUBEF0010", "VAUBEF0011", "VAUBEF0015", "VAUBEF0016", "VAUBEF0020", "VAUBEF0025", "VAUBEF0030"]},
    {"BezeichnungDE": ["Sammelbandantrieb", "Sammelbandantrieb", "Befuellpunkt einfach Behaelter + Karton", "Befuellpunkt doppelt", "Befuellpunkt doppelt Karton FZPS415", "Befuellpunkt Seitenshutter 1 Stellung", "Befuellpunkt Leistungsshutter"]},
    {"BezeichnungEN": ["Collectiong belt drive N50", "Concave collecting belt", "", "Filling point double", "", "", ""]},
    {"Einheit": ["STK", "STK", "STK", "STK", "STK", "STK", "STK", "STK"]},
    {"Mat_Grp": ["VAU", "VAU", "VAU", "", "VAU", "VAU", "VAU"]},
    {"Kostenart": [1500, 1500, 1500, 1500, 1500, 1500, 1500]},
    {"Vertriebstext_DE": ["*Antrieb, Umlenkungen, Spanner* Elektrische Absicherung + Verkabelung* Elektro-Insallationsmaterial anteilig", "Stück Permatantrieb Mudenband N500 v1,5 EkWbeinhaltet:*Antrieb*Spanner; Umleknteillängen inkl. Geländer* Kabelrinne Schaltschrank bis Motor* Starterkombination*", "", "Beinhaltet folgende Teile: *Befüllrcihter, Seitenshutter, Rüttelklemmung, Abdeckungen, FZPS Lichtgitter", "NULL", "NULL", "NULL"]},
    {"Vertriebstext_EN": ["*Drive, deflections", "Stück Permaantrieb", "", "Includes: *funnel, slide", "", "", ""]},
    {"Stueckliste": ["VAUBEF0010", "VAUBEF0011", "VAUBEF0015", "VAUBEF0016", "VAUBEF0020", "VAUBEF0025", "VAUBEF0030"]},
    {"Status": ["F", "F", "G", "F", "G", "G", "G"]},
    {"Klasse": ["VTPIMV", "VTPIMV", "VTPIMV", "VTPIMV", "VTPIMV", "VTPIMV", "VTPIMV"]},
    {"Mantelflaeche": [1, 1, 12, 12, 12, 0.5, 0.5]},
    {"Gewicht": [120, 120, 500, 500, 500, 20, 30]},
    {"KlasseID": ["1.2.6.4", "1.2.6.4", "", "2.1.6", "", "", ""]}],
  "Stueckliste": [
    {"Verkaufsartikel": ["VAUBEF0010", "VAUBEF0010", "VAUBEF0010", "VAUBEF0011", "VAUBEF0011", "VAUBEF0011", "VAUBEF0015", "VAUBEF0015", "VAUBEF0015", "VAUBEF0016", "VAUBEF0016", "VAUBEF0016", "VAUBEF0020", "VAUBEF0020", "VAUBEF0020", "VAUBEF0025", "VAUBEF0025", "VAUBEF0025", "VAUBEF0030", "VAUBEF0030", "VAUBEF0030"]},
    {"Position": [10, 20, 30, 10, 20, 30, 10, 20, 30, 10, 20, 25, 10, 20, 30, 10, 20, 30, 10, 20, 30]},
    {"PosArtikel": ["Z0306251", "Z0072937", "Z0072900", "Z0306251", "Z0072937", "Z0072900", "Z0073240", "Z08636568", "Z0073560", "Z0926005", "Z0907896", "Z0945783", "Z0296202", "Z0073328", "Z0073560", "Z0073240", "Z0175446", "Z0175752", "Z0073240", "Z0175448", "Z0073455"]},
    {"PosBezeichnung": ["VEL Elektro- Montagematerial anteilig pr", "VEL Kabelrinnenmaterial anteilig pro Ant", "Versorgung Elektrik 60m 4G1,5 Kabel", "VEL Elektro-MOntagematerial anteilig pr", "VEL Kabelrinnenmaterial anteilig pro Ant", "Versorgung Elektrik 60m 4G1,5 Kabel", "Aktor Ventil Kabel", "CANIO 16-32 Fix", "GEhause Befuellungspunkt-Frequenzumrichter", "VEL Befuellpunkt doppelt", "Befuellpunkt Schaltschrank mit PCX, FU", "Bedienpult PCX", "Aktor Ventil-Kabel gewichtiet", "CANIO 16-8 Wahl PCX", "Gehaeuse Befuellpunkt Frequenzumrichter", "Aktor Ventil-Kabel", "Halterung fuer GL6-Lichtschranke", "Kabel m12 5m", "Aktor Ventil-Kabel", "Kabel M8 5m", "Sensor-Reedkontakt"]},
    {"PosKostenart": [9105, 9105, 9105, 9105, 9105, 9105, 9111, 9103, 9106, 9106, 9106, 9103, 9111, 9101, 9106, 9111, 9110, 9111, 9111, 9111, 9102]},
    {"Datum": ["2022-01-31", "2022-01-31", "2022-01-31", "2022-01-31", "2022-01-31", "2022-01-31", "2022-01-31", "2022-01-31", "2022-01-31", "2022-01-31", "2022-01-31", "2022-01-31", "2022-01-31", "2022-01-31", "2022-01-31", "2022-01-31", "2022-01-31", "2022-01-31", "2022-01-31", "2022-01-31", "2022-01-31"]},
    {"Material": [60.41, 160.28, 38.68, 60.41, 160,28, 38.68, 12.36, 105.31, 665.99, 3965.23, 3489.32, 1317.19, 41.2, 323.2, 665.99, 4.12, 28.64, 3.68, 8.24, 5.54, 74.06]},
    {"GMK": [3.63, 9.62, 2.32, 3.63, 9.62, 2.32, 0.75, 6.32, 39.94, 237.85, 209.37, 79.04, 2.5, 19.4, 39.94, 0.25, 1.76, 0.22, 0.5, 0.34, 4.44]},
    {"Lohn": [2.06, 0, 0, 2.06, 0, 0, 0, 19.39, 229.28, 149.02, 1322.23, 89.21, 0, 32.2, 229.28, 0, 18.32, 0, 0, 0, 0]},
    {"Menge": [1, 1, 1, 1, 1, 1, 3, 1, 1, 1, 1, 1, 10, 2, 1, 1, 4, 1, 2, 2, 2]},
    {"Mengeneinheit": ["STK", "STK", "STK", "STK", "STK", "STK", "STK", "STK", "STK", "STK", "STK", "STK", "STK", "STK", "STK", "STK", "STK", "STK", "STK", "STK", "STK"]}],
  "Arbeitsgänge": [
    {"Verkaufsartikel": ["VAUBEF0010", "VAUBEF0010", "VAUBEF0010", "VAUBEF0011", "VAUBEF0011", "VAUBEF0011", "VAUBEF0015", "VAUBEF0015", "VAUBEF0015", "VAUBEF0016", "VAUBEF0016", "VAUBEF0016", "VAUBEF0020", "VAUBEF0020", "VAUBEF0020", "VAUBEF0025", "VAUBEF0025", "VAUBEF0025", "VAUBEF0030", "VAUBEF0030", "VAUBEF0030"]},
    {"AGNR": [10, 20, 6, 10, 20, 6, 10, 100, 20, 10, 100, 110, 10, 100, 20, 10, 100, 30, 10, 100, 30]},
    {"Bereich": ["Mechanische Montage", "Elektrische Montage", "TF - Mechanik", "TF - Elektrik", "Mechanische Montage", "Elektrische Montage", "Elektrische Montage", "Mechanische Montage", "Systembau Stationsabnahme QS", "Elektrische Montage", "Mechanische Montage", "Stationsabnahme Stationsabnahme QS", "Systembau Verpacken", "Mechanische Montage", "Systembau Stationsabnahme QS", "Elektrische Montage", "Systembau Mechanik Stationsaufbau", "Mechanische Montage", "Systembau Mechanik Stationsaufbau", "Systembau Stationsabnahme QS", "Mechanische Montage"]},
    {"Lohn": [89.1, 160.38, 168, 106.92, 160.38, 168, 320.76, 9.8583, 213.84, 427.79988, 21.1, 100, 427.68, 19.71665, 213.84, 61.5, 2.75, 16.2, 25.625, 2.766665, 16.2]},
    {"Kostenstelle": [523500, 523500, 522000, 523500, 523500, 522000, 523500, 906045, 523500, 523500, 906045, 906045, 523500, 906045, 523500, 906045, 906045, 523500, 906056, 906045, 523500]},
    {"Zeit": [148.5, 267.3, 180, 178.2, 267.3, 180, 534.6, 11.83, 356.4, 713, 25.32, 120, 712.8, 23.66, 356.4, 73.8, 3.3, 27, 30.75, 3.32, 27]},
    {"ARBPLATZ": ["K950M", "K950E", "K420M", "K950M", "K950E", "K420M", "K950M", "683SAB", "K950E", "K950M", "683SAB", "683V", "K950M", "683SAB", "K950E", "683", "683SAB", "K950M", "683", "683SAB", "K950M"]}]
}

Spring boot transform code to @Transactional

Posted: 31 Mar 2022 07:37 AM PDT

I'm new to Spring. I'm trying to create my application, but I have problems designing the Repository class. In my implementation I open and close a session in every repository method, and in the delete method I also start a transaction, because without it the changes are not reflected in the (H2) database. I don't like this implementation and I'm trying to move it to @Transactional, but it doesn't work.

Here is my code:

// Repository

@Repository("SampleRepository")
public class SampleRepository implements SampleDao {

    private static final Log log = LogFactory.getLog(SampleRepository.class);

    @Autowired
    private SessionFactory sessionFactory;

    private Session session;

    @Override
    public void save(SampleEntity sampleEntity) {
        Session session = sessionFactory.openSession();
        Serializable insertedId = session.save(sampleEntity);
        log.info("Added new row to Sample table with id: " + insertedId.toString());
        session.close();
    }

    @Override
    public void delete(SampleEntity sampleEntity) {
        Session session = sessionFactory.openSession();
        session.beginTransaction();
        session.delete(sampleEntity);
        session.getTransaction().commit();
        log.info("Deleted row " + sampleEntity.getId() + " from Sample table");
        session.close();
    }

    @Override
    public List<SampleEntity> findAll() {
        Session session = sessionFactory.openSession();
        CriteriaBuilder criteriaBuilder = session.getCriteriaBuilder();
        CriteriaQuery<SampleEntity> criteriaQuery = criteriaBuilder.createQuery(SampleEntity.class);
        Root<SampleEntity> rootEntry = criteriaQuery.from(SampleEntity.class);
        CriteriaQuery<SampleEntity> all = criteriaQuery.select(rootEntry);

        TypedQuery<SampleEntity> allQuery = session.createQuery(all);
        List<SampleEntity> resultList = allQuery.getResultList();
        session.close();
        return resultList;
    }
}

// Sample DAO

public interface SampleDao {
    List<SampleEntity> findAll();
    void save(SampleEntity sampleEntity);
    void delete(SampleEntity sampleEntity);
}

// Entity

@Entity
@Table(name = "Sample")
public class SampleEntity implements Serializable {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private long id;

    @Column(name = "test1")
    private String test1;

    @Column(name = "test2")
    private String test2;

    @Column(name = "test3")
    private String test3;

    public long getId() { return id; }

    public void setTest1(String text) { this.test1 = text; }

    public void setTest2(String text) { this.test2 = text; }

    public void setTest3(String text) { this.test3 = text; }
}

// Component

@Component
public class SampleComponent {

    @Autowired
    private SampleRepository sampleRepository;

    private SampleEntity sampleEntity;

    public void addFindAndDeleteSample() {
        sampleEntity = new SampleEntity();
        sampleEntity.setTest1("Test 1");
        sampleEntity.setTest2("Test 2");
        sampleEntity.setTest3("Test 3");

        sampleRepository.save(sampleEntity);

        List<SampleEntity> sampleEntityList = sampleRepository.findAll();

        for (SampleEntity sampleEntity : sampleEntityList) {
            try {
                sampleRepository.delete(sampleEntity);
            } catch (Exception ex) {
                System.out.println(ex.getMessage());
            }
        }
    }
}

Question is: how can I design the repository so that it works through transactions and doesn't need to open and close a session in every method?

How to create a set of map in Cassandra?

Posted: 31 Mar 2022 07:38 AM PDT

I am trying to have data that will output classes and teachers as below:

CLASS A: John
CLASS B: Jane
CLASS C: Stan

This is my CQLSH:

CREATE TABLE subjects(
    name TEXT,
    class_and_teachers SET<FROZEN MAP<TEXT, TEXT>>,
);

However, I got the error

no viable alternative at input 'MAP' (...   name TEXT,  class_and_teachers SET<[FROZEN] MAP...)  

Is there something wrong?

VBA Outlook Email update current/selected field

Posted: 31 Mar 2022 07:38 AM PDT

I wrote a macro where the starting time of a meeting is entered into the "Subject" field of the meeting and the mail is sent automatically right after. My problem: when I start the macro through a button while the last selected field (such as Subject or Start Time) is still selected and has just been changed, the email is sent, but with the old data.

Example below

For example: my last input to the email was entering the text "Test" into the empty Subject field. After that I sent the email through the button. The email is sent, but the subject field remains empty. So I tried commands like Update, SendKeys "{TAB}", etc., but the subject remains empty. Is there a way to refresh fields like Subject and Start Time at the beginning of the macro, so that I get the changed/new data into the subject or the new/changed start time?

I tried commands like Update, SendKeys "{TAB}", and myItem.Close olDiscard to close and refresh the field and send right after.

Code:

Sub startTimeSend()
    On Error GoTo HandleErr

    Dim myItem As Object
    Set myItem = Application.ActiveInspector.CurrentItem
    Dim oldTitle As String
    Dim startTime As String
    Dim scanForOldNr As String
    Dim newStartTimeFormat As String

'    olPromptForSave
'    SendKeys "{ENTER}"
'    SendKeys "{ENTER}", True
'    Application.SendKeys ("{ENTER}")

    oldTitle = myItem.Subject
    startTime = myItem.Start

'    MsgBox (oldTitle)

    ' scanForOldNr contains the third character (usually ":")
    scanForOldNr = Mid(oldTitle, 3, 1)
    If scanForOldNr Like "*:*" Then
        ' 7 because Mid counts from 1, not from 0
'        MsgBox (scanForOldNr)
        oldTitle = Mid(oldTitle, 7)
    End If

'    Cancel = True

    newStartTimeFormat = Format(startTime, "hh:mm")
    myItem.Subject = newStartTimeFormat & " " & oldTitle

    myItem.Send

ExitHere:
    Exit Sub

HandleErr:
'    Cancel = False
    Resume ExitHere
End Sub

How fix Azure web app deploy error with VSCode?

Posted: 31 Mar 2022 07:38 AM PDT

I followed the quickstart-nodejs guide. The difference is the region: the guide's region is 'eu', my region is 'kor'.

https://docs.microsoft.com/ko-kr/azure/app-service/quickstart-nodejs?tabs=windows&pivots=development-environment-vscode

The error came after zipping.

3:46:22 PM: Error: request to https://mpexpressapp011.scm.azurewebsites.net/api/zipdeploy?isAsync=true&author=VS%20Code failed, reason: read ECONNRESET

Why does this happen, and how can I fix it?

Please help me. (ㅠㅠ)

How can I change the index correct in a list? (Python)

Posted: 31 Mar 2022 07:38 AM PDT

I wrote some code to calculate the maximum path in a triangle.

      75
     95 64
    17 47 82
   18 35 87 10
  20 04 82 47 65

Taking the maximum of every row gives 75+95+82+87+82, but I want it to pick only from the adjacent numbers in the layer below the current one. For example, the maximum path sum must be 75+95+47+87+82, because 47 'touches' 95 and 82 doesn't. In the layer under this there is a choice between 35 and 87, so there is always a choice between two numbers. Does anyone know how I can do this? This is my code:

lst = [[72],
       [95, 64],
       [17, 47, 82],
       [18, 35, 87, 10],
       [20, 4, 82, 47, 65]]
something = 1
i = 0
mid = 0
while something != 0:
    for x in lst:
        new = max(lst[i])
        print(new)
        i += 1
        mid += new
    something = 0
print(mid)
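Not an official answer, just a hedged sketch of the adjacency-constrained walk described above. The top value here is assumed to be 75 (matching the triangle shown, while the snippet in the question starts with 72). A greedy walk may only move from index i to index i or i + 1 in the next row; a bottom-up dynamic program finds the true maximum over all such paths:

```python
lst = [[75],
       [95, 64],
       [17, 47, 82],
       [18, 35, 87, 10],
       [20, 4, 82, 47, 65]]

def greedy_path_sum(triangle):
    """Walk downward; from index i only the adjacent entries i and i + 1 are reachable."""
    i = 0
    total = triangle[0][0]
    for row in triangle[1:]:
        # Pick the larger of the two numbers touching the current position.
        i = i if row[i] >= row[i + 1] else i + 1
        total += row[i]
    return total

def max_path_sum(triangle):
    """Bottom-up dynamic programming: the maximum over all adjacency-respecting paths."""
    best = list(triangle[-1])
    for row in reversed(triangle[:-1]):
        best = [v + max(best[j], best[j + 1]) for j, v in enumerate(row)]
    return best[0]

print(greedy_path_sum(lst))  # 75 + 95 + 47 + 87 + 82 = 386
```

The greedy walk reproduces the 75+95+47+87+82 path from the question; note that greedy choices are not guaranteed to be globally optimal, which is why the dynamic-programming version can return a larger sum.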

Why NetBeans can't find Mercurial path?

Posted: 31 Mar 2022 07:38 AM PDT

I have a Linux Mint VM where I formerly installed NetBeans 12.x (up to 12.6) and have now updated it to 13.0, all via Flatpak, even starting from a clean setup.

With all those setups, NetBeans can't find Mercurial, even though it really is available and working at /usr/bin/hg (when used from the shell).

From the menu Team / Mercurial / Initialize Repository... I get the error "Mercurial could not be found", asking me to check PATH. The strange behavior is that even if I browse, via Options and Mercurial Executable Path, to /usr/bin, I can't see hg there, while it is in the filesystem!

Is NetBeans browsing somewhere else when I open /usr/bin? Is it accessing some virtual environment? I'm confused.

In Python, how can I manipulate index data using a for loop with the same list's next indices? Solution/code required [closed]

Posted: 31 Mar 2022 07:38 AM PDT

I have a list, onelist, as below:

onelist = [
    {'eventId': '1ef', 'eventType': 'dr', 'partnerId': 'yyyy', 'customerId': '6fxxx'},
    {'eventId': 'a0f', 'eventType': 'cr', 'partnerId': '819e', 'customerId': '5eeaa'},
    {'eventId': '900', 'eventType': 'er', 'partnerId': '819f', 'customerId': '5eenk'},
    {'eventId': 'be4', 'eventType': 'dr', 'partnerId': '819f', 'customerId': '6fxxx'},
    {'eventId': '0da', 'eventType': 'dr', 'partnerId': 'yyyy', 'customerId': '6fxxx'},
    {'eventId': '8e0', 'eventType': 'cr', 'partnerId': '819e', 'customerId': '5eeaa'}]

I want a new list, as below, derived from the previous list. Does a simple way exist, or can it be achieved using a for loop? Please give me the code in Python.

newonelist = [
    {'eventId': ['1ef', '0da'], 'eventType': 'dr', 'partnerId': 'yyyy', 'customerId': '6fxxx'},
    {'eventId': ['a0f', '8e0'], 'eventType': 'cr', 'partnerId': '819e', 'customerId': '5eeaa'},
    {'eventId': '900', 'eventType': 'er', 'partnerId': '819f', 'customerId': '5eenk'},
    {'eventId': 'be4', 'eventType': 'dr', 'partnerId': '819f', 'customerId': '6fxxx'}]

Can you please give me the program in Python?

Task is: if any records are duplicates (meaning eventType, partnerId and customerId have the same values at a later index too), append the eventIds and display one record; the redundant record is not required. For the common/duplicate records, collect the eventIds in a list and show it.

-> Here eventId is always unique, like a primary key. -> eventType, partnerId and customerId may contain duplicate values.

How can I append and display eventId as below in a for loop? Here the 0th and 4th indices are duplicate records: {'eventId': ['1ef', '0da'], 'eventType': 'dr', 'partnerId': 'yyyy', 'customerId': '6fxxx'}, and the 1st and 5th index records are the same: {'eventId': ['a0f', '8e0'], 'eventType': 'cr', 'partnerId': '819e', 'customerId': '5eeaa'}. I hope you understand my question.
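Not part of the question: a hedged sketch of one way to merge the duplicates, assuming the grouping key is the (eventType, partnerId, customerId) triple as described above. The helper name `merge_duplicates` is made up for illustration:

```python
onelist = [
    {'eventId': '1ef', 'eventType': 'dr', 'partnerId': 'yyyy', 'customerId': '6fxxx'},
    {'eventId': 'a0f', 'eventType': 'cr', 'partnerId': '819e', 'customerId': '5eeaa'},
    {'eventId': '900', 'eventType': 'er', 'partnerId': '819f', 'customerId': '5eenk'},
    {'eventId': 'be4', 'eventType': 'dr', 'partnerId': '819f', 'customerId': '6fxxx'},
    {'eventId': '0da', 'eventType': 'dr', 'partnerId': 'yyyy', 'customerId': '6fxxx'},
    {'eventId': '8e0', 'eventType': 'cr', 'partnerId': '819e', 'customerId': '5eeaa'},
]

def merge_duplicates(records):
    """Group records by (eventType, partnerId, customerId), collecting eventIds of duplicates."""
    grouped = {}
    for rec in records:
        key = (rec['eventType'], rec['partnerId'], rec['customerId'])
        if key in grouped:
            ids = grouped[key]['eventId']
            # Promote the single id to a list on the first duplicate seen.
            if not isinstance(ids, list):
                grouped[key]['eventId'] = ids = [ids]
            ids.append(rec['eventId'])
        else:
            grouped[key] = dict(rec)  # copy so the input list stays untouched
    return list(grouped.values())

newonelist = merge_duplicates(onelist)
```

Since Python 3.7 a plain dict preserves insertion order, so the merged records come out in the order their first occurrence appeared.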

Bellus3D is being end-of-lifed, is there any replacement iOS Solution for 3D face scanning?

Posted: 31 Mar 2022 07:38 AM PDT

I work on an application for custom fit eyewear, and we've been using Bellus3D's iOS SDK for getting facial geometry, including landmarks like pupils.

Bellus3D has decided to wind their business down by the end of 2022, and I'm looking for a suitable replacement framework for our application. Bellus was great because it produced reliable results in exchange for a pretty simple user experience.

I've found a few apps that also use or used Bellus, but not getting any word about what alternatives they've found that would suitably replace it.

  • Scandy doesn't seem to be accepting new SDK registrations
  • Standard Cyborg took some tweaks, but works great, and their API tokens work, but I can't find any information about their pricing and they're not responding
  • Topology Eyewear seems to have a solution, but not a lot of details and aren't responding either.

I've reached out to a few app developers that incorporated Bellus 3D, but so far all I've heard is that they're in the same situation.

Does anyone know of a working, maintained solution for 3D face scanning with cell phones (or 3D scanning in general), or of an approach to get something with decent fidelity out of ARKit?

How to convert column values to multiple columns in a oracle table data

Posted: 31 Mar 2022 07:37 AM PDT

I am trying to convert a column to a row in SQL. Below is my data:

input:
-----
100
101
103

I want to produce the output below. Could you please help me out?

id1  id2  id3
---------------
100  101  103

How can I use PIVOT for the above?

Cypress with Azure AD (MSAL)

Posted: 31 Mar 2022 07:38 AM PDT

I'm new to both Cypress and Azure AD, but I've been following the steps described here to create Cypress tests on an existing Angular app that uses Azure AD. It mentions that they are using ADAL, but our app uses MSAL, which it says should be similar. However, I'm struggling to get it to work. Here's my login function so far:

const tenant = 'https://login.microsoftonline.com/{my_tenant_id}/';
const tenantUrl = `${tenant}oauth2/token`;
const clientId = '{my_app_id}';
const clientSecret = '{my_secret}';
const azureResource = 'api://{my_app_id}';
const knownClientApplicationId = '{client_application_id_from_manifest}';
const userId = '{user_identifier}';

export function login() {
    cy.request({
        method: 'POST',
        url: tenantUrl,
        form: true,
        body: {
            grant_type: 'client_credentials',
            client_id: clientId,
            client_secret: clientSecret,
            resource: azureResource
        }
    }).then(response => {
        const Token = response.body.access_token;
        const ExpiresOn = response.body.expires_on;
        const key = `{"authority":"${tenant}","clientId":"${knownClientApplicationId}","scopes":${knownClientApplicationId},"userIdentifier":${userId}}`;
        const authInfo = `{"accessToken":"${Token}","idToken":"${Token}","expiresIn":${ExpiresOn}}`;

        window.localStorage.setItem(`msal.idtoken`, Token);
        window.localStorage.setItem(key, authInfo);
    });
}
Cypress.Commands.add('login', login);

When I run this, an access token is returned. When I examine the local storage after a normal browser request, it has many more fields, such as msal.client.info (the authInfo value in the code above should also contain this value), but I've no idea where to get this information from.

The end result is that the POST request seems to return successfully, but the Cypress tests still consider the user to be unauthenticated.

The existing app implements a CanActivate service that passes if MsalService.getUser() returns a valid user. How can I convince this service that my Cypress user is valid?

Update:

After some experimentation with the local storage values, it looks like only two values are required to get past the login:

msal.idtoken
msal.client.info

The first I already have; the second one I'm not sure about, but it appears to return the same value every time. For now, I'm hard coding that value into my tests, and it seems to work somewhat:

then(response => {
    const Token = response.body.access_token;

    window.localStorage.setItem(`msal.idtoken`, Token);
    window.localStorage.setItem(`msal.client.info`, `{my_hard_coded_value}`);
});

The only minor issue now is that the MsalService.getUser() method returns slightly different values than the app is expecting (e.g. displayableId and name are missing; idToken.azp and idToken.azpacr are new). I'll investigate further...

SOLID Design Principles : Liskov Substitution Principle and Dependency Inversion Principle

Posted: 31 Mar 2022 07:38 AM PDT

Just a thought and a question to the Stack Overflow and Microsoft development community about the object-oriented software design principles known as SOLID. What is the difference between the Liskov Substitution Principle and the Dependency Inversion Principle? I have thought about this for a while and I'm not sure of the difference. Please could you let me know? Any thoughts or feedback very welcome.
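Not part of the question: a small, hypothetical Python sketch of how the two principles differ in practice. DIP is about the direction of dependencies (the high-level `ReportService` depends on the `Storage` abstraction, never on a concrete class); LSP is about any `Storage` subtype being safely substitutable wherever a `Storage` is expected. All names here are invented for illustration:

```python
from abc import ABC, abstractmethod

class Storage(ABC):
    """Abstraction the high-level code depends on (Dependency Inversion)."""
    @abstractmethod
    def save(self, key: str, value: str) -> None: ...

    @abstractmethod
    def load(self, key: str) -> str: ...

class MemoryStorage(Storage):
    """A concrete implementation; LSP demands it honor Storage's contract."""
    def __init__(self):
        self._data = {}

    def save(self, key, value):
        self._data[key] = value

    def load(self, key):
        return self._data[key]

class ReportService:
    """High-level policy: written against Storage, not against MemoryStorage."""
    def __init__(self, storage: Storage):
        self._storage = storage

    def publish(self, name, body):
        self._storage.save(name, body)
        return self._storage.load(name)

svc = ReportService(MemoryStorage())
print(svc.publish("r1", "hello"))  # prints "hello"
```

Roughly: DIP tells you to write `ReportService` against the abstraction; LSP tells you what any new `Storage` implementation must promise so that `ReportService` keeps working when you swap it in.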

TFS error: "the project file or web could not be found"

Posted: 31 Mar 2022 07:37 AM PDT

I've been dealing with this issue for weeks, but until today I was unable to solve it. I have a solution with 5 projects in it. It downloads all of them just fine except for one, and I could not figure out why. I get the error:

"the project file or web could not be found."

Python/Pandas Dataframe replace 0 with median value

Posted: 31 Mar 2022 07:37 AM PDT

I have a Python pandas dataframe with several columns, and one column has 0 values. I want to replace the 0 values with the median or mean of this column.

data is my dataframe
artist_hotness is the column

mean_artist_hotness = data['artist_hotness'].dropna().mean()

if len(data.artist_hotness[data.artist_hotness.isnull()]) > 0:
    data.artist_hotness.loc[(data.artist_hotness.isnull()), 'artist_hotness'] = mean_artist_hotness

I tried this, but it is not working.
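Not part of the question: a hedged sketch of one way to do this, reusing the `data` and `artist_hotness` names from the question on a small made-up frame. Note that the snippet above targets NaN rows, while the stated goal is rows equal to 0:

```python
import pandas as pd

# Toy stand-in for the real frame; only the column name comes from the question.
data = pd.DataFrame({'artist_hotness': [0.6, 0.0, 0.8, 0.0, 0.4]})

# Median of the non-zero values, written into the rows that are 0.
median_hotness = data.loc[data['artist_hotness'] != 0, 'artist_hotness'].median()
data.loc[data['artist_hotness'] == 0, 'artist_hotness'] = median_hotness

print(data['artist_hotness'].tolist())  # [0.6, 0.6, 0.8, 0.6, 0.4]
```

`data['artist_hotness'].replace(0, median_hotness)` is an equivalent one-liner, and swapping `.median()` for `.mean()` gives the mean variant.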

Sidekiq Unique Job Processing

Posted: 31 Mar 2022 07:38 AM PDT

I need to ensure that no more than one job per user_id is worked simultaneously by any of the 25 workers, to avoid deadlocks.

I have tried Sidekiq unique jobs, but deadlocks keep occurring because it keeps trying to process all pending jobs on the queue without looking at the user_id in the params.

Thank you

class MyWork
  include Sidekiq::Worker
  sidekiq_options :queue => "critical", unique: true,
                  unique_args: ->(args) { [ args.user_id ] }

  def perform(work_params)

Get the email address of the current user in Outlook 2007

Posted: 31 Mar 2022 07:38 AM PDT

I have an Outlook add in written in C#.

I was wondering how or if I could get the email address of the current user?

Thanks