InvalidArgumentError while running a session in tensorflow Posted: 30 Sep 2021 08:01 AM PDT

I am using this A3C blog post to train an agent; it uses a neural network to optimize performance. When execution reaches the following code it fails with an error saying, in effect, that the input data shape is incompatible with the placeholders. I have tried lots of different shapes, and considered reshaping on both sides too, but I still get the error when running the sess.run() part. What should I do to fix it?

    InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder_21' with dtype float and shape [?,2]
         [[node Placeholder_21 (defined at <ipython-input-462-3c1b764fbd4e>:3) ]]

When printing the input data, which comes in batches, I see:

    print("State shape:", batch_states.shape)
    print("Batch states:", batch_states)
    print("Batch actions length:", len(batch_actions))
    print("Batch Actions:", batch_actions)
    print("Batch Rewards:", batch_rewards)
    print("Batch Done:", batch_done)
    print("Num actions:", n_actions)

Output:

    State shape: (10, 2)
    Batch states: [[1501.87201108 1501.87201108]
     [1462.65450863 1462.65450863]
     [1480.95616876 1480.95616876]
     [1492.24380743 1492.24380743]
     [1481.92809598 1481.92809598]
     [1480.19257102 1480.19257102]
     [1503.54571786 1503.54571786]
     [1489.38563414 1489.38563414]
     [1541.16797527 1541.16797527]
     [1516.04036259 1516.04036259]]
    Batch actions length: 10
    Batch Actions: [[1. 0. 1. 0. 0.]
     [1. 0. 1. 0. 0.]
     [1. 0. 1. 0. 0.]
     [1. 0. 1. 0. 0.]
     [1. 0. 1. 0. 0.]
     [1. 0. 1. 0. 0.]
     [1. 0. 1. 0. 0.]
     [1. 0. 1. 0. 0.]
     [1. 0. 1. 0. 0.]
     [1. 0. 1. 0. 0.]]
    Batch Rewards: [0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
    Batch Done: [False False False False False False False False False False]
    Num actions: 5

Here is the part of the code the error comes from:

    states_ph = tf.placeholder('float32', [None,] + list(obs_shape))
    next_states_ph = tf.placeholder('float32', [None,] + list(obs_shape))
    actions_ph = tf.placeholder('int32', (None, n_actions))
    rewards_ph = tf.placeholder('float32', (None,))
    is_done_ph = tf.placeholder('float32', (None,))

    # logits[n_envs, n_actions] and state_values[n_envs, n_actions]
    logits, state_values = agent.symbolic_step(states_ph)
    next_logits, next_state_values = agent.symbolic_step(next_states_ph)

    # There is no next state if the episode is done!
    next_state_values = next_state_values * (1 - is_done_ph)

    # probabilities and log-probabilities for all actions
    probs = tf.nn.softmax(logits, axis=-1)         # [n_envs, n_actions]
    logprobs = tf.nn.log_softmax(logits, axis=-1)  # [n_envs, n_actions]

    # log-probabilities only for agent's chosen actions
    logp_actions = tf.reduce_sum(logprobs * tf.one_hot(actions_ph, n_actions), axis=-1)  # [n_envs,]

    # Compute advantage using rewards_ph, state_values and next_state_values.
    gamma = 0.99
    advantage = rewards_ph + gamma * (next_state_values - state_values)
    assert advantage.shape.ndims == 1, "please compute advantage for each sample, vector of shape [n_envs,]"

    # Compute policy entropy given logits_seq. Mind the "-" sign!
    entropy = -tf.reduce_sum(probs * logprobs, 1)
    assert entropy.shape.ndims == 1, "please compute pointwise entropy vector of shape [n_envs,]"

    # Compute target state values using temporal difference formula.
    # Use rewards_ph and next_step_values
    target_state_values = rewards_ph + gamma * next_state_values

    actor_loss = -tf.reduce_mean(logp_actions * tf.stop_gradient(advantage), axis=0) \
                 - 0.001 * tf.reduce_mean(entropy, axis=0)
    critic_loss = tf.reduce_mean((state_values - tf.stop_gradient(target_state_values))**2, axis=0)

    train_step = tf.train.AdamOptimizer(1e-4).minimize(actor_loss + critic_loss)
    sess.run(tf.global_variables_initializer())

    l_act, l_crit, adv, ent = sess.run([actor_loss, critic_loss, advantage, entropy],
        feed_dict={
            states_ph: batch_states,
            actions_ph: batch_actions,
            next_states_ph: batch_states,
            rewards_ph: batch_rewards,
            is_done_ph: batch_done,
        })
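Two details in the printout are worth checking against the placeholder declarations (a hedged observation, since the full notebook isn't shown): `actions_ph` is declared `int32` with shape `(None, n_actions)` but is fed float one-hot rows, even though `tf.one_hot(actions_ph, n_actions)` expects integer action *indices* of shape `(None,)`; and `is_done_ph` is `float32` while `batch_done` is boolean. Also, errors about an unnamed `Placeholder_21` often come from stale placeholders left in the default graph by re-running notebook cells; in TF 1.x, `tf.reset_default_graph()` before rebuilding is the usual remedy. A numpy-only sketch of the casts, with batch values assumed from the question's printout:

```python
import numpy as np

# Hypothetical batches shaped like the question's printout.
batch_states  = np.random.rand(10, 2)
batch_actions = np.tile([1., 0., 1., 0., 0.], (10, 1))
batch_rewards = np.zeros(10)
batch_done    = np.array([False] * 10)

# tf.one_hot(actions_ph, n_actions) wants integer action *indices* of
# shape (None,), so collapse the one-hot rows back to indices first:
action_indices = batch_actions.argmax(axis=1).astype(np.int32)

# is_done_ph is declared float32, so cast the boolean flags explicitly:
done_float = batch_done.astype(np.float32)

print(action_indices.shape, done_float.dtype)  # (10,) float32
```

With those arrays, the feed would become `actions_ph: action_indices` and `is_done_ph: done_float` (and `actions_ph` itself redeclared with shape `(None,)`).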
Group by Sub Query? Posted: 30 Sep 2021 08:01 AM PDT

I have 3 tables (schema below) and I'm trying to determine how many customers have a flag per category. When I try to use a subquery, it just keeps counting how many people have a flag regardless of the category. I've tried adding a GROUP BY inside the subquery, but that doesn't seem to help. I'm sure this is simpler than I'm making it.

    SELECT p.category,
           (SELECT count(c.customer_id) from Customer c where c.flag = 'y') as Flagged
    from Product p
    LEFT JOIN `Orders` o ON p.product_id = o.product_id
    LEFT JOIN `Customer` c ON o.customer_id = c.customer_id
    group by category;

**Schema SQL**

    CREATE TABLE IF NOT EXISTS `Customer` (
      `customer_id` varchar(6),
      `name` varchar(6),
      `flag` varchar(1),
      PRIMARY KEY (`customer_id`)
    ) DEFAULT CHARSET=utf8;
    INSERT INTO Customer VALUES ('000001', 'matt', 'n');
    INSERT INTO Customer VALUES ('000002', 'julia', 'y');
    INSERT INTO Customer VALUES ('000003', 'carol', 'n');
    INSERT INTO Customer VALUES ('000004', 'Riggs', 'n');

    CREATE TABLE IF NOT EXISTS `Orders` (
      `order_id` varchar(3),
      `order_item_id` varchar(3),
      `customer_id` varchar(6),
      `date` date,
      `product_id` varchar(4),
      PRIMARY KEY (`customer_id`,`product_id`)
    ) DEFAULT CHARSET=utf8;
    INSERT INTO Orders VALUES ('aaa', 'xxx', '000001','1.1.2018',"prd1");
    INSERT INTO Orders VALUES ('bbb', 'yyy', '000001','1.1.2018',"prd2");
    INSERT INTO Orders VALUES ('ccc', 'zzz', '000002','1.1.2018',"prd3");
    INSERT INTO Orders VALUES ('ddd', 'www', '000003','1.1.2018',"prd4");

    CREATE TABLE IF NOT EXISTS `Product` (
      `product_id` varchar(4),
      `name` varchar(6),
      `category` varchar(7),
      PRIMARY KEY (`product_id`)
    ) DEFAULT CHARSET=utf8;
    INSERT INTO Product VALUES ('prd1', 'nam1', 'Speaker');
    INSERT INTO Product VALUES ('prd2', 'nam2', 'Speaker');
    INSERT INTO Product VALUES ('prd3', 'nam3', 'Phone');
    INSERT INTO Product VALUES ('prd4', 'nam4', 'Phone');
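For what it's worth, the usual fix for this shape of problem is to move the flag test out of the scalar subquery (which is evaluated independently of the grouping) and into a conditional aggregate over the joined rows. A sketch of that idea, rebuilt in an in-memory SQLite database so it can be run as-is (SQLite syntax differs slightly from the MySQL schema above):

```python
import sqlite3

# In-memory reconstruction of the question's sample data (simplified types).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE Customer (customer_id TEXT PRIMARY KEY, name TEXT, flag TEXT);
INSERT INTO Customer VALUES ('000001','matt','n'),('000002','julia','y'),
                            ('000003','carol','n'),('000004','Riggs','n');
CREATE TABLE Orders (order_id TEXT, order_item_id TEXT, customer_id TEXT,
                     date TEXT, product_id TEXT);
INSERT INTO Orders VALUES ('aaa','xxx','000001','1.1.2018','prd1'),
                          ('bbb','yyy','000001','1.1.2018','prd2'),
                          ('ccc','zzz','000002','1.1.2018','prd3'),
                          ('ddd','www','000003','1.1.2018','prd4');
CREATE TABLE Product (product_id TEXT PRIMARY KEY, name TEXT, category TEXT);
INSERT INTO Product VALUES ('prd1','nam1','Speaker'),('prd2','nam2','Speaker'),
                           ('prd3','nam3','Phone'),('prd4','nam4','Phone');
""")

# Conditional aggregation: the flag test is evaluated per category group,
# not globally as in the scalar subquery.
rows = cur.execute("""
    SELECT p.category,
           COUNT(DISTINCT CASE WHEN c.flag = 'y' THEN c.customer_id END) AS Flagged
    FROM Product p
    LEFT JOIN Orders o   ON p.product_id  = o.product_id
    LEFT JOIN Customer c ON o.customer_id = c.customer_id
    GROUP BY p.category
""").fetchall()
print(dict(rows))  # {'Phone': 1, 'Speaker': 0} (row order may vary)
```

`COUNT` ignores the `NULL` produced by the `CASE` for unflagged customers, and `DISTINCT` keeps a customer with several orders in a category from being counted twice.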
Reactjs - Share const between components Posted: 30 Sep 2021 08:01 AM PDT

I am new to React and I am having a hard time trying to share a value I get from a blockchain network after I connect my wallet on my React website. After the connection it loads another component, and I would like to pass the value obtained in the connection component to the new component. You can have a look here: this is the connection component in navbar.js, from which I need to send wallet to my second component in page2.js:

    import { useEffect, useState } from 'react';
    import { useHistory } from "react-router-dom";
    import { Container, Image, Navbar, Nav } from 'react-bootstrap';
    import { Link, scroller } from "react-scroll";

    const Navigation = () => {
      const history = useHistory();
      const [ wallet, setWallet ] = useState("");
      const [ navBarShrink, setNavbarShrink ] = useState(false);

      const connect = async () => {
        const provider = await web3Modal.connect();
        const web3 = getWeb3(provider);
        const accounts = await web3.eth.getAccounts();
        const address = accounts[0];
        const networkId = await web3.eth.net.getId();
        const chainId = await web3.eth.chainId();
        const newAccounts = await Promise.all(accounts.map(async (address: string) => {
          const balance = await web3.eth.getBalance(address);
          const tokenBalances = await Promise.all(tokenAddresses.map(async (token) => {
            const tokenInst = new web3.eth.Contract(tokenABI, token.address);
            const balance = await tokenInst.methods.balanceOf(address).call();
            return { token: token.token, balance }
          }))
          return { address, balance: web3.utils.fromWei(balance, 'ether'), tokens: tokenBalances }
        }))
        setWallet(newAccounts);
      }

      return (
        <Navbar collapseOnSelect expand="lg"
          className={ !navBarShrink ?
            ("navbar navbar-expand-lg bg-secondary text-uppercase fixed-top main_nav") :
            ("navbar navbar-expand-lg bg-secondary text-uppercase fixed-top main_nav navbar-shrink")}
          expand="lg" sticky="top">
          <Container>
            <Navbar.Brand>
              Navbar code
            </Navbar.Collapse>
          </Container>
        </Navbar>
      );
    }

    export default Navigation;

And here is page2.js, where I need to show the values of the const 'wallet':

    import { Container, Image } from "react-bootstrap";

    const Page2 = () => {
      return (
        <section className="page-section bg-primary text-blue mb-0">
          <Container>
            <div className="divider-custom divider-light">
              <div className="divider-custom-line"></div>
              <div className="divider-custom-line"></div>
            </div>
            <div className="row">
              <div className="col-lg-4 mr-auto"><p className="lead">wallet info is : {walletArray}</p></div>
            </div>
          </Container>
        </section>
      );
    }

Thank you for your help, I hope you will understand me.
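A common way to share `wallet` between sibling components in React is to lift the state up: the nearest common parent owns the state and passes the value down to `Page2` and the setter down to `Navigation` as props (React context or a state library would also work). A sketch with an assumed parent component, not runnable as-is since the real components are abbreviated above:

```jsx
import { useState } from 'react';

// Hypothetical common parent: it owns `wallet`, Navigation fills it,
// Page2 only reads it.
const App = () => {
  const [wallet, setWallet] = useState([]);
  return (
    <>
      {/* inside Navigation, connect() would call props.onWallet(newAccounts)
          instead of its own local setWallet */}
      <Navigation onWallet={setWallet} />
      {/* inside Page2, render props.wallet instead of the undefined walletArray */}
      <Page2 wallet={wallet} />
    </>
  );
};
```

Since `wallet` is an array of objects, `Page2` would typically `.map()` over it rather than interpolate it directly into JSX.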
Memory leaks may be a problem Posted: 30 Sep 2021 08:01 AM PDT

Hey, I have a program that is supposed to go through an Excel file and then, for now, close the application and remove it from memory. The thing is that I am using Marshal.ReleaseComObject and calling it for all objects I initialize. When I then check Task Manager on Windows, there is still an Excel application running in the background. What am I doing wrong, or is it okay to let that application stay there in the background? Here is my code:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Runtime.InteropServices;
    using System.Text;
    using System.Threading.Tasks;
    using Excel = Microsoft.Office.Interop.Excel;

    namespace Data_WebbConverter.Data
    {
        class ExcellDataFile
        {
            private Excel.Application application;
            private Excel.Workbook workbook;
            private Excel.Worksheet worksheet;
            private string exelWorkBookFile { get; set; }
            bool isDispossable = false;

            public ExcellDataFile()
            {
                Console.WriteLine(openExcell());
            }

            public string openExcell()
            {
                application = new Excel.Application();
                if (application != null)
                {
                    return "Started application";
                }
                else
                {
                    return "can't find application";
                }
            }

            public string readAllExcelCells(string path, int sheet)
            {
                workbook = application.Workbooks.Open(path);
                worksheet = workbook.Worksheets[sheet];
                workbook.Close();
                application.Quit();
                Marshal.ReleaseComObject(worksheet);
                Marshal.ReleaseComObject(workbook);
                Marshal.ReleaseComObject(application);
                return "Read all excelcells and quit program";
            }

            private void quitExcel()
            {
                workbook.Close();
                application.Quit();
                if (worksheet != null) Marshal.ReleaseComObject(worksheet);
                if (workbook != null) Marshal.ReleaseComObject(workbook);
                if (application != null) Marshal.ReleaseComObject(workbook);
            }
        }
    }
Javascript - the days of some months of the calendar don't appear in a correct manner Posted: 30 Sep 2021 08:01 AM PDT

I'm a beginner programmer - a student actually - and this is the very first calendar I have tried to recreate using JavaScript. The days of some months of the calendar don't appear correctly, as in the image attached: [calendar with a bug][1]

[1]: https://i.stack.imgur.com/mbTKr.jpg

Apart from that - as I want to use this calendar as part of a booking system - I'm wondering what the next step is to make each individual date an active link that, when clicked, takes the user to an individual webpage to continue with the booking. The actual code goes like this:

    const date = new Date

    const renderCalendar = () => {
      date.setDate(1);
      const monthDays = document.querySelector('.days');
      const lastDay = new Date(date.getFullYear(), date.getMonth() + 1, 0).getDate();
      const prevLastDay = new Date(date.getFullYear(), date.getMonth(), 0).getDate();
      const firstDayIndex = date.getDay() - 1
      const lastDayIndex = new Date(date.getFullYear(), date.getMonth() + 1, 0).getDay();
      const nextDays = 7 - lastDayIndex;
      const months = [
        "Enero", "Febrero", "Marzo", "Abril", "Mayo", "Junio",
        "Julio", "Agosto", "Septiembre", "Octubre", "Noviembre", "Diciembre"
      ];
      document.querySelector(".date h1").innerHTML = months[date.getMonth()];
      document.querySelector(".date p").innerHTML = new Date().toLocaleDateString();

      let days = "";
      for (let x = firstDayIndex; x > 0; x--) {
        days += `<div class="prev-date">${prevLastDay - x + 1}</div>`;
      }
      for (let i = 1; i <= lastDay; i++) {
        if (i === new Date().getDate() && date.getMonth() === new Date().getMonth()) {
          days += `<div class="today">${i}</div>`;
        } else {
          days += `<div>${i}</div>`;
        }
      }
      for (let j = 1; j <= nextDays; j++) {
        days += `<div class="next-date">${j}</div>`;
      }
      monthDays.innerHTML = days;
    }

    document.querySelector(".prev").addEventListener("click", () => {
      date.setMonth(date.getMonth() - 1);
      renderCalendar();
    });

    document.querySelector(".next").addEventListener("click", () => {
      date.setMonth(date.getMonth() + 1);
      renderCalendar();
    });

    renderCalendar();
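One likely culprit for the misaligned months (an assumption, since the CSS and markup aren't shown): `Date.prototype.getDay()` returns 0 for Sunday, so `firstDayIndex = date.getDay() - 1` becomes -1 whenever the month starts on a Sunday, and the leading `prev-date` cells are skipped entirely. A sketch of a Monday-first offset that wraps instead of going negative:

```javascript
// Monday-first column index for the 1st of a month: 0 = Monday ... 6 = Sunday.
// (jsDay + 6) % 7 wraps Sunday (0) to 6 instead of letting it become -1.
function firstDayIndexMondayFirst(year, monthIndex) {
  const jsDay = new Date(year, monthIndex, 1).getDay(); // 0 = Sunday ... 6 = Saturday
  return (jsDay + 6) % 7;
}

// August 2021 starts on a Sunday: `getDay() - 1` would give -1,
// while the wrapped version yields 6 leading prev-month cells.
console.log(firstDayIndexMondayFirst(2021, 7)); // 6
console.log(firstDayIndexMondayFirst(2021, 8)); // 2 (September 2021 starts on a Wednesday)
```

The trailing `nextDays = 7 - lastDayIndex` computation has a similar Sunday edge case worth checking. For the booking links, each generated day `<div>` could carry a `data-date` attribute and a click listener (or simply be an `<a href>`) pointing at the per-date booking page.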
How does plot() decide where to plot data points on the x-axis? Posted: 30 Sep 2021 08:01 AM PDT

I am trying to build a graph using three sets of data: a predicted "umbrella" and an observed "umbrella", where ratios are converted into integers and then subset into vectors based on value ranges.

    ylrt <- c()
    yhatlrt <- c()
    x <- 0:678
    y <- 0:640
    #
    i = 0
    while (i <= 12000) {
      ylrt[[i+1]] <- sample(x,2)
      i = i + 1
    }
    #
    i = 0
    while (i <= 30000) {
      yhatlrt[[i+1]] <- sample(x,2)
      i = i + 1
    }
    #
    ylrt <- unlist(ylrt)
    yhatlrt <- unlist(yhatlrt)
    #
    table(ylrt)
    #
    table(yhatlrt)

To determine how often a ratio occurs in a given range, I took the length of the vectors, assigned the values (using as.integer) to unique variables, and built a vector out of them.

    ylrt <- as_tibble(ylrt)
    OLR1 <- as.integer(length(unlist(subset(ylrt, ylrt <= 25))))
    OLR2 <- as.integer(length(unlist(subset(ylrt, ylrt > 25 & ylrt <= 50))))
    OLR3 <- as.integer(length(unlist(subset(ylrt, ylrt > 50 & ylrt <= 75))))
    OLR4 <- as.integer(length(unlist(subset(ylrt, ylrt > 75 & ylrt <= 100))))
    OLR5 <- as.integer(length(unlist(subset(ylrt, ylrt > 200 & ylrt <= 400))))
    OLR7 <- as.integer(length(unlist(subset(ylrt, ylrt > 400 & ylrt <= 680))))
    OLR <- c(OLR1, OLR2, OLR3, OLR4, OLR5, OLR6, OLR7)
    #
    yhatlrt <- as_tibble(yhatlrt)
    PLR1 <- as.integer(length(unlist(subset(yhatlrt, yhatlrt <= 25))))
    PLR2 <- as.integer(length(unlist(subset(yhatlrt, yhatlrt > 25 & yhatlrt <= 50))))
    PLR3 <- as.integer(length(unlist(subset(yhatlrt, yhatlrt > 50 & yhatlrt <= 75))))
    PLR4 <- as.integer(length(unlist(subset(yhatlrt, yhatlrt > 75 & yhatlrt <= 100))))
    PLR5 <- as.integer(length(unlist(subset(yhatlrt, yhatlrt > 100 & yhatlrt <= 200))))
    PLR6 <- as.integer(length(unlist(subset(yhatlrt, yhatlrt > 200 & yhatlrt <= 400))))
    PLR7 <- as.integer(length(unlist(subset(yhatlrt, yhatlrt > 400 & yhatlrt <= 680))))
    PLR <- c(PLR1, PLR2, PLR3, PLR4, PLR5, PLR6, PLR7)

The third data set is being used as a sort of grouping data for the x-axis values and contains integers between 1 and 6.

    IY <- c()
    #
    z <- 1:7
    #
    padvar <- length(unlist(ylr)) - length(IY)
    for (i in 1:padvar) {
      new_value <- sample(z,2)
      IY <- c(IY, new_value)
    }

In order to plot the vectors, I then filled each with NA values until their lengths matched.

    #
    padvar <- length(IY) - length(OLR)
    for (i in 1:padvar) {
      new_value <- i*NA
      OLR = c(OLR,new_value)
    }
    #
    padvar1 <- length(IY) - length(PLR)
    for (i in 1:padvar1) {
      new_value <- i*NA
      PLR = c(PLR,new_value)
    }

Now the context is set: I noticed that upon plotting the data sets, several points are stacked on the same x value while having the appropriate y-axis value, and I have been unable to determine why. How exactly does R determine where to plot the data on the x-axis, such that certain values end up stacked in this manner?
Can a client provide responses to server messages in RSocket? Posted: 30 Sep 2021 08:00 AM PDT

I'm exploring RSocket and I'm wondering whether there's a way for the client to respond to streamed messages from the server synchronously: basically, to acknowledge or reject each message. I'm using the "Request Stream" approach. I can see how channels could be used to do this asynchronously.
find out which triangle is closest to equilateral triangle, fastest/simplest way Posted: 30 Sep 2021 08:00 AM PDT

I have tons of triangles defined by three 3D points, like

    T1 = ((x1, y1, z1), (x2, y2, z2), (x3, y3, z3))
    T2 = ((x1, y1, z1), (x2, y2, z2), (x3, y3, z3))
    T3 = ((x1, y1, z1), (x2, y2, z2), (x3, y3, z3))
    T4 = ((x1, y1, z1), (x2, y2, z2), (x3, y3, z3))
    ...

What I would like to know is the best way to figure out which of these triangles is closest to an equilateral triangle. The best strategy I can think of is to somehow calculate the three internal angles of each triangle, take the product of those three angles, and then see which triangle has the biggest product. If I'm not mistaken, the closer a triangle is to equilateral, the bigger the product of its three angles (when a triangle is a perfect equilateral triangle, the product of the internal angles is 60 × 60 × 60 = 216000). However, I feel like there must be a better way to do this task, and I would appreciate a better solution. None of these sets of three 3D points lie on a straight line, and none of the triangles has an edge of length 0.
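The angle product does peak at the equilateral triangle, but it needs three `acos` calls per triangle. A cheaper scale-invariant alternative (one option among several) is the quality ratio 4√3·Area / (a² + b² + c²), which equals 1 exactly for an equilateral triangle and shrinks toward 0 as the triangle degenerates, and which needs only squared edge lengths. A sketch with hypothetical sample triangles:

```python
import math

def equilateral_quality(t):
    """Scale-invariant triangle quality: 4*sqrt(3)*area / (a^2 + b^2 + c^2).
    Equals 1 for an equilateral triangle, smaller for everything else."""
    p1, p2, p3 = t
    a2 = sum((u - v) ** 2 for u, v in zip(p1, p2))  # squared edge lengths
    b2 = sum((u - v) ** 2 for u, v in zip(p2, p3))
    c2 = sum((u - v) ** 2 for u, v in zip(p3, p1))
    # Heron's formula rearranged for squared lengths (valid in 3-D):
    # 16*Area^2 = 4*a2*b2 - (a2 + b2 - c2)^2
    area = math.sqrt(max(0.0, 4 * a2 * b2 - (a2 + b2 - c2) ** 2)) / 4
    return 4 * math.sqrt(3) * area / (a2 + b2 + c2)

# Hypothetical data: pick the triangle whose quality is largest.
triangles = {
    "T1": ((0, 0, 0), (1, 0, 0), (0.5, math.sqrt(3) / 2, 0)),  # equilateral
    "T2": ((0, 0, 0), (1, 0, 0), (0, 1, 0)),                   # right isosceles
}
best = max(triangles, key=lambda k: equilateral_quality(triangles[k]))
print(best)  # T1
```

Working directly with squared lengths avoids both square roots per edge and the trigonometry of the angle-product approach.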
Is it possible to change a machine learning model saved as an rds file? Posted: 30 Sep 2021 08:00 AM PDT I have a machine learning model (Support Vector Machines with Polynomial Kernel) saved as an rds file and would like know if its possible to open it in R as code instead of an object, as I want modify it. (Train on a different data set which have additional inputs) Is this possible? I understand that information about the model can be understood by doing the following and then rewritten with the same parameters but it would be much more convenient if it was possible to get the source code behind the model to then work off. Model <- readRDS("model1.rds") Summary(Model) returns: Length Class Mode 1 ksvm S4 and Model returns Support Vector Machines with Polynomial Kernel 136 samples 22 predictor No pre-processing Resampling: Cross-Validated (10 fold, repeated 50 times) Summary of sample sizes: 124, 122, 123, 123, 122, 122, ... Resampling results across tuning parameters: degree scale C RMSE Rsquared MAE 1 1e-03 0.25 0.9400531 0.26948023 0.7681676 1 1e-03 0.50 0.9278696 0.27118754 0.7559654 1 1e-03 1.00 0.9073649 0.27667363 0.7351476 1 1e-03 2.00 0.8749052 0.29524283 0.7022990 1 1e-03 4.00 0.8432924 0.30690580 0.6695539 1 1e-02 0.25 0.8649211 0.29538671 0.6918704 1 1e-02 0.50 0.8397385 0.29936839 0.6660842 1 1e-02 1.00 0.8254333 0.30113897 0.6502524 1 1e-02 2.00 0.8289587 0.30433998 0.6490126 1 1e-02 4.00 0.8320745 0.31294825 0.6487897 1 1e-01 0.25 0.8296454 0.30747527 0.6486308 1 1e-01 0.50 0.8344243 0.31476033 0.6499654 1 1e-01 1.00 0.8441749 0.31633515 0.6544509 1 1e-01 2.00 0.8535258 0.31456945 0.6595723 1 1e-01 4.00 0.8619060 0.31271215 0.6645062 1 1e+00 0.25 0.8564814 0.31388052 0.6613590 1 1e+00 0.50 0.8637594 0.31220123 0.6653949 1 1e+00 1.00 0.8691346 0.31075428 0.6686576 1 1e+00 2.00 0.8720188 0.31022913 0.6704680 1 1e+00 4.00 0.8736167 0.31006969 0.6715086 1 1e+01 0.25 0.8725931 0.31011668 0.6707497 1 1e+01 0.50 0.8741016 0.30997811 0.6717619 1 1e+01 1.00 
0.8745994 0.30999984 0.6720067 1 1e+01 2.00 0.8746420 0.31003087 0.6719952 1 1e+01 4.00 0.8747119 0.31007240 0.6718945 2 1e-03 0.25 0.9278514 0.27113121 0.7559409 2 1e-03 0.50 0.9073266 0.27660246 0.7350886 2 1e-03 1.00 0.8749297 0.29498825 0.7023035 2 1e-03 2.00 0.8434252 0.30648759 0.6696877 2 1e-03 4.00 0.8296849 0.29598473 0.6548526 2 1e-02 0.25 0.8394769 0.29922811 0.6656030 2 1e-02 0.50 0.8225812 0.30647554 0.6471741 2 1e-02 1.00 0.8284856 0.30481597 0.6476189 2 1e-02 2.00 0.8354892 0.30942785 0.6470264 2 1e-02 4.00 0.8654903 0.28874153 0.6675736 2 1e-01 0.25 0.9719262 0.18163582 0.7608615 2 1e-01 0.50 1.0498843 0.14914096 0.8324269 2 1e-01 1.00 1.1538404 0.12192484 0.9208610 2 1e-01 2.00 1.2521386 0.11463212 0.9984787 2 1e-01 4.00 1.3293049 0.10769439 1.0570698 2 1e+00 0.25 1.3523405 0.08190648 1.0703873 2 1e+00 0.50 1.3523405 0.08190648 1.0703873 2 1e+00 1.00 1.3523405 0.08190648 1.0703873 2 1e+00 2.00 1.3523405 0.08190648 1.0703873 2 1e+00 4.00 1.3523405 0.08190648 1.0703873 2 1e+01 0.25 1.4716348 0.06551953 1.1641090 2 1e+01 0.50 1.4716348 0.06551953 1.1641090 2 1e+01 1.00 1.4716348 0.06551953 1.1641090 2 1e+01 2.00 1.4716348 0.06551953 1.1641090 2 1e+01 4.00 1.4716348 0.06551953 1.1641090 3 1e-03 0.25 0.9173351 0.27511200 0.7453143 3 1e-03 0.50 0.8889770 0.28580632 0.7165424 3 1e-03 1.00 0.8544178 0.30218510 0.6806835 3 1e-03 2.00 0.8377603 0.29299541 0.6633006 3 1e-03 4.00 0.8241635 0.30390397 0.6479914 3 1e-02 0.25 0.8296864 0.29943301 0.6543154 3 1e-02 0.50 0.8275125 0.30238935 0.6477723 3 1e-02 1.00 0.8431461 0.29406753 0.6541412 3 1e-02 2.00 0.8834502 0.26293743 0.6838924 3 1e-02 4.00 0.9330564 0.22756081 0.7237417 3 1e-01 0.25 1.0078872 0.15196775 0.7978582 3 1e-01 0.50 1.0508254 0.13985151 0.8353149 3 1e-01 1.00 1.0516959 0.13966326 0.8361822 3 1e-01 2.00 1.0516959 0.13966326 0.8361822 3 1e-01 4.00 1.0516959 0.13966326 0.8361822 3 1e+00 0.25 0.9708866 0.16413958 0.7611715 3 1e+00 0.50 0.9708866 0.16413958 0.7611715 3 1e+00 1.00 0.9708866 
0.16413958 0.7611715 3 1e+00 2.00 0.9708866 0.16413958 0.7611715 3 1e+00 4.00 0.9708866 0.16413958 0.7611715 3 1e+01 0.25 0.9793737 0.17539453 0.7659516 3 1e+01 0.50 0.9793737 0.17539453 0.7659516 3 1e+01 1.00 0.9793737 0.17539453 0.7659516 3 1e+01 2.00 0.9793737 0.17539453 0.7659516 3 1e+01 4.00 0.9793737 0.17539453 0.7659516 RMSE was used to select the optimal model using the smallest value. The final values used for the model were degree = 2, scale = 0.01 and C = 0.5. |
Trying to get property 'id' of non-object (View: /Users/zxc/Desktop/ecommerce/resources/views/backend/setting/setting_update.blade.php) Posted: 30 Sep 2021 08:01 AM PDT id; if ($request->file('logo')) { $image = $request->file('logo'); $name_gen = hexdec(uniqid()).'.'.$image->getClientOriginalExtension(); Image::make($image)->resize(139,36)->save('upload/logo/'.$name_gen); $save_url = 'upload/logo/'.$name_gen; SiteSetting::findOrFail($setting_id)->update([ 'phone_one' => $request->phone_one, 'phone_two' => $request->phone_two, 'email' => $request->email, 'company_name' => $request->company_name, 'company_address' => $request->company_address, 'facebook' => $request->facebook, 'twitter' => $request->twitter, 'linkedin' => $request->linkedin, 'youtube' => $request->youtube, 'logo' => $save_url, ]); $notification = array( 'message' => 'Setting Updated with Image Successfully', 'alert-type' => 'info' ); return redirect()->back()->with($notification); }else{ SiteSetting::findOrFail($setting_id)->update([ 'phone_one' => $request->phone_one, 'phone_two' => $request->phone_two, 'email' => $request->email, 'company_name' => $request->company_name, 'company_address' => $request->company_address, 'facebook' => $request->facebook, 'twitter' => $request->twitter, 'linkedin' => $request->linkedin, 'youtube' => $request->youtube, ]); $notification = array( 'message' => 'Setting Updated Successfully', 'alert-type' => 'info' ); return redirect()->back()->with($notification); } // end else } // end method } |
shadow root is undefined [duplicate] Posted: 30 Sep 2021 08:01 AM PDT

I have created simple web components using vanilla JavaScript, as per the documentation. The problem is that inside one of my functions this.shadowRoot is not defined. Here are my functions:

    var visibleDivId = null;
    var i, divId, div;
    console.log('shadowroot', this); // displays the shadow root element

    function divVisibility(divId) {
      hideNonVisibleDivs();
    }

    function hideNonVisibleDivs(divId) {
      // I want to access the shadow root here using `this`
      console.log('shadowroot', this); // undefined
    }

    var panels = this.shadowRoot.querySelectorAll("#tab-info > .share-tab")
    panels.forEach(function(el) {
      el.addEventListener('click', function() {
        divVisibility(this.getAttribute('custom-id'));
      });
    });

Why is this not defined inside my function hideNonVisibleDivs(), when outside the function this is displayed as expected?
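This is the classic `this` pitfall rather than anything shadow-DOM specific: a plain inner function gets its own `this` (undefined in strict mode, the global object otherwise), while an arrow function inherits `this` from the scope it was defined in. A minimal stand-alone illustration (no DOM needed; the object is just a stand-in for the component):

```javascript
'use strict';

// A plain function called without a receiver gets its own `this`
// (undefined in strict mode); an arrow function keeps the lexical `this`.
const component = {
  shadowRoot: '(the shadow root)',
  run() {
    function plain() { return this; }  // own `this`: undefined here
    const arrow = () => this;          // lexical `this`: the component
    return { plain: plain(), arrow: arrow() };
  },
};

const r = component.run();
console.log(r.plain === undefined);  // true
console.log(r.arrow.shadowRoot);     // (the shadow root)
```

Inside the component, defining `hideNonVisibleDivs` as an arrow function, or capturing the component first (`const self = this;` and then `self.shadowRoot`), should make the shadow root reachable.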
A function returns different values in every execution Posted: 30 Sep 2021 08:01 AM PDT

I am somewhat new to coding and I encountered a logic error. The goal is to create a function that tests whether a number is divisible by the numbers from 2 to 10. As I have tested, the userInput variable holds the right value, but the value returned by the function changes on every execution. Here is the code:

    #include <stdio.h>

    int testDivisible(int a)
    {
        int checker; // checker is intended for counting divisible numbers between 2-10; if returned > 0, then not divisible
        for (int i=2; i<=10; i++) {
            if (a % i == 0) {
                checker = checker + 1;
            }
        }
        return checker;
    }

    int main()
    {
        int userInput;
        printf("Enter number: ");
        scanf("%d", &userInput);
        int x = testDivisible(userInput);
        if (x > 0) {
            printf("Is divisible by 1 to 10\n");
        } else {
            printf("Is not divisible by 1 to 10\n");
        }
        printf("%d\n", userInput); // intended for testing
        printf("%d", x); // intended for testing
    }

However, when I compile and run the code, the results are:

    Execution 1:
    Enter number: 17
    Is divisible by 1 to 10
    17
    847434400

    Execution 2:
    Enter number: 17
    Is not divisible by 1 to 10
    17
    -1002102112
Sum column values if the rows are identical, keep unique rows (Pyspark) Posted: 30 Sep 2021 08:01 AM PDT

I have the following Pyspark DF:

    col_a  col_b  col_c  col_d  col_f
    -------------------------------------
    val_1 |val_2 |val_3 |val_4 | integer_1
    -------------------------------------
    val_1 |val_2 |val_3 |val_4 | integer_2
    -------------------------------------
    val_5 |val_6 |val_7 |val_8 | integer_3
    -------------------------------------
    val_1 |val_2 |val_3 |val_4 | integer_4
    -------------------------------------

The dataframe I'm trying to generate is:

    col_a  col_b  col_c  col_d  col_f
    -------------------------------------
    val_1 |val_2 |val_3 |val_4 | integer_1 + integer_2 + integer_4
    -------------------------------------
    val_5 |val_6 |val_7 |val_8 | integer_3
    -------------------------------------

The goal is to sum all col_f values whenever col_a, col_b, col_c, and col_d are equal, while also keeping the rows that are unique. How can this be achieved? Thank you!
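In Spark this is a plain grouped aggregation; a sketch assuming a `DataFrame` named `df` with the columns shown above (not runnable on its own, since it needs an active SparkSession):

```python
from pyspark.sql import functions as F

# Group on the four key columns and sum col_f. Rows whose key tuple is
# unique pass through with their single col_f value, so no separate
# handling of "unique rows" is needed.
result = (
    df.groupBy("col_a", "col_b", "col_c", "col_d")
      .agg(F.sum("col_f").alias("col_f"))
)
result.show()
```

`groupBy` on multiple columns treats the whole (col_a, col_b, col_c, col_d) tuple as the key, which is exactly the "rows are identical" condition in the question.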
In python, how do i search a word within a string, ignoring capitals and punctuation, but not words that include the key word? Posted: 30 Sep 2021 08:01 AM PDT

    # initialize list of indexes
    index_list = []
    for i in range(len(doc_list)):
        doc_list_lower = doc_list[i].lower()
        doc_list_lower.replace(',','')
        doc_list_lower.replace('.','')
        print(doc_list_lower)

    for i in range(len(doc_list)):
        # lower all words in doc_list at index
        doc_list_lower = doc_list[i].lower()
        # remove commas and periods from doc_list
        doc_list_lower.replace(',','')
        doc_list_lower.replace('.','')
        # create list of all words in doc_list item
        doc_words = doc_list_lower.split()
        # check if word is in document
        if keyword in doc_words and not(len(keyword) < len(doc_words((doc_words.index(keyword))))):
            index_list.append(i)

    # return list
    print(index_list)

I'm trying to search for a word within a list of strings, and I have to identify all the items in the list where the word is found.
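A hedged sketch of the whole-word approach: note that `str.replace` and `str.lower` return *new* strings (strings are immutable), so the `replace` calls above whose results are discarded do nothing; and comparing whole tokens with `in` over a token list sidesteps the substring problem without the `len` gymnastics. The sample documents below are made up for illustration:

```python
def word_search(doc_list, keyword):
    """Return indices of documents containing keyword as a whole word,
    case-insensitively, ignoring trailing commas/periods."""
    indices = []
    for i, doc in enumerate(doc_list):
        # Keep the result of the normalization -- str methods return
        # new strings rather than modifying in place.
        tokens = [word.rstrip(".,").lower() for word in doc.split()]
        if keyword.lower() in tokens:
            indices.append(i)
    return indices

docs = ["The Learn Python Challenge Casino.", "They bought a car", "Casinoville"]
print(word_search(docs, "casino"))  # [0] -- "Casinoville" is correctly excluded
```

Because the comparison is `keyword in tokens` (exact token equality), "casino" matches "Casino." but not "Casinoville".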
Updated Target Framework to 4.6 and this broke the references Posted: 30 Sep 2021 08:01 AM PDT

Within my web project I updated the target framework from version 4.5 to 4.6 and now I am getting a runtime error. I don't know why it's looking for a System.Runtime, Version=4.0. Any help would be great.

From

    <TargetFrameworkVersion>v4.5</TargetFrameworkVersion>

To

    <TargetFrameworkVersion>v4.6</TargetFrameworkVersion>

web.config Update

    <httpRuntime targetFramework="4.6" maxRequestLength="1024000" executionTimeout="150000"/>
    <compilation targetFramework="4.6" debug="true"/>
    <runtime>
      <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
        <dependentAssembly>
          <assemblyIdentity name="System.Web.Helpers" publicKeyToken="31bf3856ad364e35"/>
          <bindingRedirect oldVersion="1.0.0.0-2.0.0.0" newVersion="2.0.0.0"/>
        </dependentAssembly>
        <dependentAssembly>
          <assemblyIdentity name="System.Web.Mvc" publicKeyToken="31bf3856ad364e35"/>
          <bindingRedirect oldVersion="0.0.0.0-4.0.0.0" newVersion="4.0.0.0"/>
        </dependentAssembly>
        <dependentAssembly>
          <assemblyIdentity name="System.Web.WebPages" publicKeyToken="31bf3856ad364e35"/>
          <bindingRedirect oldVersion="0.0.0.0-2.0.0.0" newVersion="2.0.0.0"/>
        </dependentAssembly>
        <dependentAssembly>
          <assemblyIdentity name="Microsoft.Practices.ServiceLocation" publicKeyToken="31bf3856ad364e35" culture="neutral"/>
          <bindingRedirect oldVersion="0.0.0.0-1.0.0.0" newVersion="1.0.0.0"/>
        </dependentAssembly>
        <dependentAssembly>
          <assemblyIdentity name="Microsoft.Practices.Unity" publicKeyToken="31bf3856ad364e35" culture="neutral"/>
          <bindingRedirect oldVersion="0.0.0.0-3.5.0.0" newVersion="3.5.0.0"/>
        </dependentAssembly>
        <dependentAssembly>
          <assemblyIdentity name="Newtonsoft.Json" publicKeyToken="30ad4fe6b2a6aeed" culture="neutral"/>
          <bindingRedirect oldVersion="0.0.0.0-11.0.0.0" newVersion="12.0.0.0"/>
        </dependentAssembly>
        <dependentAssembly>
          <assemblyIdentity name="System.Web.WebPages.Razor" publicKeyToken="31BF3856AD364E35" culture="neutral"/>
          <bindingRedirect oldVersion="0.0.0.0-2.0.0.0" newVersion="2.0.0.0"/>
        </dependentAssembly>
        <dependentAssembly>
          <assemblyIdentity name="System.Web.Http" publicKeyToken="31bf3856ad364e35" culture="neutral"/>
          <bindingRedirect oldVersion="0.0.0.0-5.2.3.0" newVersion="5.2.3.0"/>
        </dependentAssembly>
        <dependentAssembly>
          <assemblyIdentity name="System.Net.Http.Formatting" publicKeyToken="31bf3856ad364e35" culture="neutral"/>
          <bindingRedirect oldVersion="0.0.0.0-5.2.3.0" newVersion="5.2.3.0"/>
        </dependentAssembly>
        <dependentAssembly>
          <assemblyIdentity name="WebGrease" publicKeyToken="31bf3856ad364e35" culture="neutral"/>
          <bindingRedirect oldVersion="0.0.0.0-1.5.2.14234" newVersion="1.5.2.14234"/>
        </dependentAssembly>
        <dependentAssembly>
          <assemblyIdentity name="dotless.Core" publicKeyToken="96b446c9e63eae34" culture="neutral"/>
          <bindingRedirect oldVersion="0.0.0.0-1.5.2.0" newVersion="1.5.2.0"/>
        </dependentAssembly>
      </assemblyBinding>
    </runtime>

Runtime Error

    Compiler Error Message: CS0012: The type 'System.IEquatable`1' is defined in an assembly that is not referenced.
    You must add a reference to assembly 'System.Runtime, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'.
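A CS0012 mentioning `System.Runtime, Version=4.0.0.0` after a retarget is commonly cured by referencing the facade assembly explicitly in the `compilation` section of web.config (a frequently suggested change, not verified against this exact project):

```xml
<compilation targetFramework="4.6" debug="true">
  <assemblies>
    <add assembly="System.Runtime, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"/>
  </assemblies>
</compilation>
```

This tells the ASP.NET runtime compiler (which compiles views and App_Code on the fly, separately from the project build) where to find the `System.Runtime` facade that `IEquatable`-typed references now resolve through.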
No module named myapp Posted: 30 Sep 2021 08:01 AM PDT

I have this project directory structure:

    .
    .pylintrc
    |--myproj
       |--myapp
       |--myproj (settings.py is here)
       __init__.py
       manage.py

In settings.py, the first entry in INSTALLED_APPS is 'myapp'. From the root folder (the one containing .pylintrc), I call

    $ DJANGO_SETTINGS_MODULE=myproj.myproj.settings pylint myproj --load-plugins pylint_django

However, I get the error no module named 'myapp'. If I change the INSTALLED_APPS entry to 'myproj.myapp', then it is able to continue, but then I'm unable to start the project normally with manage.py runserver.

pastebin myproj.settings

What am I doing wrong, and what can I do to proceed?
expression object in R Posted: 30 Sep 2021 08:01 AM PDT

In R, I can calculate the first-order derivative as follows:

    g = expression(x^3+2*x+1)
    gPrime = D(g,'x')
    x = 2
    eval(g)

But I don't think that is very readable. I would prefer to do something like this:

    f = function(x){
      x^3+2*x+1
    }
    fPrime = D(g,'x') # This doesn't work
    fPrime(2)

Is that possible, or is there a more elegant way to do this?
mysql-connector-python is not working in VS Code but works in the Python shell Posted: 30 Sep 2021 08:00 AM PDT

I am trying to connect to a MySQL database, but strangely mysql-connector-python works in the Python shell and not in VS Code. I tried installing the following packages:

    pip install mysql-connector
    pip install mysql-connector-python
    pip install mysql-connector-python-rf

No luck either. So I uninstalled all the packages, then reinstalled only mysql-connector-python and rebooted, to make sure there was no issue in this regard. The error is:

    ModuleNotFoundError: No module named 'mysql.connector'; 'mysql' is not a package

Python version: 3.9.4
VS Code: 1.6.20

Here are some screenshots:
How to resolve New-ADUser error with -Path $OU Posted: 30 Sep 2021 08:01 AM PDT

I am currently trying to automate the creation of new users in my Active Directory. However, when I run my PowerShell script, here is the error that it presents to me:

    New-ADUser : Unable to validate argument on "Path" parameter. The argument is null or empty.
    Provide an argument that is not null or empty and try again.
    At line:23 char:19
    +     -Path $OU `
    +           ~~~
        + CategoryInfo          : InvalidData: (:) [New-ADUser], ParameterBindingValidationException
        + FullyQualifiedErrorId : ParameterArgumentValidationError,Microsoft.ActiveDirectory.Management.Commands.NewADUser

What can I do? Thanks for your help!

This is my code:

    $ADUsers = Import-csv E:\SCRIPT\newusers.csv
    foreach ($User in $ADUsers)
    {
        $Username    = $User.username
        $Password    = $User.password
        $Firstname   = $User.firstname
        $Lastname    = $User.lastname
        $Description = $User.description
        $OU          = $User.ou
        New-ADUser `
            -SamAccountName $Username `
            -UserPrincipalName "$Lastname@domaine.fr" `
            -Name "$Firstname $Lastname" `
            -GivenName $Firstname `
            -Surname $Lastname `
            -Enabled $True `
            -ChangePasswordAtLogon $False `
            -DisplayName "$Lastname, $Firstname" `
            -Description $Description `
            -AccountPassword $Password `
            -Path $OU `
    }
Issue With Elasticsearch Ingest Node Grok Posted: 30 Sep 2021 08:01 AM PDT I'm trying to parse the following message in order to get the duration: "Request finished in 2308.3035ms 200 application/json; charset=utf-8" I tried this, but it returns an exception (the same exception I get for any attempt I make which doesn't work): PUT _ingest/pipeline/get_request_duration { "description": "Get request duration", "processors": [ { "grok": { "field": "message", "patterns": [ "%{GREEDYDATA:message} %{NUMBER:duration:double}ms ${NUMBER:http.status:int} %{WORD:http.content_type}; charset=%{WORD:charset}" ] } } ] } I've tried narrowing down the problem by simplifying the expression to find the problem with no luck. This pattern works, but as soon as I add the status as above it fails: "%{GREEDYDATA:message}%{WORD:http.content_type}; charset=%{WORD:charset}" Any help is appreciated. I've tried this, per Roy's suggestion: PUT _ingest/pipeline/get_request_duration { "description": "Get request duration", "processors": [ { "grok": { "field": "message", "patterns": [ "%{GREEDYDATA:message} %{NUMBER:duration} %{NUMBER:http.status} %{WORD:http.content_type}" ] } } ] } Tried simulating like this: POST _ingest/pipeline/get_request_duration/_simulate { "docs": [ { "_source": { "message": "Request finished in 2308.3035ms 200 application/json; charset=utf-8" } } ] } And got response: { "docs" : [ { "error" : { "root_cause" : [ { "type" : "exception", "reason" : "java.lang.IllegalArgumentException: java.lang.IllegalArgumentException: Provided Grok expressions do not match field value: [Request finished in 2308.3035ms 200 application/json; charset=utf-8]", "header" : { "processor_type" : "grok" } } ], "type" : "exception", "reason" : "java.lang.IllegalArgumentException: java.lang.IllegalArgumentException: Provided Grok expressions do not match field value: [Request finished in 2308.3035ms 200 application/json; charset=utf-8]", "caused_by" : { "type" : "illegal_argument_exception", 
"reason" : "java.lang.IllegalArgumentException: Provided Grok expressions do not match field value: [Request finished in 2308.3035ms 200 application/json; charset=utf-8]", "caused_by" : { "type" : "illegal_argument_exception", "reason" : "Provided Grok expressions do not match field value: [Request finished in 2308.3035ms 200 application/json; charset=utf-8]" } }, "header" : { "processor_type" : "grok" } } } ] } |
draw a circle inside each rectangle Posted: 30 Sep 2021 08:01 AM PDT I want to draw a circle inside each rectangle, but the circle ends up outside the rectangles. Here is my code: plot.new() plot.window(xlim = c(-1,9),ylim = c(-1,9),asp=1) rect(1,1:8,1+1,9) theta <- seq(0, 2 * pi, length = 73)[1:72] x=cos(theta) y=sin(theta) polygon(x,y) |
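The polygon(x,y) call draws a unit circle around the origin because cos/sin are never shifted or scaled: each circle has to be translated to its rectangle's centre and scaled to the rectangle's half-size. The geometry is sketched below in Python; the R fix is the same arithmetic applied inside polygon():

```python
import math

def circle_in_rect(x0, y0, x1, y1, n=72):
    """Vertices of the largest circle centred in the rectangle (x0, y0)-(x1, y1)."""
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0   # rectangle centre
    r = min(x1 - x0, y1 - y0) / 2.0             # largest radius that still fits
    return [(cx + r * math.cos(t), cy + r * math.sin(t))
            for t in (2 * math.pi * k / n for k in range(n))]

pts = circle_in_rect(1, 1, 2, 2)
# every vertex stays inside the rectangle
print(all(1 <= x <= 2 and 1 <= y <= 2 for x, y in pts))  # True
```

In R this corresponds to polygon(cx + r*cos(theta), cy + r*sin(theta)) with cx, cy, r computed per rectangle.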
Dividing a 2D plane into areas that belong together Posted: 30 Sep 2021 08:01 AM PDT I have multiple polygons where some are of different color and some of equal color. I want to group them into areas (e.g. new polygons) that completely contain all polygons of the same color. See the two simple examples which would both satisfy these conditions. The dotted red lines are the desired result. The first example divides the whole plane, the second does not. I don't care as long as all polygons of the same color are grouped. It can be assumed that a solution exists, i.e. there will not be a polygon of blue color fully enclosed by one of black color. Also polygons do not intersect but may share a border like in the example. However, edge cases like this could occur: I'm looking for an algorithm that can accomplish this. The first example reminded me of Voronoi diagrams, but it's different because I have polygons, not individual points. A real world example of this would be to divide a city into districts based on housing blocks. |
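One workable approximation of the generalized-Voronoi idea is to sample points along each polygon's boundary and assign every location in the plane to the colour of the nearest sample; the dividing lines then fall between the colour groups. A rough sketch of the core assignment step (it ignores the shared-border edge cases mentioned above, and the sample points here are invented):

```python
import math

def nearest_color(point, samples):
    """samples maps a colour name to points sampled from that colour's polygons."""
    return min(
        (math.dist(point, s), color)
        for color, pts in samples.items()
        for s in pts
    )[1]

# two colour groups sampled from polygons on opposite sides of the plane
samples = {"blue": [(0, 0), (1, 1)], "black": [(8, 0), (9, 1)]}
print(nearest_color((2, 0), samples))   # blue
print(nearest_color((7, 1), samples))   # black
```

Rasterising the plane with this rule and tracing the boundaries between differently-labelled cells yields the dotted-red-line partition; denser boundary sampling makes the result closer to a true generalized Voronoi diagram of the polygons.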
Get all documents by year in MongoDB Posted: 30 Sep 2021 08:01 AM PDT Example of one document: { "_id":{ "quarter":{ "quarter":4, "year":2009 }, "ticker":"MMM" }, "data":{ "C1":2.4676424337728315, "C10":-1.7952609553649523, "C11":-0.024843006011523616, "C12":19.3099694596474 } } So I want to get all the documents whose year equals 2009 using pymongo. Thank you |
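In pymongo, dot notation reaches into the compound _id, so the filter is simply {"_id.quarter.year": 2009} passed to collection.find(). A server-free sketch showing what that filter selects (the documents below are stand-ins modelled on the example):

```python
# with a live connection this would be: collection.find({"_id.quarter.year": 2009})
docs = [
    {"_id": {"quarter": {"quarter": 4, "year": 2009}, "ticker": "MMM"}},
    {"_id": {"quarter": {"quarter": 1, "year": 2010}, "ticker": "IBM"}},
]

def matches(doc, path, value):
    """Walk a dotted path the way MongoDB interprets it for embedded documents."""
    node = doc
    for key in path.split("."):
        node = node[key]
    return node == value

print([d["_id"]["ticker"] for d in docs if matches(d, "_id.quarter.year", 2009)])  # ['MMM']
```

The same dotted-path filter works in the mongo shell as well: db.collection.find({"_id.quarter.year": 2009}).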
Azure table backup retrieve more than 1000 rows Posted: 30 Sep 2021 08:01 AM PDT I hope somebody can help me to debug this issue. I have the following script from azure.cosmosdb.table.tableservice import TableService,ListGenerator from azure.storage.blob import BlobServiceClient from datetime import date from datetime import * def queryAndSaveAllDataBySize(tb_name,resp_data:ListGenerator ,table_out:TableService,table_in:TableService,query_size:int): for item in resp_data: #remove etag and Timestamp appended by table service del item.etag del item.Timestamp print("insert data:" + str(item) + " into table: "+ tb_name) table_in.insert_or_replace_entity(tb_name,item) if resp_data.next_marker: data = table_out.query_entities(table_name=tb_name,num_results=query_size,marker=resp_data.next_marker) queryAndSaveAllDataBySize(tb_name,data,table_out,table_in,query_size) tbs_out = table_service_out.list_tables() for tb in tbs_out: #create table with same name in storage2 table_service_in.create_table(table_name=tb.name, fail_on_exist=False) #first query data = table_service_out.query_entities(tb.name,num_results=query_size) queryAndSaveAllDataBySize(tb.name,data,table_service_out,table_service_in,query_size) This code checks the tables in storageA, copies them, and creates the same tables in storageB, and thanks to the marker I can get the x_ms_continuation token if I have more than 1000 rows per request. It goes without saying that this works just fine as it is. But yesterday I was trying to make some changes to the code as follows: if in storageA I have a table named TEST, in storageB I want to create a table named TEST20210930, basically the table name from storageA + today's date. This is where the code starts breaking down.
table_service_out = TableService(account_name='', account_key='') table_service_in = TableService(account_name='', account_key='') query_size = 100 #save data to storage2 and check if there is data left in the current table; if yes, recurse def queryAndSaveAllDataBySize(tb_name,resp_data:ListGenerator ,table_out:TableService,table_in:TableService,query_size:int): for item in resp_data: #remove etag and Timestamp appended by table service del item.etag del item.Timestamp print("insert data:" + str(item) + " into table: "+ tb_name) table_in.insert_or_replace_entity(tb_name,item) if resp_data.next_marker: data = table_out.query_entities(table_name=tb_name,num_results=query_size,marker=resp_data.next_marker) queryAndSaveAllDataBySize(tb_name,data,table_out,table_in,query_size) tbs_out = table_service_out.list_tables() print(tbs_out) for tb in tbs_out: table = tb.name + today print(target_connection_string) #create table with same name in storage2 table_service_in.create_table(table_name=table, fail_on_exist=False) #first query data = table_service_out.query_entities(tb.name,num_results=query_size) queryAndSaveAllDataBySize(table,data,table_service_out,table_service_in,query_size) What happens here is that the code runs up to the query_size limit but then fails saying that the table was not found. I am a bit confused here and maybe somebody can help me spot my error. Please, if you need more info, just ask. Thank you so so so much. HOW TO REPRODUCE: In the Azure portal create 2 storage accounts, StorageA and StorageB. In StorageA create a table and fill it with data, over 100 rows (based on the query_size). Set the configuration endpoints: table_service_out = storageA and table_service_in = StorageB |
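A likely culprit: the recursive call passes the new dated name as tb_name, and the continuation query inside the function then asks storageA for a table named TEST20210930 — which only exists in storageB — hence "table was not found" exactly when pagination kicks in after the first page. Keeping the source and destination names as two separate parameters avoids this; a toy in-memory sketch of that shape (no Azure SDK involved, all names invented):

```python
from datetime import date

def copy_table(src_name, dst_name, query, insert, page_size=100, marker=None):
    """Paginate out of src_name, write into dst_name -- the two names never mix."""
    rows, next_marker = query(src_name, page_size, marker)
    for row in rows:
        insert(dst_name, row)
    if next_marker is not None:
        copy_table(src_name, dst_name, query, insert, page_size, next_marker)

# toy 'storage accounts' standing in for table_service_out / table_service_in
storage_a = {"TEST": [{"k": i} for i in range(250)]}
storage_b = {}

def query(name, size, marker):
    """Return one page plus a continuation marker, like query_entities."""
    start = marker or 0
    page = storage_a[name][start:start + size]
    nxt = start + size if start + size < len(storage_a[name]) else None
    return page, nxt

def insert(name, row):
    storage_b.setdefault(name, []).append(row)

today = date(2021, 9, 30).strftime("%Y%m%d")
copy_table("TEST", "TEST" + today, query, insert)
print(len(storage_b["TEST" + today]))  # 250
```

Applied to the real script, queryAndSaveAllDataBySize would take both tb.name (for the continuation query against storageA) and the dated name (for inserts into storageB).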
Find most recent date in multiple data.frame objects of unequal sizes in R Posted: 30 Sep 2021 08:01 AM PDT I have multiple data.frame objects of unequal lengths. I would like to find the most recent date in all of them and store the data somewhere. Here is an example of hopefully reproducible code to illustrate what I would like (with comments and sources). This gives 7 data.frame objects of variable lengths: library(quantmod) # Load ticker data from 2020-01-01 till 2021-02-02 tickers <- c("NKLA", "MPNGF", "RMO", "JD", "COIN") getSymbols.yahoo(tickers, auto.assign = TRUE, env = globalenv(), from = "2020-01-01", to = "2021-02-02") # Load ticker data from 2020-01-01 till yesterday (if not weekend or holiday) tickers2 <- c("IBM", "AAPL", "MRNA") getSymbols.yahoo(tickers2, auto.assign = TRUE, env = globalenv(), from = "2020-01-01") # Close all Internet connections as a precaution # https://stackoverflow.com/a/52758758/2950721 closeAllConnections() # Find xts objects xtsObjects <- names(which(unlist(eapply(.GlobalEnv, is.xts)))) # Convert xts to data.frame # https://stackoverflow.com/a/69246047/2950721 for (i in seq_along(xtsObjects)) { assign(xtsObjects[i], fortify.zoo(get(xtsObjects[i]))) } # 1st column name from Index to Date # https://stackoverflow.com/a/69292036/2950721 for (i in seq_along(xtsObjects)) { tmp <- get(xtsObjects[i]) colnames(tmp)[colnames(tmp) == "Index"] <- "Date" assign(xtsObjects[i], tmp) } remove(tmp) Individually retrieving the dates is pretty straightforward: max(AAPL$Date) max(IBM$Date) max(JD$Date) max(MPNGF$Date) max(MRNA$Date) max(NKLA$Date) max(RMO$Date) But when I try the following codes none of them would render or, better yet, store the most recent dates with corresponding origin (i.e., ticker): dataframeObjects <- names(which(unlist(eapply(.GlobalEnv, is.data.frame)))) # Tentative 1 for (i in seq_along(dataframeObjects)) { mostRecentDates <- max(dataframeObjects[i]$Date) } # Tentative 2 for (i in 1:length(dataframeObjects)) {
mostRecentDates <- max(dataframeObjects[i]["Date"]) } Both attempts give [1] NA when the variable mostRecentDates is printed. Important: In the final code there won't be any tickers and tickers2 variables. There will be a certain quantity of data.frame objects that will be loaded locally and it is those that will be searched for the last date available. My question: - What code is needed in order to store the most recent dates of all
data.frame objects (if possible by invoking dataframeObjects , but not tickers and tickers2 )? Thanks in advance. Systems used: - R version: 4.1.1 (2021-08-10)
- RStudio version: 1.4.1717
- OS: macOS Catalina version 10.15.7 and macOS Big Sur version 11.6
|
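The loops fail because dataframeObjects[i] is just a character string naming the object, not the object itself, so indexing it for Date yields nothing useful — in R the look-up needs get(), as in max(get(dataframeObjects[i])$Date). The same look-up-by-name-then-aggregate pattern, sketched in Python with stand-in date columns:

```python
from datetime import date

# stand-ins for the data.frame objects, keyed by name like dataframeObjects in R
frames = {
    "AAPL": [date(2021, 9, 28), date(2021, 9, 29)],
    "JD":   [date(2021, 1, 29), date(2021, 2, 1)],
}

# resolve each object by its name, then take the max of its Date column
most_recent = {name: max(dates) for name, dates in frames.items()}
print(most_recent["AAPL"])  # 2021-09-29
```

In R the equivalent accumulator would be e.g. sapply(dataframeObjects, function(nm) max(get(nm)$Date)), which keeps the ticker names attached to the dates.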
Node.js Mongodb connection pooling app.get how to? Posted: 30 Sep 2021 08:01 AM PDT The MongoDB documentation says var express = require('express'); var mongodb = require('mongodb'); var app = express(); var MongoClient = require('mongodb').MongoClient; var db; // Initialize connection once MongoClient.connect("mongodb://localhost:27017/integration_test", function(err, database) { if(err) throw err; db = database; // Start the application after the database connection is ready app.listen(3000); console.log("Listening on port 3000"); }); // Reuse database object in request handlers app.get("/", function(req, res) { db.collection("replicaset_mongo_client_collection").find({}, function(err, docs) { docs.each(function(err, doc) { if(doc) { console.log(doc); } else { res.end(); } }); }); }); But if I put app.get("/", function(req, res) { db.collection("replicaset_mongo_client_collection").find({}, function(err, docs) { docs.each(function(err, doc) { if(doc) { console.log(doc); } else { res.end(); } }); }); }); in a script, I do not get a response. I tried writing to and reading from my collection. I did change the example to match my db items. I have express and the main code running when attempting to call this app.get from another program. I can read and write to the db by opening and closing the connection on each call, but I am trying to figure out how to use the connection pool so I do not have to open and close the db every time. I think maybe I am not calling it right? Any help would be appreciated! |
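The Node example works because MongoClient.connect runs exactly once and every request handler closes over the same db variable; moving the app.get block into a separate script gives that script its own undefined db, so the handler silently has nothing to query. The "connect once, reuse everywhere" shape, reduced to a language-neutral Python sketch (connect() is a stand-in for the real MongoClient call):

```python
_client = None
connect_calls = []

def connect():
    connect_calls.append(1)   # stands in for the one real MongoClient.connect(url)
    return object()

def get_db():
    """Create the connection on first use; every later call reuses the same object."""
    global _client
    if _client is None:
        _client = connect()
    return _client

a, b = get_db(), get_db()
print(a is b, len(connect_calls))  # True 1
```

In the Node version, the practical fix is to create the client in one module, export the shared db (or a getDb() accessor), and require that module from every file that registers routes, rather than re-declaring db per script.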
Connecting to postgres in a docker container via localhost fails Posted: 30 Sep 2021 08:01 AM PDT I'm running a postgres server in a docker container using the following docker run command (the environment variables are set properly). docker run \ --name zero2prod \ -e POSTGRES_USER=${DB_USER} \ -e POSTGRES_PASSWORD=${DB_PASSWORD} \ -e POSTGRES_DB=${DB_NAME} \ -p "${DB_PORT}":5432 \ -d postgres \ postgres -N 1000 But the psql command fails to connect to the server. I typed the command as follows. PGPASSWORD="${DB_PASSWORD}" psql -h "localhost" -U "${DB_USER}" -p "${DB_PORT}" -d "postgres" The error message is this. psql: error: could not connect to server: FATAL: password authentication failed for user "postgres" Does anyone know why the command fails? Note: I'm using a Windows 10 machine and the docker environment was installed using Docker Desktop for Windows . |
Does anybody know what a ~BROMIUM directory is? It was related to an svg file Posted: 30 Sep 2021 08:01 AM PDT For some reason a directory ~BROMIUM was created when I added an svg file to my assets/img directory. When I pushed my git working branch to my remote GitHub branch nothing was wrong. I could also merge the branch with the development branch. But when I pulled the development branch I got the error message: cannot create directory at 'assets/img/~BROMIUM': No such file or directory I solved it by deleting the folder ~BROMIUM (which for some reason contained the svg file) in my GitHub development branch, and then I was able to pull everything correctly. But I am wondering what caused this? I did not create the folder myself and I could not find any information about it. |
Group by and order by range of age in MySQL Posted: 30 Sep 2021 08:01 AM PDT I have this query that displays the age range and the total without problem: $sql="SELECT CASE WHEN (DATE_FORMAT(NOW(), '%Y') - DATE_FORMAT(date_nacer_format, '%Y') - (DATE_FORMAT(NOW(), '00-%m-%d') < DATE_FORMAT(date_nacer_format, '00-%m-%d'))) < 1 THEN '< 1 año' WHEN (DATE_FORMAT(NOW(), '%Y') - DATE_FORMAT(date_nacer_format, '%Y') - (DATE_FORMAT(NOW(), '00-%m-%d') < DATE_FORMAT(date_nacer_format, '00-%m-%d'))) <= 4 THEN '1-4 año' WHEN (DATE_FORMAT(NOW(), '%Y') - DATE_FORMAT(date_nacer_format, '%Y') - (DATE_FORMAT(NOW(), '00-%m-%d') < DATE_FORMAT(date_nacer_format, '00-%m-%d'))) <= 14 THEN '5-14 año' END AS age, COUNT(*) total FROM patients_appointments GROUP BY age"; $data['query']= $this->db->query($sql); In phpMyAdmin the table displays the data ordered by age as written in the query. But when I display this table with PHP using CodeIgniter, the table is not ordered by age. How can I solve this issue? Thank you in advance. |
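GROUP BY alone guarantees no output order, so the likely fix is an explicit ORDER BY age at the end of the query — phpMyAdmin just happened to show the rows in a convenient order. The CASE arithmetic itself (subtract one year when the birthday has not yet occurred this year, then bucket) can be transcribed to Python for checking the boundaries:

```python
from datetime import date

def age_bucket(born, today):
    # same as the SQL: year difference, minus 1 if the birthday is still ahead
    age = today.year - born.year - ((today.month, today.day) < (born.month, born.day))
    if age < 1:
        return '< 1 año'
    if age <= 4:
        return '1-4 año'
    if age <= 14:
        return '5-14 año'
    return None   # the SQL CASE also falls through to NULL here

print(age_bucket(date(2021, 5, 1), date(2021, 9, 30)))   # < 1 año
print(age_bucket(date(2016, 12, 1), date(2021, 9, 30)))  # 1-4 año
```

Note that a plain ORDER BY age sorts the labels alphabetically ('1-4 año' before '< 1 año'); ordering by the underlying age expression, or by a FIELD(age, '< 1 año', '1-4 año', '5-14 año') list, keeps the buckets in logical order.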
Is it possible to only test specific functions with doctest in a module? Posted: 30 Sep 2021 08:01 AM PDT I am trying to get into testing in Python using the doctest module. At the moment I do - Write the tests for the functions.
- Implement the function's code.
- If tests pass, write more tests and more code.
- When the function is done move on to the next function to implement.
So after 3 or 4 (independent) functions in the same module with many tests I get a huge output from doctest, and it is a little annoying. Is there a way to tell doctest "don't test functions a() , b() and c() ", so that it runs only the unmarked functions? I only found the doctest.SKIP flag, which is not sufficient for my needs. I would have to place this flag in a lot of lines. And if I wanted to check a marked function again, I would have to go through the code manually and remove every flag I set inside. |
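Yes — instead of doctest.testmod() , which collects every docstring in the module, doctest can be pointed at individual objects. doctest.DocTestFinder / DocTestRunner (or the simpler doctest.run_docstring_examples ) test exactly the functions you hand them, so the already-finished functions stay silent without any flags:

```python
import doctest

def add(a, b):
    """
    >>> add(2, 3)
    5
    """
    return a + b

def untested():
    """This one is skipped simply by not being passed to the finder."""

finder = doctest.DocTestFinder()
runner = doctest.DocTestRunner()
for test in finder.find(add):   # only add()'s examples are collected
    runner.run(test)
print(runner.summarize())       # TestResults(failed=0, attempted=1)
```

This inverts the workflow: rather than marking functions to skip, you name the function currently under development, and everything else is left out of the run.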