All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi All, I am using a JavaScript file to export Splunk data from a dashboard to a CSV file. The issue I am facing: for a few records where the strings are long, the data breaks onto the next line. I want to wrap those long strings in " " to stop the data breaking onto the next line. Below is my code; could someone please help me get the expected result?

    $('#exportBtn').on('click', function(e) {
        var searchObj = mvc.components.getInstance("rrc_main");
        var myResults = searchObj.data('results', { output_mode: 'json_rows', count: 0 });
        myResults.on("data", function() {
            if (myResults.hasData()) {
                var data = myResults.data().fields.toString().replace("Edit,", "");
                var rows = myResults.data().rows;
                $.each(rows, function(row) {
                    data = data + "\n";
                    for (var i = 0; i < 53; i++) {
                        if (rows[row][i] === "edit") {
                            continue;
                        }
                        if (rows[row][i] == null) {
                            data = data + "\"\",";
                        } else {
                            data = data + "\"" + rows[row][i].toString() + "\",";
                        }
                    }
                });
            }
        });
    });

Kindly help me wrap the long strings in " " so the CSV can be read in the proper format without long strings breaking onto the next line. Appreciate your help! Thanks, ND
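For CSV output, the usual fix is to escape any embedded double quotes by doubling them and then wrap the whole field in quotes; per RFC 4180, newlines inside a quoted field are then legal. A sketch of a small helper (the name csvQuote is hypothetical, not part of the post's code) that could replace the manual quoting above:

```javascript
// Quote one CSV field: double any embedded double quotes and wrap the
// value in quotes. null/undefined become an empty quoted field.
// If the CSV consumer still breaks on embedded newlines, add
// .replace(/[\r\n]+/g, " ") before returning.
function csvQuote(value) {
    if (value === null || value === undefined) {
        return '""';
    }
    var s = String(value).replace(/"/g, '""');
    return '"' + s + '"';
}
```

In the loop above, the two quoting branches could then collapse into a single line such as data = data + csvQuote(rows[row][i]) + ",";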
We have a requirement to upgrade MongoDB to version 4.2 or later. Can you please let me know which version of MongoDB is used in Splunk 8.2.5? If it is not 4.2 or later, can you please let me know whether MongoDB can be upgraded separately, and whether Splunk will have any issues if the MongoDB upgrade is done?
Here is a handy way to skim all the job results from "- Rule" and "- Gen" searches with ES to look for issues.

    | rest splunk_server=local count=0 /servicesNS/-/-/search/jobs/
    | where match(label, "Rule$|Gen$")
    | table label, eai:acl.owner, eai:acl.app, isFailed, messages.warn, messages.fatal, messages.error
Currently I have a field holding a Julian date. I am trying to convert it using strftime, but I'm having issues.

Date = 2022.091

Current query:

    index = * | eval ConvertedDate = strftime(DATE, "%Y.%j") | table ConvertedDate

Ideally I would like to get an output like 04/03/2022. Thank you, Marco
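strftime expects a Unix timestamp, so a string like 2022.091 first has to be parsed with strptime and only then rendered with strftime. A minimal sketch, assuming the field is named DATE as in the query above (swap %d/%m/%Y for %m/%d/%Y depending on the date order wanted):

```
index = *
| eval ConvertedDate = strftime(strptime(DATE, "%Y.%j"), "%d/%m/%Y")
| table ConvertedDate
```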
hello
At the end of this subsearch I would like to be able to retrieve the sum of Pb + Pb2 + Pb3, grouped by name and town.

    index=abc sourcetype=toto
    | search rtt > 200
    | stats avg(rtt) as rtt by name town
    | eval Pb=if(rtt>200,1,0)
    | search Pb > 0
    | append [ search index=cde sourcetype=tutu
        | stats avg(logon) as logon by name town
        | eval Pb2=if(logon>300,1,0)
        | search Pb2 > 0 ]
    | append [ search index=efg sourcetype=titi
        | stats dc(id) as id by name town
        | eval Pb3=if(id>2,1,0)
        | search Pb3 > 0 ]

something like this:

    | stats sum(Pb + Pb2 + Pb3) by name town

could you help please?
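One possible ending for a search like this: after append, each row carries only one of the three flags, so fill the missing ones with 0, sum each flag per group, and add the sums afterwards. A sketch, assuming the three searches above have already produced Pb, Pb2 and Pb3:

```
| fillnull value=0 Pb Pb2 Pb3
| stats sum(Pb) as Pb, sum(Pb2) as Pb2, sum(Pb3) as Pb3 by name town
| eval total = Pb + Pb2 + Pb3
```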
Hi, could you please help me with using REX/REGEX inside eval? Here is what I'm trying to do:

    | makeresults
    | eval User="user1=test@domain.com | use1=test1"
    | makemv delim="|" User
    | mvexpand User
    | fields - _time
    | eval signature="87347,123,1,0,84"
    | makemv signature delim=","
    | mvexpand signature
    | eval account=if(like(signature,"87347") AND like(User,"%@%"), "REGEX USER TO KEEP EVERYTHING BEFORE @", "DON'T MAKE ANY CHANGES, KEEP THE USER WITH @")

Thanks
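eval has no rex function, but its replace() function takes a regex, so stripping everything from the @ onward can be done inline. A sketch of just the final eval, under the same conditions as in the search above:

```
| eval account=if(like(signature,"87347") AND like(User,"%@%"),
    replace(User, "@.*$", ""),
    User)
```

split(User, "@") with mvindex(..., 0) would be an equivalent alternative to replace() here.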
Hi, hope someone can help here. TA-ms-loganalytics has suddenly stopped working; I can see the below type of errors being logged about the modular inputs:

    ERROR ModularInputs - Unable to initialize modular input "log_analytics" defined inside the app "TA-ms-loganalytics": Introspecting scheme=log_analytics: script running failed (killed by signal 9: Killed).
    raise ConnectionError(e, request=request)\nConnectionError: HTTPSConnectionPool(host='127.0.0.1', port=8089): Max retries exceeded with url: /servicesNS/nobody/TA-ms-loganalytics/data/inputs/log_analytics?count=0&output_mode=json (Caused by NewConnectionError('<solnlib.packages.requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7f118318e610>: Failed to establish a new connection: [Errno 111] Connection refused',))\n

Any help much appreciated.
Hi All, I am getting very frequent alerts for one of my search peers from the DMC, even though the search head is up and working fine. I observed the below error in the logs:

    ERROR ExecProcessor - message from "/xx/xx" Socket error communicating with splunkd (error=The read operation timed out)

Can you please help me with this issue? Thanks in advance.
    04-07-2022 14:25:12.529 -0400 ERROR TailReader - Ran out of data while looking for end of header
    04-07-2022 14:25:12.529 -0400 WARN AggregatorMiningProcessor - Breaking event because limit of 256 has been exceeded - data_source="/logs/gui/adcguidev5_ms11/adcguidev5_ms11.log", data_host="xxxxxxxxxxx", data_sourcetype="ADC-MSLOG"

We are getting a number of these errors and would really like to clear them up. I am getting "Ran out of data while looking for end of header". The log is a WebLogic managed server log.
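The "limit of 256 has been exceeded" warning generally means line merging never found an event boundary and hit the default MAX_EVENTS, so defining explicit event breaking for the sourcetype is the usual remedy. A hedged sketch of a props.conf stanza for the sourcetype in the log above; the LINE_BREAKER regex assumes events start with the standard WebLogic "####<" prefix, which must be adjusted to the actual log format:

```
[ADC-MSLOG]
# break on explicit event boundaries instead of merging lines
SHOULD_LINEMERGE = false
# assumption: each WebLogic event starts with "####<timestamp>"
LINE_BREAKER = ([\r\n]+)####<
# allow long multi-line events (stack traces) without truncation
TRUNCATE = 100000
```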
Hi, I am running a heavy forwarder with HEC and it is sending data to 3 indexers. I am starting to read about ways to optimise this configuration, but I am not sure I have all the settings.

    [tcpout]
    defaultGroup = default-autolb-group

    [tcpout:default-autolb-group]
    disabled = false
    server = hp923srv:9997,hp924srv:9997,hp925srv:9997

    [tcpout-server://hp923srv:9997]
    [tcpout-server://hp924srv:9997]
    [tcpout-server://hp925srv:9997]

Or perhaps someone has a few settings that they know work. All machines have 56 threads with HT on, so I have lots of CPU free.

1st - How do I monitor the history of the data coming in from the HF to the indexers?
2nd - Can you share some settings for the heavy forwarder and the indexers to get the data into Splunk the fastest?

This is what I have read about so far, but I am not 100% sure about some of them; any advice would be great:

    parallelIngestionPipelines = X (to be set on the HF and the indexer, I think)
    dedicatedIoThreads = Y (to be set on the HF)

Thanks in advance, Robert
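For reference, the two settings named at the end of the post live in different files: parallelIngestionPipelines is a server.conf [general] setting (applicable on both the HF and the indexers), while dedicatedIoThreads is an inputs.conf setting on the HEC [http] stanza of the HF. A sketch with illustrative values, not tuned recommendations:

```
# server.conf - on the HF and on each indexer
[general]
parallelIngestionPipelines = 2

# inputs.conf - on the HF, for the HEC input
[http]
dedicatedIoThreads = 4
```

For the history question, the Monitoring Console's indexing performance views, or a search over index=_internal source=*metrics.log* group=thruput, show what the HF has been sending over time.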
Hello Experts, please do not route me to Splunk PS or partner help; I want to do this myself, but with the help of you experts. I have 1 HQ, 2 main big branches, and 100+ small branches, and I want to have visibility into all the sites. What is the best design approach for this type of network? The data ingestion is approximately 200 GB/day, which includes all the sites (HQ + main sites + 100 branches). Thanks
Hey all, I have a Windows 2019 client sending some default data, but there are a handful of inputs visible in the inputs.conf in the apps folder that are not showing up on the front end (while some of the inputs in that very same inputs.conf are working). These same apps and corresponding inputs work on non-Windows-2019 forwarders/clients. Any ideas? Thanks!
I have App_1 that is adding metadata in the inputs.conf file:

    ###### Forwarded WinEventLogs (WEF) ######
    [WinEventLog://ForwardedEvents]
    disabled = 0
    start_from = oldest
    current_only = 0
    checkpointInterval = 1
    ## The addon supports only XML format for the collection of WinEventLogs using WEF, hence do not change the below renderXml parameter to false.
    renderXml = true
    host = WinEventLogForwardHost
    index = system-win
    _meta = machine_class::workstation

I now need to uniquely identify the host that the UF runs on. I expected that this would just be "_meta = uf_name::HostnameOfUF", BUT... App_1 is distributed to several hosts, and I cannot modify it to uniquely identify anything. Instead, I created a new app (App_2) consisting of just an inputs.conf with:

    ###### Forwarded WinEventLogs (WEF) ######
    [WinEventLog://ForwardedEvents]
    _meta = uf_name::HostnameX

Unfortunately, this never shows up, but I believe this is because App_1 cannot "merge" its _meta with the _meta contained in App_2. How can I uniquely identify my host?
I have a 10 GB Dev licence including ITSI: "Splunk Developer Personal License DO NOT DISTRIBUTE (with ITSI)". How can I download ITSI? Where can I get the download link?
Hi, it seems the "splunkd service" process has significant CPU consumption (e.g. 40%, 31%, and so on). These virtual machines have 2 cores. How many CPUs are recommended on a Windows server running the Splunk universal forwarder agent?
I have the following events in Splunk:

    company,name,email,status
    Acme,John Doe,john.doe@example.com,inactive
    Company Inc.,John Doe,john.doe@example.com,active
    HelloWorld Inc.,John Doe,john.doe@example.com,inactive
    Contoso,John Doe,john.doe@example.com,inactive
    Contoso,Mary Doe,mary.doe@example.com,inactive
    HelloWorld Inc.,Mary Doe,mary.doe@example.com,inactive

I want to create a new field called "cumulativeStatus" that will be "active" if that email is active in at least one row, and "inactive" if the person is inactive in all rows. Like this:

    company,name,email,status,cumulativeStatus
    Acme,John Doe,john.doe@example.com,inactive,active
    Company Inc.,John Doe,john.doe@example.com,active,active
    HelloWorld Inc.,John Doe,john.doe@example.com,inactive,active
    Contoso,John Doe,john.doe@example.com,inactive,active
    Contoso,Mary Doe,mary.doe@example.com,inactive,inactive
    HelloWorld Inc.,Mary Doe,mary.doe@example.com,inactive,inactive

Is it possible, and how?
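One way to do this, assuming the fields are already extracted as company/name/email/status: eventstats gathers every status seen per email across all rows (without collapsing them, unlike stats), and an eval then reduces that multivalue to the cumulative result:

```
| eventstats values(status) as all_statuses by email
| eval cumulativeStatus = if(isnotnull(mvfind(all_statuses, "^active$")), "active", "inactive")
| fields - all_statuses
```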
Hello, can anyone help me by providing the JavaScript for the toggle buttons we have used in the dashboard? Appreciated in advance.
Hi Team, there are two reports. One report (the 1st) has a timestamp; the other (the 2nd) doesn't, so I extracted a date from the source filename. A few fields from the 2nd report need to be joined with the 1st, but sometimes the 2nd report is not received. In that case I need to pick up the previous day's file, or the most recent file available. Is there any way to compare the earliest times and pick the values from the 2nd report? Which command should be used in that case?
Hello Community, I have a distributed environment with 2 indexers (each with 48 vCPU and 64 GB RAM), each ingesting 200 GB of logs/day. I want to send them another 200 GB of syslog logs per day (per indexer), but I want to filter the logs before indexing: I would index only 10% of that additional 200 GB at each indexer, so 90% would be rejected. Could you please tell me the hardware requirements for such a setup? I couldn't find any hints.
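For the filtering itself, the standard Splunk approach is to route unwanted events to nullQueue at parse time. A hedged sketch, where the sourcetype name my_syslog and the pattern keep_me are placeholders for the real sourcetype and for a regex matching the 10% to keep:

```
# props.conf
[my_syslog]
TRANSFORMS-filter = drop_all, keep_wanted

# transforms.conf
[drop_all]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_wanted]
REGEX = keep_me
DEST_KEY = queue
FORMAT = indexQueue
```

One sizing implication worth noting: the dropped 90% still consumes parsing CPU on the indexers (or on a heavy forwarder placed in front of them), so the rejected volume is not free even though it is never indexed.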
Dear All, I'm taking the liberty of writing here to see if it would be possible to get some of your support regarding query parameters in a dashboard URL. To describe the scenario: we are creating a dashboard using Splunk Enterprise Dashboard Studio (version 8.2.5). In that dashboard we have placed some inputs (3 normal dropdowns, 3 multiselects), and, as you can probably imagine, each input has an associated token (token1, token2, etc.). As you may know, when we access a Classic dashboard we can see the input names and their values in the dashboard URL, for example: https://server:8000/en-US/app/app_name/dashboard_name?form.input1=val1&form.input2=val2 So with a Classic dashboard this is all good. In our case, however, we are using Dashboard Studio, where the URL is displayed slightly differently, in our example: https://server:8000/en-US/app/app_name/dashboard_name Our client asks whether, when accessing this dashboard, it would be possible to see the inputs and their values in the URL, just as we do in Classic dashboards. From my research, I haven't been able to find this possibility. To sum up: I'm kindly wondering whether some of you would be kind enough to provide some guidance. Thanks a lot! Sincerely, Francisco