All Topics


I'm trying to extract with the regular expression below:

rex field=_raw "name\=\w+\s+(?<business_field>.*)"

I'm struggling to extract from the text below; I want to extract the bold part:

Capabilities [{capabilityNodeId=http://127.0.0.1:5000, extra.executor.id={run.name= [FraudLogsCard] ATMX Logs Request/Extraction/Attach 2.1.3, task.uuid=c9999999-9999-49999-9999-99999999,
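One way to prototype the capture before wiring it into rex is plain Python. The sample line and the delimiter choice (stopping at the ", task.uuid" key that follows the value) are assumptions based on the snippet in the question:

```python
import re

# Sample mirroring the log line in the question
line = ("Capabilities [{capabilityNodeId=http://127.0.0.1:5000, "
        "extra.executor.id={run.name= [FraudLogsCard] ATMX Logs "
        "Request/Extraction/Attach 2.1.3, "
        "task.uuid=c9999999-9999-49999-9999-99999999,")

# Capture everything after run.name= up to the ", task.uuid" delimiter;
# the non-greedy .*? keeps the match from running past the next key.
pattern = r"run\.name=\s*(?P<business_field>.*?),\s*task\.uuid"
match = re.search(pattern, line)
print(match.group("business_field"))
```

The same pattern should drop into rex unchanged, since rex uses PCRE-style named groups, but treat it as a sketch against this one sample line.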
Hello, I am looking to create a new field based on a section of a longer string/web address. I didn't see what I was looking for in a search on this site. I assume it's possible with a regex, but I am not good at those.

Here are examples of the fields I currently have. I am looking to extract the Location part of the string. The \Company\Segment\Name part does not change before Location; Location will change and is what I need. I do not need the Area.

\Company\Segment\Name\Location\Area
\Company\Segment\Name\Location\Area

Thanks,
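The shape of the extraction is: skip three fixed backslash-delimited segments, then capture the fourth. A sketch in Python (the LocationA/LocationB values are made up, since both example paths in the question are identical):

```python
import re

# Hypothetical sample paths; only the fourth segment varies
paths = [r"\Company\Segment\Name\LocationA\Area",
         r"\Company\Segment\Name\LocationB\Area"]

# Three fixed segments, then capture the next segment as Location.
# [^\\]+ means "one or more characters that are not a backslash".
pattern = r"^\\[^\\]+\\[^\\]+\\[^\\]+\\(?P<Location>[^\\]+)"
locations = [re.match(pattern, p).group("Location") for p in paths]
print(locations)
```

In Splunk the same named group would be used with rex, e.g. `| rex field=yourfield "^\\\\[^\\\\]+\\\\[^\\\\]+\\\\[^\\\\]+\\\\(?<Location>[^\\\\]+)"`; treat the SPL escaping as a sketch, since backslash quoting in rex is easy to get wrong.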
Hi guys, when I restart splunkd on my HF I see this error message in the logs:

splunk: Invalid key in stanza [duo_input] in /opt/splunk/etc/apps/duo_splunkapp/default/inputs.conf, line 11: python.version (value: python3).

This is that file:

[duo_input]
ikey =
skey =
api_host =
index = duo
interval = 120
source = duo
host = duo_api
sourcetype = json
python.version = python3

What could be happening? Thanks in advance!
I was working on something like the following. I have users that are coming from pages, and I want to track the trends of where they are coming from. Ideally I want to show only the trendlines, with multiple showing at once. So far I am coming up short on how to accomplish this.

| timechart count by previousPage
| trendline ems7(count) as PageTrend

In the end I essentially want only the trendline by previous page to show.
I have a table in a dashboard where I need to change the color of a whole row based on status. My table looks like this:

Version    Count  Status
win 2012   20     compliance
win 2008   35     Non-Compliance
Xen 2.4    40     compliance
win 2016   24     Non-Compliance

I'm looking for a result like this. Can someone help me with the XML, please? Is it possible without using CSS or JS? Thank you.
As the title suggests, I'm getting the following error when trying to execute a custom alert action script. The script is quite simple; it's a shell script that basically looks like this:

#!/bin/bash
if [[ "$1" == "--execute" ]]; then
  https_proxy=proxyname:port curl --header "content-type: text/soap+xml; charset=UTF-8" --data @alertBody.xml https://url/api
fi

If I execute this from the command line using sh alert.sh --execute, it works perfectly. But from Splunk I get the above error instead. It references the script in the following way:

ERROR ScriptRunner - Couldn't start child process. script="/opt/splunk/etc/apps/alert_app/bin/alert.sh --execute"

I am not trying to pass arguments to the script. It's a simple script that posts to an API with predetermined text that's always the same in the XML body. My alert action's configuration looks as follows:

[alert]
is_custom=1
label=alertTest
icon_path=logevent.png
disabled=0

Adding some fields didn't help, but maybe someone can help me find which ones are mandatory? I copied the png from another alerting app and placed it in the same folder.
Hi All, I need to combine two indexes and also need the count values to be summed together.

Code 1:

index=nw_syslog message_type="BGP-5-ADJCHANGE"
| stats count by nodelabel, message_type
| table nodelabel, message_type, count

Table 1:

nodelabel  message_type   count
AOKBF      BGP PEER LOST  2
CMPRS      BGP PEER LOST  2

Code 2:

index=opennms
| stats count by nodelabel, message_type
| table nodelabel, message_type, count

Table 2:

nodelabel  message_type   count
AOKBF      BGP PEER LOST  3
CMPRS      BGP PEER LOST  3

I used append and also join type=outer nodelabel, but the values are not added.

Expected final table:

nodelabel  message_type   count
AOKBF      BGP PEER LOST  5
CMPRS      BGP PEER LOST  5
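The underlying operation is a sum over rows sharing the same (nodelabel, message_type) key, which join does not do. A minimal sketch of that logic, with the rows taken from Table 1 and Table 2:

```python
from collections import Counter

# Rows mirroring Table 1 and Table 2 from the question
table1 = [("AOKBF", "BGP PEER LOST", 2), ("CMPRS", "BGP PEER LOST", 2)]
table2 = [("AOKBF", "BGP PEER LOST", 3), ("CMPRS", "BGP PEER LOST", 3)]

# Sum counts per (nodelabel, message_type) pair across both result sets
totals = Counter()
for nodelabel, message_type, count in table1 + table2:
    totals[(nodelabel, message_type)] += count

print(totals[("AOKBF", "BGP PEER LOST")])  # 5
```

In SPL the equivalent shape would be appending the second search and finishing with `| stats sum(count) as count by nodelabel, message_type` instead of joining; treat that as a sketch, not tested against the data.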
Can I get the following bold words extracted?

1. [ERROR] org.openqa.selenium.TimeoutException
2020-10-16 13:11:42 [machine-run-555555-hit-1087581-step-555] TSXLogAttachmentRobot [ERROR] org.openqa.selenium.TimeoutException: Expected condition failed: waiting for number of open windows to be 2 (tried for 30 second(s) with 500 MILLISECONDS interval)

2. Frzzz Logs Business Process v2.0.7 (TTTxLogAttachment)
Capabilities [{capabilityNodeId=http://127.0.0.1:5000, extra.executor.id={run.name=[Digiminds - FraudLogs] Part 2 v.2.0.7, task.uuid=c65b1153-bd19-4c32-b186-26ae21ca237b, task.name=Frzzz Logs Business Process v2.0.7 (TTTxLogAttachment),

3. the word [INFO]
2020-10-16 15:37:17 [bp-[25cf86e3]-completeMachineRun-569576] HitService [INFO] Snapshot creation for Run: id=569576, uuid=d60be317-fcaa-4d96-89f5-8144216bdd28 name=Debt Structure Project v2.0.22 (MainframeCpsRobot) {size:1, status:COMPLETED, rootRun:25cf86e3-2b33-4ee6-85b0-a303cb612efc, data:} was skipped due to snapshot generation preferences or it is final step

4. the word [DEBUG]
2020-10-16 15:28:00 [TTTTTTTTTT_Worker-44] HitService [DEBUG] Step description for run 20cda5dd-3081-4660-be90-f2103c52a716 from campaign c701b1b7-96f3-46b6-a408-61b18d066e45 is null
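For cases 1, 3, and 4, the target is a bracketed log level, optionally followed by an exception class. A Python sketch of one pattern that covers those three (the shortened sample lines are assumptions based on the question; case 2 needs a separate run.name/task.name pattern):

```python
import re

# Abbreviated samples of lines 1, 3 and 4 from the question
lines = [
    "2020-10-16 13:11:42 [machine-run-555555-hit-1087581-step-555] "
    "TSXLogAttachmentRobot [ERROR] org.openqa.selenium.TimeoutException: "
    "Expected condition failed",
    "2020-10-16 15:37:17 [bp-[25cf86e3]-completeMachineRun-569576] "
    "HitService [INFO] Snapshot creation for Run",
    "2020-10-16 15:28:00 [TTTTTTTTTT_Worker-44] HitService [DEBUG] "
    "Step description for run is null",
]

# Match only brackets whose content is a known level, so [machine-run-...]
# and [TTTTTTTTTT_Worker-44] are skipped; capture a trailing dotted
# ...Exception class when one follows the level.
pattern = r"\[(?P<level>ERROR|INFO|DEBUG)\](?:\s+(?P<exception>[\w.]+Exception))?"
results = []
for line in lines:
    m = re.search(pattern, line)
    results.append((m.group("level"), m.group("exception")))
print(results)
```

The same regex should work in rex with `(?<level>...)`-style group syntax; worth verifying against the full raw events.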
I have created the search below, which:

- Filters by only the hostnames that I want
- Then extracts the STIG ID from those results
- Then extracts the control's status
- Lastly, consolidates Errors, Failed, and Warnings into a group of 'Failed' controls, with the remainder being 'Passed'

What I would like to do is identify any controls that have passed across all of the hostnames and, vice versa, identify the controls that have failed across all of the hostnames. Example: 15 STIG ID(s) have failed across all hosts; 200 STIG ID(s) have passed across all hosts.

Failed  Passed
15      200

index="tenable" sourcetype="tenable:sc:vuln" repository="Audit Repository" [ inputlookup windows10_hostnames.csv | fields dnsName ]
| rex field=pluginName "(?<stigid>\w{4}\S\w{2}\S\d{6})\s+.*"
| rex field=pluginText "\<cm\:compliance-result\>(?<status>\w+)\<\/cm\:compliance-result\>"
| eval passFail=if(IN(status,"ERROR","FAILED","WARNING"), "Failed","Passed")

I tried appending the below to the end of this query. While it's interesting data, I'm having a hard time figuring out the comparison and filtering to get the desired output in the table above.

| stats values(stigid) by dnsName passFail
| stats count by dnsName passFail

Any help is much appreciated.
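The comparison being asked for is set-based: a control "passed across all hosts" when the set of hosts where it passed equals the full host set, and likewise for failures. A sketch of that logic with made-up rows (hostnames and STIG IDs are hypothetical):

```python
from collections import defaultdict

# Hypothetical (dnsName, stigid, passFail) rows
rows = [
    ("host1", "WN10-00-000001", "Passed"),
    ("host2", "WN10-00-000001", "Passed"),
    ("host1", "WN10-00-000002", "Failed"),
    ("host2", "WN10-00-000002", "Passed"),
    ("host1", "WN10-00-000003", "Failed"),
    ("host2", "WN10-00-000003", "Failed"),
]

hosts = {h for h, _, _ in rows}
by_stig = defaultdict(lambda: {"Passed": set(), "Failed": set()})
for host, stig, status in rows:
    by_stig[stig][status].add(host)

# A control counts only if its Passed (or Failed) host set covers every host
passed_all = sorted(s for s, d in by_stig.items() if d["Passed"] == hosts)
failed_all = sorted(s for s, d in by_stig.items() if d["Failed"] == hosts)
print(len(failed_all), len(passed_all))
```

In SPL, one common shape is to compare a per-stigid distinct host count against the overall host count, e.g. something along the lines of `| stats dc(dnsName) as hc by stigid passFail` followed by a `where` against the total; treat that as an untested sketch.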
Hi Team, I want to schedule an alert so that if there are no events for a particular index for more than 15 minutes, it triggers an email notification to our team.

For example: index=os

Kindly help with the query for setting this up.
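The alert condition itself is just a gap check between "now" and the newest event's timestamp. A minimal sketch of that logic (epoch values are arbitrary examples):

```python
def should_alert(last_event_epoch, now, gap_seconds=900):
    """True when the newest event is older than gap_seconds (default 15 min)."""
    return (now - last_event_epoch) > gap_seconds

print(should_alert(last_event_epoch=0, now=1000))    # gap 1000s > 900s
print(should_alert(last_event_epoch=500, now=1000))  # gap 500s, no alert
```

In Splunk itself, a common shape is a scheduled search such as `| tstats latest(_time) as latest where index=os | where now() - latest > 900`, with the alert set to trigger when the number of results is greater than zero; treat that SPL as an untested sketch.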
Dear Support, I make two searches for the same time period (e.g. last 7 days) and on the same data (index):

1. index=db_oracle sourcetype="oracle:audit:text" (OMEGACA OR OMEGA_CORE_AUDIT)
2. index=db_oracle sourcetype="oracle:audit:text" (ACTION=*OMEGACA* OR ACTION=*OMEGA_CORE_AUDIT*)

The idea is to look for SYSDBA actions on application objects.

Search 1 – a wide search – completes very quickly, in a few seconds. Job Inspector: This search has completed and has returned 28 results by scanning 85 events in 1.438 seconds.

Search 2 – a field search – has a big delay and is very slow. Job Inspector: This search has completed and has returned 28 results by scanning 1,264,230 events in 156.1 seconds.

Problem: I was expecting Search 2 to be faster than (or at least equal to) Search 1. I can see that the second search scans 1,264,230 events while Search 1 scans only 85. Why is my second search so much slower?
We are trying to send data to the raw endpoint via Splunk HEC. When we do so, the data is always sent only to the default index and never to the other indexes. Can someone guide us on how to resolve this? Any ideas? The scenario is as below:

[http://LGS-HEC-PROD]
disabled = 0
index = index_one
indexes = index_one,index_two,index_three,index_four,index_five
token = <OUR_HEC_TOKEN>

We are sending data from splunk-library-javalogging to the raw endpoint of our Splunk HEC. Whenever we send with the index changed from index_one to index_two or index_three, the events are still written to index_one (which is the default, index = index_one). This does not happen with the event endpoint; it happens only with the raw endpoint. Is this a limitation of Splunk HEC, or are we missing something? Please advise.
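One thing worth checking, as an assumption about the cause: unlike /services/collector/event, which reads index from the JSON event envelope, the /services/collector/raw endpoint takes event metadata (index, source, sourcetype, host, channel) as URL query parameters. A sketch of constructing such a URL (hostname, port, and channel GUID are placeholders):

```python
from urllib.parse import urlencode

# Placeholder host and channel; index_two is one of the allowed indexes
# from the token's "indexes" list in the question.
base = "https://splunk.example.com:8088/services/collector/raw"
params = {
    "index": "index_two",
    "sourcetype": "json",
    "channel": "FE0ECFAD-13D5-401B-847D-77833BD77131",
}
url = f"{base}?{urlencode(params)}"
print(url)
```

If splunk-library-javalogging is only setting index in the payload, the raw endpoint would silently fall back to the token's default index, which would match the behavior described; verify against the HEC documentation for your Splunk version.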
Hi, we are trying to retrieve configuration for both AD and LDAP using the "Microsoft LDAP App" for Phantom in a new playbook, but before that we want to get the connection working. We have an asset for our LDAP server (user and password working), but when we run "Test Connectivity" it shows us this message:

There is no place to put the Base DN; how can we get the connection working? Thanks in advance!
Hello, I'm having trouble figuring out how to use foreach + eval to get the difference of the fields. I have something like this, and you can use this search to obtain that result:

| makeresults
| eval Country="PH"
| eval "2020-01 Actual"=1
| eval "2020-01 Forecast"=2
| eval "2020-02 Actual"=5
| eval "2020-02 Forecast"=4
| eval "2020-03 Actual"=50
| eval "2020-03 Forecast"=20
| append
    [| makeresults
    | eval Country="IND"
    | eval "2020-01 Actual"=3
    | eval "2020-01 Forecast"=3
    | eval "2020-02 Actual"=2
    | eval "2020-02 Forecast"=2
    | eval "2020-03 Actual"=40
    | eval "2020-03 Forecast"=23 ]
| append
    [| makeresults
    | eval Country="SG"
    | eval "2020-01 Actual"=2
    | eval "2020-01 Forecast"=4
    | eval "2020-02 Actual"=1
    | eval "2020-02 Forecast"=9
    | eval "2020-03 Actual"=30
    | eval "2020-03 Forecast"=53 ]
| fields - _time

And I'm trying to use foreach/eval to get the per-month differences. Thanks in advance.
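Interpreting "the difference of the fields" as Actual minus Forecast per month, the computation looks like this outside Splunk (one row shown; the Diff field name is an assumption):

```python
# One row mirroring the PH makeresults data from the question
rows = [
    {"Country": "PH",
     "2020-01 Actual": 1, "2020-01 Forecast": 2,
     "2020-02 Actual": 5, "2020-02 Forecast": 4,
     "2020-03 Actual": 50, "2020-03 Forecast": 20},
]

for row in rows:
    # Derive the month prefixes ("2020-01", ...) from the field names
    months = sorted({k.split()[0] for k in row if k != "Country"})
    for month in months:
        row[f"{month} Diff"] = row[f"{month} Actual"] - row[f"{month} Forecast"]

print(rows[0]["2020-03 Diff"])  # 50 - 20 = 30
```

The SPL analogue is a foreach over the wildcarded Actual fields, pairing each with its Forecast twin via <<MATCHSTR>>, something like `| foreach "* Actual" [ eval "<<MATCHSTR>> Diff" = '<<MATCHSTR>> Actual' - '<<MATCHSTR>> Forecast' ]`; treat the quoting there as a sketch, since quoting field names with spaces inside foreach is easy to get wrong.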
Hi there, I need to combine these two searches meaningfully; can someone help please?

1st query:

index=xyz ....
| chart count(serviceName) as total count(eval(isPolicySuccessful="true")) as successTotal by serviceName

which gives something like:

serviceName  total  successTotal
srvc1        26429  26344
srvc2        80     80
srvc3        12     12

2nd query:

index=xyz ....
| bin _time span=1s
| stats count AS TPS by _time serviceName
| eventstats max(TPS) as peakTPS by _time serviceName
| eval peakTime=if(peakTPS==TPS,_time,null())
| chart max(TPS) AS "PeakTPS" eval(round(avg(TPS),2)) AS "AVG TPS" min(TPS) AS "MinTPS" first(peakTime) as peakTime by serviceName
| fieldformat peakTime=strftime(peakTime,"%x %X")

which gives something like:

serviceName  PeakTPS  AVG TPS  MinTPS  peakTime
srvc33       11       1.64     1       10/15/20 16:34:40
srvc1        1        1.00     1       10/15/20 16:44:42
srvc5        2        1.63     1       10/15/20 20:35:22

Now the problem is how to merge these two results into one meaningful table, something like:

serviceName  total  successTotal  PeakTPS  AVG TPS  MinTPS  peakTime
srvc1        26429  26344         1        1.00     1       10/15/20 16:44:42

Please help!
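What is being asked for is a key-based merge of two result sets on serviceName. A sketch of that shape (rows taken from the sample tables above, trimmed to two services):

```python
# Outputs of the two searches, keyed by serviceName
totals = {"srvc1": {"total": 26429, "successTotal": 26344}}
tps = {
    "srvc1":  {"PeakTPS": 1,  "AVG TPS": 1.00, "MinTPS": 1,
               "peakTime": "10/15/20 16:44:42"},
    "srvc33": {"PeakTPS": 11, "AVG TPS": 1.64, "MinTPS": 1,
               "peakTime": "10/15/20 16:34:40"},
}

# Union of keys keeps services that appear in only one result set
merged = {}
for svc in totals.keys() | tps.keys():
    merged[svc] = {"serviceName": svc, **totals.get(svc, {}), **tps.get(svc, {})}

print(merged["srvc1"])
```

In SPL, one common way to get the same effect is to append the second search to the first and collapse rows sharing a key, e.g. finishing with `| stats values(*) as * by serviceName`; treat that as an untested sketch.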
Hi all, I'm looking for an old version (but the latest such version) of the Universal Forwarder compatible with the Windows 7 (64-bit) and Windows 2008 R2 (64-bit) operating systems. Can you please send me a link where I can download them? I cannot find them on the official Splunk download page for older releases. Thank you. Regards, Marco
Hi, I created a search with a join, but I want to know if there is a better way to do it (append?):

index=AAA sourcetype="bbb"
| table _time Id
| join Id
    [ search index=AAA sourcetype="ccc"
    | table Id name price ]

Can you help me? Thanks!
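Since both sourcetypes live in the same index and share an Id, the join can usually be replaced by searching both at once and collapsing by Id. The logic, sketched with made-up events (field values are hypothetical):

```python
# Events from two sourcetypes sharing an Id, merged without a join
events = [
    {"Id": 1, "sourcetype": "bbb", "_time": "2020-10-16 10:00:00"},
    {"Id": 1, "sourcetype": "ccc", "name": "widget", "price": 9.99},
]

merged = {}
for e in events:
    # Fold every event with the same Id into one combined row
    merged.setdefault(e["Id"], {}).update(
        {k: v for k, v in e.items() if k != "sourcetype"})

print(merged[1])
```

The SPL analogue would be `index=AAA (sourcetype="bbb" OR sourcetype="ccc") | stats values(_time) as _time values(name) as name values(price) as price by Id`; treat it as a sketch, since the right aggregation depends on whether Ids can repeat within a sourcetype.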
Hi all, does anyone know how to turn the green column chart into a line chart? I have used the property "charting.chart.overlayFields": "% dispo" but with no success. It seems like this property is not working with the SplunkJS Stack. Any thoughts? Thanks for your time; looking forward to the discussion.
Hello everyone, I was reading through the docs and a question came to mind. Does Splunk have the different notions of time that exist in stream-processing products like Flink or Kafka? Flink has event time, ingestion time, and processing time for every event that arrives, and uses complex mechanisms, like watermarks, for handling differences between event time and processing time. From what I see in the docs, Splunk has a single concept of time in the form of timestamps that are added to events as they arrive at the system, and it ignores the event time, i.e. the actual time when the event was created. Am I right, or am I missing something? Thanks.
After downloading Splunk, I tried to connect to Splunk Enterprise and was successful for two separate sessions over a three-day span. Now when I attempt to connect, I receive an error in the new window: either localhost:8000 refused to connect, or an HTTP 404 error. As this is the second occurrence, I have already uninstalled and reinstalled twice, on my desktop and laptop, with no change in behavior: after 2-3 sessions, the next attempt and every one thereafter results in a failed connection. I have tried using Microsoft Edge, Internet Explorer, and Google Chrome. I have tried to connect to 127.0.0.1 and that works (I was told that could be the issue behind the localhost error). Any help is greatly appreciated! Thanks, KC