All Topics



We use Splunk dashboards with searches that refresh on regular intervals as screens to monitor in an operations center. Does anyone have experience with making new results (i.e. rows) light up, flash, or do something else eye-catching to grab attention?
Hi, I need to extract 2 values from one field. Each event has a "file_name" field that is always written in the same shape: the city comes first, then the tool, and I want to extract these values for each event. For example:

file_name: montreal - tool3 - SFR - Alert ID 123456 - (3 May 2022 01:20:24 IDT)
city: montreal
tool: tool3
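A minimal rex sketch, assuming file_name always begins with "<city> - <tool> - ..." and that neither value contains a hyphen:

    index=...
    | rex field=file_name "^(?<city>[^-]+?)\s*-\s*(?<tool>[^-]+?)\s*-"
    | table file_name city tool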
Hello Fellow Splunkers! The goal is to create ServiceNow Incidents/Events exclusively from Splunk Enterprise alerts using the Custom Alert action (we do not have Splunk ES or Splunk ITSI). I have a distributed Splunk Enterprise deployment that contains an Indexer Cluster, a Heavy Forwarder, and two standalone Search Heads (in addition to the Cluster Master and Deployment Server). I have yet to see this implementation work in a deployment with only Splunk Enterprise, so please let me know if this configuration is possible with an on-prem Splunk Enterprise deployment.

For context, I currently have the following configured:

- Splunk_TA_snow is deployed to the Search Heads, Heavy Forwarder, and Indexer Cluster (the copy of the add-on on the Indexer Cluster does not contain the inputs.conf file).
- Logs are being ingested via the Heavy Forwarder, and the ServiceNow account is making successful connections from the Heavy Forwarder and from the account configured on the Search Heads.

I have tried configuring the add-on's alert action on alerts with no luck. I have also tried passing | snowincident within the alert's SPL to create a new incident in SNOW. Any help or tips will be greatly appreciated!
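One hedged suggestion, an assumption on my part that is worth verifying against your add-on version's documentation: in some versions of the Splunk Add-on for ServiceNow, snowincident is a generating command for looking up existing incidents, and the streaming command that creates incidents from search results is snowincidentstream. A minimal test from a Search Head where the account is configured might look like:

    | makeresults
    | snowincidentstream short_description="Test incident created from a Splunk search"

If that command is present and creates an incident, the same connectivity should hold for the custom alert action on saved alerts.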
Hello! I would like to count values from one field based on another field. I have events with the following 2 fields (Doors_Order & RQM_Order). For each Doors_Order value, I would like to count how many times it appears across the RQM_Order field of all events. In Excel this looks like: =COUNTIF(E:E;C9)

I have tried this:

| basesearch | eventstats count(eval(RQMOrder_NotValidated=RQMOrder)) as ReqGap2

But this only counts when the 2 fields match within a single event, not across the entire event list. I have tried lots of other things, but none of them worked. In Excel this looks easy. Is there any solution in Splunk?
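A sketch of one way to emulate COUNTIF, assuming the result set stays within subsearch limits (<your base search> is a placeholder for your actual search): first count how often each value appears in RQM_Order, then attach that count to every event whose Doors_Order matches.

    <your base search>
    | join type=left Doors_Order
        [ search <your base search>
          | stats count as ReqGap2 by RQM_Order
          | rename RQM_Order as Doors_Order ]
    | fillnull value=0 ReqGap2

Keep in mind that join truncates large subsearch results (around 50,000 rows by default), so for very large data a lookup-based approach may be needed instead.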
Hi all, I am getting this kind of error in Splunk when trying to create a new ticket in Jira where the body contains emojis or special characters, via the app "JIRA Service Desk simple addon" created by @guilmxm:

signature="JIRA Service Desk ticket creation has failed!: 'latin-1' codec can't encode character '\u2019' in position 315: Body ('’') is not valid Latin-1. Use body.encode('utf-8') if you want to send it encoded in UTF-8."

I've checked the issue with the Jira admin team and, per them, the Jira DB is already in UTF-8, so it might be the add-on configuration. I am wondering if there is any file where I can check and/or modify the add-on configuration to use UTF-8, or another workaround I can apply to solve the issue. Thanks much in advance!
Hello, I was actually hoping this would be rather straightforward. I can set the width for panels, inputs, single charts, etc. However, for some reason, a table will not respond to the style settings. I am using these format settings:

#input_view_mode{width:100% !important;} => works for the link list
#customer_pie{width:33% !important;} => works for the panel

I can even set column widths and line heights, but I cannot reduce the table width to 80% of the panel width. Any ideas? Kind regards, Mike
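One guess, offered as an assumption rather than a confirmed fix: the ID selector targets the panel's outer container, while the rendered <table> element inside it sizes itself independently, so the rule may need to target the inner table element. A sketch in the same style, where customer_table is a hypothetical table ID:

    #customer_table table{width:80% !important;}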
Hi, I have a field which is a concatenation of a URL and a sequence number, e.g. /google.ie:23 or /ebay.com:43. I need to order this string field in descending order based on the sequence number at the end of the field, and then create 2 fields, "To" and "From", showing:

To                       From
/yahoo.ie:1         /google.ie:2
/google.ie:2        /facebook.ie:3
.............................

At the moment I am able to do the concatenation, but I am unable to sort on the numbers or create the required "To" and "From" fields:

index = .....
| eval time_epoch = strptime('SESSION_TIMESTAMP', "%Y-%m-%d %H:%M:%S")
| convert ctime(time_epoch) as hour_minute timeformat="%Y-%m-%d %H:%M"
| strcat URL_PATH ":" SEQUENCE combo_time
| table combo_time

Can you please help? Many thanks, P
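A sketch of one way to build the pairs, assuming the sequence number is always the digits after the final colon: extract it, sort on it, then use autoregress to copy the previous row's value into To.

    index=...
    | strcat URL_PATH ":" SEQUENCE combo_time
    | rex field=combo_time ":(?<seq>\d+)$"
    | eval seq=tonumber(seq)
    | sort 0 seq
    | autoregress combo_time as To p=1
    | eval From=combo_time
    | where isnotnull(To)
    | table To From

Swap in | sort 0 - seq if you want the final rows in descending sequence order.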
Hi, I have been asked to create a web-like visualization to capture webpages being hit over a time period. The stakeholder has requested something similar to a Neo4j graph. I have tried using the network viz, but it does not really complement the time element of my data. Can anyone suggest another visualization in Splunk similar to what is being requested? Many thanks, P
Hello, I would like to know where I can find documentation about the events TableView.on('rendered') and ChartView.on('rendered') and, more generally, about the TableView and ChartView objects. I cannot find detailed documentation at:

https://dev.splunk.com/
https://docs.splunk.com/DocumentationStatic/WebFramework/1.5/compref_table.html
https://docs.splunk.com/DocumentationStatic/WebFramework/1.5/compref_chart.html

The rendered event is used extensively in many SimpleXML extension samples (for example, the ones that use a custom cell renderer). Many thanks in advance and kind regards.
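For reference, this is the pattern most of those SimpleXML extension samples follow; my_table here is a placeholder for the table's id in the dashboard XML:

    require([
        'splunkjs/mvc',
        'splunkjs/mvc/simplexml/ready!'
    ], function(mvc) {
        // Look up the dashboard element declared with id="my_table"
        var tableElement = mvc.Components.get('my_table');
        tableElement.getVisualization(function(tableView) {
            // Fires each time the table finishes (re)rendering
            tableView.on('rendered', function() {
                console.log('table rendered');
            });
        });
    });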
Has anyone found this error event in SOAR?    
Is there a way of showing a warning to the user based on their SPL? My use case is that users should not generally search indexes which are fed into an accelerated data model. Specifically, it's faster and more accurate to search the network_traffic ADM than a firewall index.
Hi, I have the following query:

index=* sourcetype=CustomAccessLog | table "host", "source"

The output is:

host                              source
server32.de.db.com     /path/to/server/instances/IFM_RT_1/logs/subdir_logs/log.file
server31.de.db.com     /path/to/server/instances/IFM_RT_2/logs/subdir_logs/log.file

I need to alter the search query so that the output becomes:

host     source
32         IFM_RT_1
31         IFM_RT_2

For the IFM_RT_ part I tried index=* sourcetype=CustomAccessLog | rex field=_raw "(?<IFM_RT_>.*)", but I couldn't get the needed data. Can I have your help here? Thanks!
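A sketch that extracts from the host and source fields rather than from _raw, assuming host always looks like serverNN.… and source always contains /instances/<name>/:

    index=* sourcetype=CustomAccessLog
    | rex field=host "^server(?<host_num>\d+)"
    | rex field=source "/instances/(?<instance>[^/]+)/"
    | rename host_num as host, instance as source
    | table host, source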
I have a column, Type, whose values are separated by a newline character (i.e. Type is a multivalue field):

ID     Type
01     Type_A
        Type_B
        Type_C

I need new columns with values 1 or 0 for each row (e.g. the row with ID 01) based on whether each value is present in Type (1 where present, 0 where absent):

ID     Type        Type_A     Type_B     Type_C
01     Type_A     1                1                1
        Type_B
        Type_C
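A sketch assuming Type is already a multivalue field (if it arrives as one newline-separated string, split it first, e.g. with makemv): mvfind returns the index of the first value matching the regex, or null if none matches.

    ...
    | eval Type_A=if(isnotnull(mvfind(Type, "^Type_A$")), 1, 0)
    | eval Type_B=if(isnotnull(mvfind(Type, "^Type_B$")), 1, 0)
    | eval Type_C=if(isnotnull(mvfind(Type, "^Type_C$")), 1, 0)
    | table ID Type Type_A Type_B Type_C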
Hello, I'm currently using Splunk Enterprise with Udemy, but my license expired and I can't go forward without renewing it. My question is: how do I renew the license, given that when I set Splunk up I used the free version?
Hi folks. I posted here recently exploring how to automatically add an index when docker-compose is bringing up the splunk container. That post is at https://forums.docker.com/t/add-an-entrypoint-and-command-without-messing-up-the-invocation/123686

I had it working, but it no longer does. I somewhat suspect a docker volume problem, somewhat suspect a permissions issue, and also somewhat suspect an OS upgrade. But I really don't know what the problem is.

Inside the splunk container, I see:

[root@splunk splunk]# cat /opt/splunk/etc/apps/splunk_httpinput/local/inputs.conf
[http]
disabled = 0
[http://splunk_hec_token]
disabled = 0
token = really-big-token-thingie

Which is really not what I want. And outside the splunk container (on the MacOS side), I see:

$ cat splunk-files/opt-splunk-etc-apps-splunk_httpinput-local/inputs.conf
cmd output started 2022 Mon May 02 04:19:43 PM PDT
[http]
disabled = 0
[http://splunk_hec_token]
disabled = 0
token = really-big-token-thingie
index = dev_game-publishing

That is what I want. In my docker-compose, I have (among other things):

volumes:
  - ./splunk-files/opt-splunk-etc-apps-splunk_httpinput-local/:/opt/splunk/etc/apps/splunk_httpinput/local/

(That long volume line is all one line. It may or may not be wrapping when you view it, though it is wrapping in this editor.)

I tried both setting up a volume for the entire directory, as well as just that one file. I'm hearing that doing an entire directory tends to be more reliable, but both failed the same way.

The directory containing the file is owned by splunk and has restrictive permissions:

[ansible@splunk splunk]$ cat /opt/splunk/etc/apps/splunk_httpinput/local/inputs.conf
cat: /opt/splunk/etc/apps/splunk_httpinput/local/inputs.conf: Permission denied
[ansible@splunk splunk]$ ls -l /opt/splunk/etc/apps/splunk_httpinput/
total 12
drwxr-xr-x 2 splunk splunk 4096 Jan 15 03:31 default
drwx------ 2 splunk splunk 4096 May 2 22:14 local
drwx------ 2 splunk splunk 4096 May 2 22:14 metadata
[ansible@splunk splunk]$

Which explains why the ansible user can't cat it. But is ansible painting itself into a corner and preventing itself from making all the changes I need?

I also upgraded from MacOS 11.x to 12.3 between when this was working and when it stopped. I don't know if that's related or not.

I have next to no Splunk experience and even less Ansible experience. Thanks for any and all suggestions!
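For reference, a minimal sketch of the volumes stanza as docker-compose expects it, with the host path and container path separated by a colon (the image tag here is an assumption):

    services:
      splunk:
        image: splunk/splunk:latest
        volumes:
          # host path (relative to the compose file) : path inside the container
          - ./splunk-files/opt-splunk-etc-apps-splunk_httpinput-local:/opt/splunk/etc/apps/splunk_httpinput/local

If the mount is declared correctly and still shows stale content, comparing file ownership and permissions between the macOS side and the container side would be a reasonable next step.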
Hi. Has anyone come across hidden double quotes (") in a field, and how to remove them (maybe a "sed" regex)? The double quotes don't appear in the Splunk field or even in an Excel CSV export. They only appear when you save the CSV as a txt file. Apparently Splunk sees them, because it adds additional lines in my search. See below for the Excel export and the corresponding saved txt. It looks to be quoting the contents of the AdminGroup field.

csv:

src_host        AdminGroup
computer1    Domain Admins
                       LocalAdmins

Text file:

src_host        AdminGroup
computer1    "Domain Admins
                       LocalAdmins"
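One hedged observation: CSV writers quote any field that contains a line break, so those "hidden" quotes are most likely appearing because the AdminGroup value holds an embedded newline (which is also why Splunk shows additional lines). Stripping the line breaks should make the quoting disappear; a sketch:

    ... | rex mode=sed field=AdminGroup "s/[\r\n]+/ /g"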
Hello Splunkers, I have built a server using a Shuttle with VMware/Hyper-V. I have Splunk downloaded and installed on the servers that I created, and I am able to reach the Splunk home page via the IP address, but when I put in my login ID and password I get this: "The server encountered an unexpected condition which prevented it from fulfilling the request. Cl
Scenario: We have a data source of interest that we wish to analyze. The data source is hourly host activity events. An endpoint agent installed on a user's host monitors for specific events. The endpoint agent reports these events to the central server, aka manager/collector. Then the central server sends the data/events to Splunk for ingest.

We found that a distinct count of specific action events per hour per host is very interesting to us. If the hourly count per user is greater than the "normal behavior" average, then we want to be alerted. We define normal behavior as the 90-day average of distinct hourly counts per host/user. We define an outlier/alert as an hourly distinct count more than 2 standard deviations above the 90-day hourly average. For instance, if the 90-day hourly average is 2 events for a host, then 10 events in a single hour for that host would fire an alert.

We tried many different methods and found some anomalies. One issue is the events' arrival time in Splunk. Specifically, the data does not always arrive in Splunk at a consistent interval. The endpoint agent may be delayed in processing or sending the data to the central server if the network connection is lost or the running host was suspended/shut down shortly after the events of interest occurred. We have accepted this issue, as it's very infrequent.

Methodology: In order to conduct our analysis we have multiple phases.

Phase 1 > prepare the data and output to a KV store lookup

We run a query to prime the historic data:

    index=foo earliest=-90d@h latest=-1h@h foo_event=* host=*
    | timechart span=1h dc(foo_event) as Foo_Count by host limit=0
    | untable _time host Foo_Count
    | outputlookup 90d-Foo_Count

Then we modify and save the query to append the new data; we use -2h@h and -1h@h to mitigate lagging events. This report runs first, every hour at minute=0:

    index=foo earliest=-2h@h latest=-1h@h foo_event=* host=*
    | timechart span=1h dc(foo_event) as Foo_Count by host limit=0
    | untable _time host Foo_Count
    | outputlookup 90d-Foo_Count append=t

Phase 2 > calculate the upperBound for each user

This report runs second, every hour at minute=15. We add additional statistics for investigation purposes:

    | inputlookup 90d-Foo_Count
    | timechart span=1h values(Foo_Count) as Foo_Count by host limit=0
    | untable _time host Foo_Count
    | stats min(Foo_Count) as Mini max(Foo_Count) as Maxi mean(Foo_Count) as Averg stdev(Foo_Count) as sdev median(Foo_Count) as Med mode(Foo_Count) as Mod range(Foo_Count) as Rng by host
    | eval upperBound=(Averg+sdev*exact(2))
    | outputlookup Foo_Count-upperBound

Phase 3 > trim the oldest data to maintain a 90d@h interval

This report runs third, every hour at minute=30:

    | inputlookup 90d-Foo_Count
    | eval trim_time = relative_time(now(),"-90d@h")
    | where _time>trim_time
    | convert ctime(trim_time)
    | outputlookup 90d-Foo_Count

Phase 4 > detect outliers

This alert runs fourth (last), every hour at minute=45:

    index=foo earliest=-1h@h latest=@h foo_event=* host=*
    | stats dc(foo_event) as Foo_Count by host
    | lookup Foo_Count-upperBound host output upperBound
    | eval isOutlier=if('Foo_Count' > upperBound, 1, 0)

This method is successful at alerting on outliers. Regarding event lag, we monitor and keep track of how significant it is. Originally, we tried using the MLTK with a DensityFunction and partial_fit; however, we have approximately 65 million data points, which causes issues with the Smart Outlier Detection assistant.
The question is whether anyone has a different or more efficient way to do this. Thank you for your time!
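For comparison, here is a minimal single-search sketch of the same logic. It is offered under the assumption that recomputing the 90-day baseline on every run is acceptable; at roughly 65 million data points, the scheduled-lookup approach above may well remain the more practical choice.

    index=foo earliest=-90d@h latest=@h foo_event=* host=*
    | bin _time span=1h
    | stats dc(foo_event) as Foo_Count by _time host
    | eventstats avg(Foo_Count) as Averg stdev(Foo_Count) as sdev by host
    | eval upperBound=Averg+2*sdev
    | where _time>=relative_time(now(),"-1h@h") AND Foo_Count>upperBound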
I'm currently building a query that will pull data from today back to April 26th. The field contains the following time format:

termination_initiated (field name)
2022-05-02T11:47:01.011-07:00
2022-05-02T11:42:10.820-07:00

I'm currently trying to convert it so that I only get results between today and April 26th. I've tried this piece of code with no luck:

    | eval terminiation_started=strptime(termination_initiated,"%Y-%m-%dT %H:%M:%S.%QZ")
    | where termination_started>=relative_time(now(),"-6d@d")
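Two likely culprits, offered as assumptions: the eval creates terminiation_started (note the extra "i") while the where clause reads termination_started, so the comparison field is always null; and the format string has a stray space after the T plus a literal Z, while the sample values actually end in a -07:00 offset (%:z in strptime). A sketch:

    | eval termination_started=strptime(termination_initiated,"%Y-%m-%dT%H:%M:%S.%Q%:z")
    | where termination_started>=relative_time(now(),"-6d@d")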