All Topics

I've got a dashboard created with Maps+ that plots events on a map. The next section is a table of events. I'd like to schedule this as an automated report sent to some folks. However, the PDF report shows only the table and the message "PDF Export does not support custom visualizations." Anyone have any ideas how I could still accomplish what I'm hoping for?
I have a lookup table that lists all users along with their department, like so:

email                  department
---------------------------------------
user1@company.com      Sales
user2@company.com      Engineering
user3@company.com      Accounting
user4@company.com      Sales
user5@company.com      HR

I also have an index that lists events for a particular application. The index contains lots of fields, but for my purposes I'm really only interested in _time and actor.email. My goal is to count the number of days per week that every user in a given department logs events in the index, even if that number is zero. I can get pretty close to what I want with this search:

index=whatever <base search here>
| lookup user.csv email as actor.email OUTPUT department
| bin _time span=1d
| search department="Sales"
| stats count as numEvents by _time, actor.email
| eval weekNumber = strftime(_time,"%U")
| stats dc(_time) as numDays by actor.email, weekNumber
| xyseries actor.email, weekNumber, numDays

The problem with this search is that a user in the lookup table who returned zero events during the time frame won't appear in the results. I considered appending [|inputlookup user.csv] to the search, but because my append doesn't include a _time field, I can't get everything to line up correctly. How do I run a search for every user in the correct department in the lookup table and return zero events per week if they didn't interact with the system?
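One way to guarantee a row for every user (a rough, untested sketch reusing the field and file names from the question) is to append the lookup itself with a zero count, so every user survives the xyseries, and then zero-fill the remaining cells:

```
index=whatever <base search here>
| lookup user.csv email as actor.email OUTPUT department
| search department="Sales"
| bin _time span=1d
| stats count as numEvents by _time, actor.email
| eval weekNumber = strftime(_time, "%U")
| stats dc(_time) as numDays by actor.email, weekNumber
| append
    [| inputlookup user.csv
     | search department="Sales"
     | rename email as actor.email
     | eval weekNumber = "00", numDays = 0]
| stats max(numDays) as numDays by actor.email, weekNumber
| xyseries actor.email, weekNumber, numDays
| fillnull value=0
```

The appended week "00" is only a placeholder that keeps zero-event users in the output; the placeholder column can be dropped afterwards with | fields - "00".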
Hi, I have created a field, "from", which is a concatenation of 2 string fields, as follows:

index = .....
| eval time_epoch = strptime('SESSION_TIMESTAMP', "%Y-%m-%d %H:%M:%S")
| convert ctime(time_epoch) as hour_minute timeformat="%Y-%m-%d %H:%M"
| strcat URL_PATH ":" SEQUENCE from
| table from

The "from" field is made up of a URL string, a ":" character, and then a number in string format. I need to create another field, "to", so that for each Nth event (whose "from" value ends in the number N), the corresponding "to" holds the (N+1)th event's "from" value. Example:

from       to
....:1     ....:2
....:2     ....:3
....:3     ....:4
...
....:N     <BLANK>

In this way, the last value of the "from" field has a blank "to" value. Essentially, I need to slide the "from" values up by one row and name this shifted field "to". I have tried regex and different eval combinations but with no success. Can you please help? Many thanks, P
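One possible approach (a sketch, not tested against this data): reverse the event order, use streamstats to copy the previous row's "from", then reverse back, so each event ends up carrying the next event's value. The last event has no following row, so its "to" stays blank:

```
index = .....
| strcat URL_PATH ":" SEQUENCE from
| reverse
| streamstats current=f window=1 last(from) as to
| reverse
| table from, to
```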
Is it possible to pull in flow logs from an S3 bucket? The IAM role has been created, but I'm not sure the data is being retrieved/parsed accurately. There was no input option for S3 when using the AWS add-on to pull in VPC flow logs (only Kinesis or CloudWatch). Can the input be configured manually, or do we have to change where the VPC flow logs are stored?
Hello my fellow Splunkers, I am trying to use a second index as a lookup for a field in the first index:

index=products contains the field serialNumbers1
index=inventory contains the fields serialNumbersAll and productsNames
serialNumbers1 is a subset of serialNumbersAll

I need to table serialNumbers1 and the equivalent productsNames. For example:

(index=products OR index=inventory)
| table serialNumbers1 serialNumbersAll productsNames

we get:

serialNumbers1   serialNumbersAll   productsNames
111
222
333
444
                 111                apple
                 222                orange
                 333                banana
                 444                kiwi
                 555
                 666
                 777
                 888

The desired output is:

serialNumbers1   serialNumbersAll   productsNames
111                                 apple
222                                 orange
333                                 banana
444                                 kiwi
                 111                apple
                 222                orange
                 333                banana
                 444                kiwi
                 555                lemon
                 666                vege
                 777                potatoes
                 888                sweet potatoes

Notes: I have a huge set of data (more than 200K events), so using eventstats is not an option as it hits the limit, and increasing the limit is not an option. Using a lookup table is not an option for me either.
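Since stats scales much better than eventstats or join, one untested sketch (assuming the field names above) is to normalise both serial-number fields into a single key and aggregate, keeping only serials that appear in the products index:

```
(index=products OR index=inventory)
| eval serial = coalesce(serialNumbers1, serialNumbersAll)
| eval fromProducts = if(index="products", 1, 0)
| stats max(fromProducts) as inProducts, values(productsNames) as productsNames by serial
| where inProducts = 1
| rename serial as serialNumbers1
| table serialNumbers1, productsNames
```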
I will be the first to admit I am by no means even a novice in Splunk. I am trying to fix an issue that was recently created by the need to update a service account password associated with Splunk. We recently changed the password for the account that runs the splunkd service. The service started back up without any issues; however, when I attempt to log into the Splunk web app I get an unauthorized error. It seems like an obvious authentication issue, but due to my lack of knowledge of Splunk and how it is set up, I am not even sure where to begin looking.
Have a requirement to get Cisco AMP events into Splunk Cloud. For Splunk Enterprise I use Python, but with no access to the back end, how is it done in Cloud? There is no "Cisco AMP" TA, so I'm at a loss (for the moment).
Hello all, We receive the "splunkd.log" from every Universal Forwarder into our "_internal" index. There are some events with log_level=ERROR that I need to analyze; some of them are related to PowerShell script execution errors. The issue with these events is that the script outputs the error across several lines, and the event is split into multiple events, all of them with the same "_time" (in the image below, the field "a_fechahora" is equal to _time). I was able to merge the "a_mensaje" rows by "_time", but there is an issue with the order of the rows. E.g., as you can see in green, the "Co" statement is incomplete and continues some lines below with "mmandNotFoundException". The same happens with "or if a pat" (...) "h was included". Is this a common/known issue? Is there any way to prevent these scrambled lines in PowerShell outputs? Regards,
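When multi-line output is broken into separate events at index time, the usual fix is event-breaking configuration rather than search-time merging. A purely illustrative props.conf sketch (the stanza name and timestamp pattern are placeholders, not taken from this environment):

```
# props.conf (illustrative only)
[your:powershell:sourcetype]
SHOULD_LINEMERGE = false
# break a new event only where a fresh timestamp begins
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2})
TRUNCATE = 10000
```

Note that splunkd.log itself is parsed with Splunk's built-in settings, so this applies mainly if the script output can be routed to its own sourcetype.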
The advisory (https://www.splunk.com/en_us/product-security/announcements/svd-2022-0502.html) talks about Splunk Enterprise, but makes no mention of the Universal Forwarder. Since the UF has many of the same API features as Enterprise, and I do see verboseLoginFailMsg = true when running the btool utility, my assumption is that the UF is also vulnerable. Can someone confirm:
1. If my assumption is correct
2. If the same mitigation can be performed (so we can use the deployment server to resolve it)
3. Which version of the UF is not vulnerable.
Thanks, Gord T.
We use Splunk dashboards with searches that refresh on regular intervals as screens to monitor in an operations center. Does anyone have experience with having new results (i.e. rows) light up, flash, or do something else eye-catching to grab attention?
Hi, I need to extract 2 values from one field value. For each event I have a field, "file_name", which is always written in the same shape: the city comes first, then the tool. I want to extract these values for each event:

file_name                                                               city       tool
montreal - tool3 - SFR - Alert ID 123456 - (3 May 2022 01:20:24 IDT)    montreal   tool3
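Assuming the "file_name" value always starts with "city - tool - ..." (a sketch, untested), a rex with two capture groups on the first two dash-separated segments should work:

```
index = ...
| rex field=file_name "^(?<city>[^-]+?)\s*-\s*(?<tool>[^-]+?)\s*-"
| table file_name, city, tool
```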
Hello Fellow Splunkers! The goal is to create ServiceNow Incidents/Events exclusively from Splunk Enterprise alerts using the custom alert action (we do not have Splunk ES or Splunk ITSI). I have a distributed Splunk Enterprise deployment that contains an Indexer Cluster, a Heavy Forwarder, and two standalone Search Heads (in addition to the Cluster Master and Deployment Server). I have yet to see this implementation work in a deployment with only Splunk Enterprise. Please let me know if this configuration is possible with an on-prem Splunk Enterprise deployment. For context, I currently have the following configured:

- Splunk_TA_snow deployed to the Search Heads, Heavy Forwarder, and Indexer Cluster (the add-on on the Indexer Cluster does not contain the inputs.conf file)
- Logs are being ingested via the Heavy Forwarder, and the ServiceNow account is making successful connections from the account configured on the Heavy Forwarder and Search Heads

I have tried configuring the below on alerts with no luck. I have also tried passing | snowincident within the alert's SPL to create a new incident in SNOW. Any help or tips will be greatly appreciated!
Hello! I would like to count values from one field based on another field. I have events with the following 2 fields (Doors_Order & RQM_Order). For each Doors_Order value, I would like to count how many times it appears across the RQM_Order values of all events. In Excel this would be: =COUNTIF(E:E;C9)

I have tried this:

| basesearch
| eventstats count(eval(RQMOrder_NotValidated=RQMOrder)) as ReqGap2

But this only counts when the 2 fields match within a single event, not across the entire event list. I have tried lots of other things, but none of them worked. In Excel this looks easy. Is there any solution in Splunk?
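One COUNTIF-style sketch (untested, and subject to subsearch limits on very large result sets): compute the per-value counts of RQM_Order in a subsearch, then join them back on Doors_Order:

```
| basesearch
| join type=left Doors_Order
    [| basesearch
     | stats count as ReqGap2 by RQM_Order
     | rename RQM_Order as Doors_Order]
| fillnull value=0 ReqGap2
```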
Hi all, I am getting these kinds of errors in Splunk when trying to create a new ticket in Jira whose body contains emojis or special characters, via the app "JIRA Service Desk simple addon" created by @guilmxm:

signature="JIRA Service Desk ticket creation has failed!:'latin-1' codec can't encode character '\u2019' in position 315: Body ('’') is not valid Latin-1. Use body.encode('utf-8') if you want to send it encoded in UTF-8.

I've checked the issue with the Jira admin team, and according to them the Jira DB is already in UTF-8, so it might be the add-on configuration. I am wondering if there is a file where I can check and/or modify the add-on configuration to use UTF-8, or another workaround I can apply to solve the issue. Thanks much in advance.
Hello, I was actually hoping this would be rather straightforward. I can set the width for panels, inputs, single charts, etc. However, for some reason a table will not respond to the style settings. I am using these formattings:

#input_view_mode{width:100% !important;}  => works for the link list
#customer_pie{width:33% !important;}      => works for the panel

I can even set column widths and line heights, but I cannot reduce the table width to 80% of the panel width. Any ideas? Kind regards, Mike
Hi, I have a field which is a concatenation of a URL and a sequence number, e.g. /google.ie:23 or /ebay.com:43. I need to order this string field in descending order based on the number at the end of the field, and then create 2 fields, "To" and "From", showing:

To                 From
/yahoo.ie:1        /google.ie:2
/google.ie:2       /facebook.ie:3
...

At the moment I am able to do the concatenation, but I am unable to sort on the numbers or create the required "To" and "From" fields:

index = .....
| eval time_epoch = strptime('SESSION_TIMESTAMP', "%Y-%m-%d %H:%M:%S")
| convert ctime(time_epoch) as hour_minute timeformat="%Y-%m-%d %H:%M"
| strcat URL_PATH ":" SEQUENCE combo_time
| table combo_time

Can you please help? Many thanks, P
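A possible sketch (untested; assumes SEQUENCE holds the numeric part): convert the sequence to a number for sorting, then use streamstats to copy the previous row's value into the second field:

```
index = .....
| strcat URL_PATH ":" SEQUENCE From
| eval seq = tonumber(SEQUENCE)
| sort 0 - seq
| streamstats current=f window=1 last(From) as To
| table To, From
```

Depending on which direction the pairing should run, swap the sort order or the field names.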
Hi, I have been asked to create a web-like visualization to capture webpages being hit over a time period. What the stakeholder has requested is similar to a Neo4j graph. I have tried using the network viz, but it does not really complement the time element of my data. Can anyone suggest another visualization in Splunk similar to what is being requested? Many thanks, P
Hello, please, I would like to know where I can find documentation about the events TableView.on('rendered') and ChartView.on('rendered') and, more generally, about the TableView and ChartView objects. I cannot find detailed documentation on:

https://dev.splunk.com/
https://docs.splunk.com/DocumentationStatic/WebFramework/1.5/compref_table.html
https://docs.splunk.com/DocumentationStatic/WebFramework/1.5/compref_chart.html

The rendered event is used extensively in many SimpleXML extension samples (for example, the ones that use a custom cell renderer). Many thanks in advance and kind regards.
Has anyone found this error event in SOAR?    
Is there a way of showing a warning to the user based on their SPL? My use case is that users should not generally search indexes which are fed into an accelerated data model. Specifically, it's faster and more accurate to search the network_traffic ADM than a firewall index.