All Topics


I want to know if it is possible to open an incident in ServiceNow from the notable event console without having ServiceNow fully integrated with Splunk. We can install the Splunk Add-on for ServiceNow on the Splunk side, but we can't install the Splunk app on the ServiceNow side (the project team in charge of ServiceNow refuses to install it). We can only use the Splunk Add-on for ServiceNow and the ServiceNow APIs.
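For what it's worth, an incident can be created from the Splunk side alone through ServiceNow's REST Table API, with nothing installed on the ServiceNow instance. A minimal sketch in Python (the instance URL, credentials, and field values are placeholders; check your incident table for the fields your process requires):

    import requests

    # Hypothetical instance and service account -- replace with your own.
    URL = "https://your-instance.service-now.com/api/now/table/incident"
    AUTH = ("api_user", "api_password")

    payload = {
        "short_description": "Notable event escalated from Splunk",
        "description": "Created via the ServiceNow Table API",
        "urgency": "2",
    }

    resp = requests.post(
        URL,
        auth=AUTH,
        headers={"Accept": "application/json"},
        json=payload,
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json()["result"]["number"])  # number of the created incident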
Hi, is there an option to add MFA to my Splunkbase account, where I upload new apps and versions?
Hi, I'm trying to extract logs via the API using /v2/event/find, found here: Retrieve Events V2 | API Reference | Splunk Developer Program. However, the results I get do not match what I had in mind (they are similar to the examples in the link):

    [
      {
        "id": "AddBYZrEFEF",
        "metadata": {
          "ETS_key1": "detector",
          "ETS_key2": false,
          "ETS_key3": 1001
        },
        "properties": {
          "is": "ok",
          "sf_notificationWasSent": true,
          "was": "anomalous"
        },
        "sf_eventCategory": "USER_DEFINED",
        "sf_eventType": "string",
        "timestamp": 1554672630000,
        "tsId": "XzZYApXCDCD"
      }
    ]

What I'm trying to get are the raw messages from the Log Observer in Splunk SignalFx (image below). The JSON objects I receive are just like the example above, not the messages we are ingesting. I need to extract a set with parameters/filters applied. I'm expecting the result to look like this:

    {
      "timestamp": "Feb 14 2023T12:00:00+0800",
      "message": "Error 404: /path/service/action",
      "severity": "ERROR",
      "service": "myApp-service"
    }

How do I extract it?
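For reference, a minimal call to that endpoint looks like the sketch below (the realm, token, and query parameters are placeholders to verify against the linked reference). Note that /v2/event/find returns SignalFx events (detector and custom events), which matches the output above; raw Log Observer messages are a different data set, so this endpoint will not return them:

    import requests

    # Placeholders: use your own realm and org access token.
    REALM = "us1"
    TOKEN = "your-org-token"

    resp = requests.get(
        f"https://api.{REALM}.signalfx.com/v2/event/find",
        headers={"X-SF-TOKEN": TOKEN},
        params={"query": "*", "limit": 10},  # assumed parameters
        timeout=30,
    )
    resp.raise_for_status()
    for event in resp.json():
        print(event.get("sf_eventType"), event.get("timestamp"))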
Hello Splunkers, I would like to know which products (add-ons or apps) support the 'Web' data model. Is there a way to check directly which data models an add-on or app populates? Thank you in advance, Varun Kohli
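For reference, once an add-on's data is searchable, a quick way to see which sourcetypes actually feed the Web data model is tstats (a minimal sketch below); for a static check, the add-on's eventtypes.conf and tags.conf show whether it tags events with web, which is the Web data model's constraint:

    | tstats count from datamodel=Web by sourcetype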
Hi Team, I'm a newbie to Splunk. I tried to install Splunk Enterprise on my server and it asked for the account type: Local, Domain, or Virtual. I couldn't understand when to use which type of account. Can anyone clearly explain what the account types are, when each is used, and under which conditions? Thanks
Hi All, good day. We are getting duplicate logs in Splunk for multiple sources with the same event; example below. How can we avoid duplicate logs?

index=ivz_unix_linux_events _raw="[2023-02-14 02:22:01.363] [TRACE] shiny-server - Uploading metrics data..."

2/14/23 1:52:01.363 PM  [2023-02-14 02:22:01.363] [TRACE] shiny-server - Uploading metrics data...
host = usapprstdld101   source = /var/log/shiny-server.log   sourcetype = shiny-server

2/14/23 1:52:01.363 PM  [2023-02-14 02:22:01.363] [TRACE] shiny-server - Uploading metrics data...
host = usapprstdld101   source = /var/log/shiny-server.log   sourcetype = shiny-server
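To confirm and quantify the duplication before chasing the cause, a sketch using the index from the question:

    index=ivz_unix_linux_events
    | stats count by _raw, host, source
    | where count > 1
    | sort - count

If every event appears exactly twice, the usual suspects are the same file being monitored by two inputs or two forwarders reading the same log.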
Hi, we are running a Splunk single-server deployment on version 8.2.9. We installed the Splunk Add-on for AWS, version 6.3.1, but after the installation we are unable to configure the add-on. We got the following error message:

How can we solve this situation?

Best regards, Klaus
I'm looking to add some column formatting to a table in Dashboard Studio, but the option is greyed out, saying the column is an array. Why is this, and can I refactor my search to make it work?

index=test AND host="test" sourcetype=test
| stats latest(state) latest(status) by host name state status
| stats list(name) as NAME list(state) as STATE list(status) as STATUS by host
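For reference, list() and values() produce multivalue fields, which Dashboard Studio treats as arrays; flattening them into plain strings usually re-enables column formatting. A sketch that also simplifies the first stats:

    index=test host="test" sourcetype=test
    | stats latest(state) AS state latest(status) AS status by host name
    | stats list(name) AS NAME list(state) AS STATE list(status) AS STATUS by host
    | eval NAME=mvjoin(NAME, ", "), STATE=mvjoin(STATE, ", "), STATUS=mvjoin(STATUS, ", ")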
All, I am having an issue with Splunk when uploading pcap files to the server we run on-prem. When I attempt to upload a pcap file via the data inputs option I receive the following error:

Encountered the following error while trying to save: Splunkd daemon is not responding: ('Error connecting to /servicesNS/admin/launcher/data/inputs/upload_pcap: The read operation timed out',)

My thinking is that the file could be too big and is timing out the daemon. If that is the case, can this timeout be edited? The pcap files are in the 139 MB range. Thank you for any recommendations or insight.
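If it is the splunkd connection behind Splunk Web that times out, that timeout is configurable in web.conf (a sketch; the default is 30 seconds, and Splunk Web needs a restart afterwards):

    # $SPLUNK_HOME/etc/system/local/web.conf
    [settings]
    # Allow slow operations such as large pcap uploads to complete.
    splunkdConnectionTimeout = 300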
If an HF is used for an intermediate / aggregation tier and the data is parsed, what does the ingestion pipeline look like when it hits the indexer? That is, if the HF does parsing, aggregation, and typing, but not indexing, does the data flow through those same queues at the indexer, or is the data injected directly into the indexing queue?
Hi everyone, I'm trying to stop the following sourcetype from being indexed into Splunk using props/transforms on a HF, but with no luck. What am I doing wrong here?

props.conf

[pan:userid]
TRANSFORMS-set-nullqueue = set_nullqueue

transforms.conf

[set_nullqueue]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

Thank you!!
All, the OOTB Kafka JMX configuration is incorrect. Maybe it was correct for an older version of the Kafka client library, but it is not correct any longer. For example, there are metrics defined using the object name match pattern "kafka.consumer:type=consumer-fetch-manager-metrics,*", such as records-lag-max. However, records-lag-max is not an attribute of any object matching that pattern. I have fixed this problem manually and disabled the OOTB configuration. My new configuration uses the object name match pattern "kafka.consumer:type=consumer-fetch-manager-metrics,client-id=*,topic=*" and the instance identifier "client-id,topic". My new metrics do show up in the metric browser: if the client id is myClientId and the topic is myTopic, the metrics show under JMX|ConsumerMetrics|myClientId|myTopic. However, I cannot seem to use the new metrics in a dashboard. When adding a time series to a dashboard, I can choose "JMX|ProducerMetrics" or "JMX|ConsumerMetrics" as the JMX objects to include, but when selecting the actual metric to display, I can't see any of my new metrics; I can only see the OOTB metrics like Average IO Wait. When selecting the JMX objects to include or the metrics to display, I cannot drill down further than ProducerMetrics or ConsumerMetrics, even though there are two levels below (corresponding to the client id and topic). Thanks
Hey everyone, I want to create a search that gives me the following information in a structured way: which type of host sends data to which type of host using which port? In a table it would basically look like this: typeOfSendingHost | typeOfReceivingHost | destPort

At the moment I have the following search, which shows me which type of host is listening on which port. The subsearch is used to provide the type of system based on splunkname; therefore, the field splunkname is created in the main search:

(index="_internal" group=tcpin_connections)
| rename host AS splunkname
| join type=left splunkname [ search index=index2 ]
| stats values(destPort) by type

Example output:

type                    values(destPort)
Indexer                 9995, 9997
Intermediate Forwarder  9996, 9997

In the _internal index, the sending system is stored in the field "hostname" and the receiving system in "host". The field "destPort" is the port to which data is sent. Information about our systems is stored in index2. An event in index2 has the fields "splunkname" and "type". The field "splunkname" contains the hostname of the system (matching hostname/host), and "type" stores the type of the system (Forwarder, Indexer, Search Head...). My question is: how can I make the results look like this?

Sending System Type     Receiving System Type   destPort
Intermediate Forwarder  Indexer                 9997

Thank you so much in advance
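One way to get both endpoints typed (a sketch assuming the field names described above; note that join is subject to subsearch limits, so building a lookup from index2 may scale better):

    (index="_internal" group=tcpin_connections)
    | rename hostname AS sending_host, host AS receiving_host
    | join type=left sending_host
        [ search index=index2
          | rename splunkname AS sending_host, type AS sending_type
          | fields sending_host sending_type ]
    | join type=left receiving_host
        [ search index=index2
          | rename splunkname AS receiving_host, type AS receiving_type
          | fields receiving_host receiving_type ]
    | stats values(destPort) AS destPort by sending_type, receiving_type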
Where do I set columns to wrap text?  The old dashboards had a wrap results field.
I have the ServiceNow add-on for Splunk installed and I'm referencing this document for configuring ServiceNow as a trigger action. Here's a screenshot from the doc for reference:

My question is: can steps 7 and 8 be done via the Splunk API? I have about 100 alerts, and what I'd like to do is perform steps 7 and 8 programmatically (where I create a trigger action that uses ServiceNow Incident Integration and populate some of the values).
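Steps like 7 and 8 amount to setting alert-action keys on the saved search, which can be done over the saved/searches REST endpoint and scripted across all ~100 alerts. A sketch (the action name snow_incident and its parameter names are assumptions; configure one alert by hand first and copy the exact action.* keys from its savedsearches.conf):

    import requests
    from urllib.parse import quote

    # Placeholders: your search head's management port, credentials, and app.
    BASE = "https://splunk.example.com:8089"
    AUTH = ("admin", "changeme")
    APP = "search"

    alert_names = ["My Alert 1", "My Alert 2"]  # the ~100 alerts to update

    for name in alert_names:
        resp = requests.post(
            f"{BASE}/servicesNS/nobody/{APP}/saved/searches/{quote(name, safe='')}",
            auth=AUTH,
            data={
                # Overwrites the action list -- include any existing actions too.
                "actions": "snow_incident",
                # Assumed parameter names: copy the real action.* keys from a
                # manually configured alert's savedsearches.conf stanza.
                "action.snow_incident.param.account": "my_snow_account",
                "action.snow_incident.param.urgency": "2",
            },
            verify=False,  # only if the management port uses a self-signed cert
        )
        resp.raise_for_status()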
Hi,

- We currently have Splunk on-prem with a single site (Site A), version 7.3.4: 10 indexers, 5 SHs, 2 DSs, 2 HFs, CM, DMC, LM (distributed, clustered).
- We have recently built another site (Site B), version 9.0.1: 10 indexers, 4 SHs, 2 DSs, 2 HFs, CM, DMC, LM (distributed, clustered).

I have a few questions:

- How can we transfer/move the entire data set from Site A to the new Site B?
- What is the order to follow, and what are the prerequisites?
- What would be the expected downtime for this migration process?
- Is there any clear implementation documentation or step-by-step guide?

Thanks in advance; your help would be highly appreciated!
I have two searches that will return orderNumbers:

1. index=main "Failed insert" | table orderNumber    // returns a small list
2. index=main "Successful insert" | table orderNumber    // returns a huge list

I want a list of "Failed insert" orderNumbers that have NOT had a "Successful insert" previously. How can I use the results of the second search to filter the results of the first search?
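Two common patterns, sketched against the searches above. The subsearch form is the most direct, but subsearches are capped (10,000 results by default), so the single-search stats form is safer given the huge "Successful insert" list:

    index=main "Failed insert" NOT
        [ search index=main "Successful insert" | fields orderNumber ]
    | table orderNumber

    index=main ("Failed insert" OR "Successful insert")
    | eval success=if(searchmatch("Successful insert"), 1, 0)
    | stats max(success) AS had_success by orderNumber
    | where had_success=0
    | table orderNumber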
Not sure if this is possible through a Splunk query, but what I am trying to do is retrieve a field value from one search and pass it into another, and the same needs to be done twice to get the desired result. Consider these three different events as _raw data:

14:06:06.932 host=xyz type=xyz type_id=123
14:06:06.932 host=xyz type=abc category=foo status=success
14:30:15.124 host=xyz app=test

The 1st and 2nd events go into the same index and sourcetype, but the 3rd event is in a different index and sourcetype. The 1st and 2nd events happen at exactly the same time. The expected result is to return the following field values: host, type, type_id, category, status, app.

Below is my search, in which I am able to successfully correlate and find the category and status fields from the second event:

index=foo sourcetype=foo type=xyz
| eval earliest = _time
| eval latest = earliest + 0.001
| table host type type_id earliest latest
| map search="search index=foo sourcetype=foo type=abc host=$host$ earliest=$earliest$ latest=$latest$
    | stats values(_time) as _time values(type) as type values(category) as category values(status) as status by host
    | append [search index=foo sourcetype=foo type=xyz | stats values(type) as type values(type_id) as type_id by host]
    | stats values(*) as * by host"

The problem comes when I try to add another map search command to retrieve the app value present in the 3rd event. Basically, the following mapping should provide me that result:

| map search="search index=pqr sourcetype=pqr host=$host$ category=$category$ earliest=-1d latest=now | stats count by app"

And then this app value is to be searched in one of the lookup files to get some details. I have tried multiple ways to incorporate this into the search but no luck. Any help is appreciated.
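As an alternative to nesting map (which re-runs a search per row and does not compose well), the third index can be brought in with a left join on host; a sketch assuming the fields above, where the lookup name and output field are placeholders:

    index=foo sourcetype=foo (type=xyz OR type=abc)
    | stats values(type) AS type values(type_id) AS type_id
            values(category) AS category values(status) AS status by host
    | join type=left host
        [ search index=pqr sourcetype=pqr earliest=-1d latest=now
          | stats values(app) AS app by host ]
    | lookup my_lookup app OUTPUT details

This collapses the millisecond pairing into a stats by host, which is only safe if each host has one xyz/abc pair in the search window; otherwise keep the first map and join only for the app value.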
Hi Team, I am using field aliases, as in my sourcetype I have two common fields (dest and dest_ip) which have the same values. When I applied the field alias, both fields were shown. How do I avoid duplicate fields? Kindly help with this scenario.
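For reference, a field alias copies the value, so the original field always remains; if only one field should survive, drop the other at search time (| fields - dest_ip) or derive a single field with an EVAL in props.conf instead of an alias. A sketch (the stanza name is a placeholder):

    # props.conf
    [your_sourcetype]
    # Keep a single dest field, falling back to dest_ip when dest is absent.
    EVAL-dest = coalesce(dest, dest_ip)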
I'm trying to figure out the percentage of successful authentications from our vulnerability scans. There is a field named IP_Auth_Type, and if I do a stats count by that field I get the following values:

Unix Failed
Unix Not Attempted
Unix Successful
Windows Successful

I would like to add up all the above-mentioned values, then add Unix Successful and Windows Successful and divide that by the total of all the points. This is what I have so far:

| inputlookup vulnresults.csv
| stats sum(Unix Failed) as UnixFailed_sum, sum(Unix Not Attempted) as UnixNotAttempted_sum, sum(Unix Successful) as UnixSuccessful_sum, sum(Windows Successful) as WindowsSuccessful_sum
| eval total=UnixFailed_sum + UnixNotAttempted_sum + UnixSuccessful_sum + WindowsSuccessful_sum
| eval ratio=(UnixSuccessful_sum + WindowsSuccessful_sum) / (total)
| table NA_sum UnixFailed_sum UnixNotAttempted_sum UnixSuccessful_sum WindowsSuccessful_sum total ratio

This doesn't bring back any results, so any help would be greatly appreciated.
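Assuming vulnresults.csv has one row per scan result with an IP_Auth_Type column (rather than pre-built per-type count columns), the types need to be counted before they can be summed; a sketch:

    | inputlookup vulnresults.csv
    | stats count by IP_Auth_Type
    | eventstats sum(count) AS total
    | eval successful=if(match(IP_Auth_Type, "Successful$"), count, 0)
    | stats sum(successful) AS successful_total, max(total) AS total
    | eval ratio=round(successful_total / total, 3)

The original search returns nothing because sum(Unix Failed) looks for fields named after the values of IP_Auth_Type, which do not exist as fields in the lookup.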