All Topics

Can Splunk support staff in one region access customer data residing in a Splunk Cloud instance of another region? For example, can support staff in the US region access data in an index of a Splunk indexer that is hosted in Sydney, AU?
Hi,

I want to know a ballpark figure or a pricing calculator for Splunk Cloud. For any given scenario, this is what is needed:

Infrastructure-based: for a standard/medium-sized enterprise, what is the typical infrastructure and the associated price?
Ingest model: for a medium-scale enterprise, what is the license cost if we ingest 100 GB/day, 500 GB/day, or 1 TB/day?
Additional cost consideration: Enterprise Security + Splunk Cloud
Someone please help me here. I am trying to monitor /var/log/audit/audit.log with a universal forwarder and send it to the indexer, but the logs are not reaching the indexer. Here is what I see in the forwarder's splunkd.log:

08-06-2020 13:48:17.728 +0530 DEBUG TailingProcessor - Item '/var/log/audit/audit.log' matches stanza: /var/log/audit/audit.log.
08-06-2020 13:48:17.728 +0530 DEBUG TailingProcessor - Storing config '/var/log/audit/audit.log' for app ''.
08-06-2020 13:48:17.728 +0530 DEBUG TailingProcessor - Entry is associated with 1 configuration(s).
08-06-2020 13:48:17.728 +0530 DEBUG TailReader - Will attempt to read file: /var/log/audit/audit.log.
08-06-2020 13:48:17.730 +0530 DEBUG TailReader - Got classified_sourcetype='linux_audit' and classified_charset='UTF-8'.
08-06-2020 13:48:17.730 +0530 DEBUG WatchedFile - Storing pending metadata for file=/var/log/audit/audit.log, sourcetype=linux_audit, charset=UTF-8
08-06-2020 13:48:17.730 +0530 DEBUG WatchedFile - setting trailing nulls to false via 'true' or 'false' from conf'
08-06-2020 13:48:17.730 +0530 DEBUG WatchedFile - Loading state from fishbucket.
08-06-2020 13:48:17.731 +0530 DEBUG WatchedFile - Attempting to load indexed extractions config from conf=source::/var/log/audit/audit.log|host::xxx|linux_audit|3 ...
08-06-2020 13:48:17.731 +0530 DEBUG WatchedFile - Reading for plain initCrc...
08-06-2020 13:48:17.731 +0530 DEBUG WatchedFile - initcrc has changed to: 0x153ce0cdaa107eee.
08-06-2020 13:48:17.731 +0530 DEBUG WatchedFile - Record found, will advance file by offset=12920 initcrc=0x153ce0cdaa107eee.
08-06-2020 13:48:17.731 +0530 DEBUG WatchedFile - Creating new pipeline input channel with channel id: 4.
08-06-2020 13:48:17.732 +0530 DEBUG WatchedFile - Attempting to load indexed extractions config from conf=source::/var/log/audit/audit.log|host::xxx|linux_audit|4 ...
08-06-2020 13:48:17.732 +0530 DEBUG TailReader - About to read data (Opening file: /var/log/audit/audit.log).
08-06-2020 13:48:17.732 +0530 DEBUG WatchedFile - seeking /var/log/audit/audit.log to off=12920
08-06-2020 13:48:17.732 +0530 DEBUG WatchedFile - Reached EOF: /var/log/audit/audit.log (read 0 bytes)
08-06-2020 13:48:17.732 +0530 DEBUG TailReader - Hit EOF immediately.
08-06-2020 13:48:17.732 +0530 DEBUG TailReader - Have definitely hit EOF.
08-06-2020 13:48:17.732 +0530 DEBUG TailReader - Finished reading file='/var/log/audit/audit.log' in tailreader0 thread, disposition=ACKNOWLEDGE_CHANGE, deferredBy=0.000
08-06-2020 13:48:17.732 +0530 DEBUG TailReader - Returning disposition=ACKNOWLEDGE_CHANGE for file=/var/log/audit/audit.log
08-06-2020 13:48:17.732 +0530 DEBUG TailReader - Start reading file="/opt/splunkforwarder/var/log/splunk/splunkd.log" in tailreader0 thread
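Incidentally, the DEBUG lines above show TailReader finding an existing fishbucket record and seeking straight to EOF (offset=12920), i.e. the forwarder believes this file content was already read once. A sketch of a search to verify on the indexer whether the data ever arrived (index=* is just for the check; the real index is whatever your inputs.conf assigns):

```
index=* source="/var/log/audit/audit.log" earliest=-7d
| stats count by index, host, sourcetype
```

If this returns nothing at all, the problem is more likely forwarder-to-indexer connectivity (check index=_internal on the forwarder for TcpOutputProc errors) than the file input itself.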
Hi, there is an issue in version 8.0.4.1 with the user cookie/session being broken. Whenever I load Splunk, I receive a 404 on this address:

http://localhost:8000/en-GB/splunkd/__raw/services/authentication/users/undefined?output_mode=json&_=1596704426231

As you can see, the user is "undefined". After some time the issue goes away, then comes back. Temporary workaround: just run Splunk in anonymous mode. Unfortunately, I haven't found a way to reproduce it, but I hope Splunk will fix it in a new version.
Hi, I have a lookup table with user names (ftp_users.csv). Every day I get one line from a particular system listing the users with FTP permissions. I want to compare the lookup file (which contains the allowed FTP users) with the line I get from the system, and alert when there is no match. The line from the system looks similar to this:

users=user1, user2, user3, user4, user5

I'd appreciate any help with that. Thank you.
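A sketch of one possible approach: split the multivalue users field into one event per user, then look each one up and keep only the misses. The index/sourcetype names are placeholders, and it assumes the lookup's column is called user and the delimiter is comma-plus-space:

```
index=your_index sourcetype=your_sourcetype users=*
| makemv delim=", " users
| mvexpand users
| lookup ftp_users.csv user AS users OUTPUT user AS allowed
| where isnull(allowed)
```

Any rows this returns are users present in the daily line but absent from ftp_users.csv, which is what the alert would trigger on.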
I need to take one peer down for maintenance, so I run splunk stop on it. The cluster handles it and returns to a valid state, with one indexer showing "Down" or "Stopped". The dashboards still show all the data, but there is a small yellow triangle indicating one peer is not searchable, so I removed this peer from the cluster master and the error is gone. I will re-add it after the maintenance, so I would like to understand what happens when I re-add it: will the cluster master recognize that this peer was removed and re-added, or will it treat it as a peer that went down and came back up?
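For context, a commonly used sequence for planned peer maintenance avoids removing the peer from the master at all: put the cluster master in maintenance mode so bucket fixup is suppressed during the window. A sketch (standard Splunk CLI commands; exact paths and any -auth arguments depend on your environment):

```
# On the cluster master: suppress bucket-fixup activity for the window
splunk enable maintenance-mode

# On the peer: take it down gracefully instead of a plain stop
splunk offline

# ... perform maintenance, then bring the peer back ...
splunk start

# On the cluster master: resume normal replication fixup
splunk disable maintenance-mode
```

With this pattern the peer simply shows as Down during the window and rejoins with its existing buckets when restarted, so the "removed vs. came back up" question does not arise.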
Hi, I have several log files that I'm "batch indexing", for example:

file01.log
file02.log
file03.log
file04.log

Now I'm searching for events where a field equals a value, like State=4, and then I want to run another search that searches only in the source files where State was 4. I tried this:

index=test sourcetype=XY State=4 | stats count by source | map search="search index=test sourcetype=XY source=$source$ | stats values(cpu) by _time"

but I'm getting no results. If I run the first search without the map section, I get the source files as a list.
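A sketch of an alternative that avoids map entirely: a subsearch returns the matching source values, which are ORed into the outer search. (If you do stay with map, note its default maxsearches=10 cap, which silently limits the rows it processes.) Index and field names are taken from the question:

```
index=test sourcetype=XY
    [ search index=test sourcetype=XY State=4
      | stats count by source
      | fields source ]
| stats values(cpu) by _time, source
```

The fields command is important: the subsearch must emit only the field you want turned into search terms.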
How do I combine two fields with similar values (where a value might exist in only one of the fields) and use stats by with a field they don't share? I have the search below:

| tstats prestats=t summariesonly=t allow_old_summaries=t count from datamodel=Network_Resolution.DNS by "DNS.query"
| rename DNS.query as query1
| tstats append=t prestats=t summariesonly=t allow_old_summaries=t count from datamodel=Web.Web by _time Web.dest Web.category
| rename Web.dest as query2
| rename Web.category as category
| eval query=coalesce(query1,query2)
| stats count by query category
| fillnull value=NULL

Sample data:

Web.dest    Web.category    DNS.query
abc.com     News            abc.com
                            dfe.com

Results:

query     category
abc.com   news

Expected results:

query     category
abc.com   news
dfe.com   NULL
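One likely culprit: stats ... by drops any row whose by-field is null, so dfe.com (which has no category) disappears before fillnull ever runs. A sketch of the same pipeline with fillnull moved ahead of the stats:

```
| tstats prestats=t summariesonly=t allow_old_summaries=t count from datamodel=Network_Resolution.DNS by "DNS.query"
| rename DNS.query as query1
| tstats append=t prestats=t summariesonly=t allow_old_summaries=t count from datamodel=Web.Web by _time Web.dest Web.category
| rename Web.dest as query2, Web.category as category
| eval query=coalesce(query1, query2)
| fillnull value=NULL category
| stats count by query, category
```

With category filled before the stats, dfe.com survives with category=NULL, matching the expected results.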
Hi fellow Splunkers, I am trying to count the distinct number of applications in our logs. The query is:

index=something app!="null" | dedup app | stats count by app

I am looking to use the single value visualization with a trendline to show the number of "apps" and whether there has been an increase or decrease.
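A sketch of one way to get this: a distinct count is usually dc() rather than dedup-then-count, and the single value trendline needs a time series to work from, which timechart provides (index/field names taken from the question):

```
index=something app!="null"
| timechart dc(app) AS distinct_apps
```

Pointing a Single Value visualization at this search shows the latest distinct count with an up/down trend indicator against the previous time bucket.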
Hi, I have a stats count on eventtype like this:

index=xyz | stats count by eventtype

This query generates:

All_logs = 14
Error = 2
Login = 4
Auth = 8

where All_logs is also an eventtype, which encompasses all events: Error, Login, and Auth. How can I rewrite this query so I see the count by eventtype excluding the All_logs and Login events?
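One caveat worth noting: since All_logs matches every event, excluding it at event level (eventtype!=All_logs) would exclude everything. A sketch that filters the result rows instead, where eventtype is single-valued after the stats:

```
index=xyz
| stats count by eventtype
| search eventtype!=All_logs eventtype!=Login
```

This keeps the per-eventtype counts intact and simply drops the two unwanted rows from the output.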
How can I troubleshoot a playbook issue where the wrong raw log is being included in the ticket? For example, a ticket was created for source IP 10.xx.x.xxx and destination IP 10.x.x.x, but the raw log was for source IP 10.35.41.10 and destination IP 10.1.3.7.
This is a custom app on a Splunk Enterprise instance, and it needs to be moved to a Splunk Cloud instance. They are separate instances. I'm looking for help on the process/steps that need to be performed.
When parsing the email message body for inclusion in a Jira ticket, parsing fails on special characters or non-ASCII text. How can I update the function in Phantom to properly handle the message body in the cases where this error is thrown?

<error in expanding custom_function_1:custom_function:C1_email>
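A minimal sketch of one way to normalize the body defensively before handing it to Jira. The function name is hypothetical (not from Phantom's API); the idea is simply to accept bytes or str, replace undecodable sequences instead of raising, and strip control characters:

```python
def sanitize_body(body):
    """Normalize an email body to a safe UTF-8 string.

    Accepts bytes or str. Undecodable byte sequences are replaced with
    U+FFFD instead of raising UnicodeDecodeError, so special characters
    never abort the custom function.
    """
    if body is None:
        return ""
    if isinstance(body, bytes):
        # errors="replace" substitutes invalid sequences rather than raising
        body = body.decode("utf-8", errors="replace")
    # Drop control characters (e.g. NUL) that ticketing APIs often reject,
    # while keeping newlines and tabs
    return "".join(ch for ch in body if ch in "\n\t" or ord(ch) >= 32)
```

Calling this on the raw artifact value before building the Jira payload means the worst case is a replacement character in the ticket text rather than a failed playbook run.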
How can I enrich endpoint tickets? Where the ticket is for a macOS host, I want to look up the host in Jamf and return the following fields: Site, Username, and Full Name. Expected results for 'Site' include "NA - Retail" and "NA - Corp", among others.
Please help me with the steps that need to be taken during this process.
I am trying to download Splunk Enterprise for Windows (64-bit) from splunk.com. When I click 'Download Now', it says the site can't be reached. It seems the link is broken: https://www.splunk.com/eula/splunk/8.0.5?product=splunk&version=8.0.5&platform=Windows&redirecturl=/en_us/download/splunk-enterprise/thank-you-enterprise.html Please advise.
I have SAI (Splunk App for Infrastructure) automatically detecting new entities. But since I use VMs and containers, it detects many entities with a short lifetime; they are usually deleted after a day or two. In the UI they are tagged state=inactive because there are no new events. I would like to remove them from SAI a few days after they go inactive, to clean things up. Can this be done, and can it be automated?
I'm getting the following error on my Splunk Enterprise server, and I'm not able to get usage.log info. Can anyone point me in the right direction?

Root Cause(s): The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data.
Last 50 related messages:
08-05-2020 16:05:22.039 -0500 WARN TailReader - Could not send data to output queue (parsingQueue), retrying...
08-05-2020 16:05:17.033 -0500 INFO TailReader - ...continuing.
08-05-2020 16:05:12.033 -0500 WARN TailReader - Could not send data to output queue (parsingQueue), retrying...
08-05-2020 16:05:06.899 -0500 INFO TailReader - File descriptor cache is full (1000), trimming...
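A sketch of a search that often helps with this class of problem: queue fullness cascades backwards from the blocked stage, so plotting queue sizes from metrics.log shows which queue fills first (the furthest-downstream full one is usually the culprit, often the index queue itself):

```
index=_internal source=*metrics.log group=queue
| timechart perc90(current_size_kb) by name
```

A queue pinned at its maximum points at that stage (typing, indexing, output, etc.) rather than at the monitor input that reported the symptom.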
Will Universal Forwarder v7.3.1.1 run on Ubuntu 20.04? If not, is there a way I can download Universal Forwarder v7.3.5 to match my environment? * (All I see offered is Universal Forwarder v8.0.5.)
I want to compare (OWNER, TABLE_NAME) to (OWNER_New, TABLE_NAME_New), and once the values match, find the difference between Size_In_MB_New and Size_In_MB. The catch is that I do not want to compare (OWNER, TABLE_NAME) only within the same row; I want it to search for the same value in all rows of (OWNER_New, TABLE_NAME_New) and, if a match is found, compute the difference of the desired fields.

OWNER         TABLE_NAME              Size_In_MB   OWNER_New     TABLE_NAME_New                Size_In_MB_New
MZOWNER       SYSTEM_LOG              121620       RB_CUSTOM     IPGINVOICETRANSACTIONARCHIVE  158115
GENEVA_ADMIN  IPGEMSV5US              89920        MZOWNER       SYSTEM_LOG                    121620
GENEVA_ADMIN  IPGEMSV7US              53696        GENEVA_ADMIN  IPGEMSV5US                    89920
GENEVA_ADMIN  EVENTMAPPINGSETVALUE    38656        GENEVA_ADMIN  IPGEMSV7US                    53696
GENEVA_ADMIN  IPGEMSV5GB              14528        GENEVA_ADMIN  EVENTMAPPINGSETVALUE          38720
RB_CUSTOM     IPGINVOICEGROUPARCHIVE  13414        RB_CUSTOM     IPGNONINVOICETRANARCHIVE      31617
GENEVA_ADMIN  IPGEMSV7GB              12352        GENEVA_ADMIN  IPGEMSV5GB                    14528
GENEVA_ADMIN  IPGEMSV7IT              11712        RB_CUSTOM     IPGINVOICEGROUPARCHIVE        13606
GENEVA_ADMIN  IPGEMSV7MY              10688        GENEVA_ADMIN  IPGEMSV7GB                    12352
GENEVA_ADMIN  CUSTPRODUCTCHARGE       10378        GENEVA_ADMIN  IPGEMSV7IT                    11712
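A sketch of one approach: build the same composite key on both sides and join on it, so a row's (OWNER, TABLE_NAME) can match an (OWNER_New, TABLE_NAME_New) from any other row. The index/sourcetype are placeholders, and join's subsearch row limit (50,000 by default) applies:

```
index=your_index sourcetype=your_sourcetype
| eval key=OWNER."|".TABLE_NAME
| join type=inner key
    [ search index=your_index sourcetype=your_sourcetype
      | eval key=OWNER_New."|".TABLE_NAME_New
      | fields key, Size_In_MB_New ]
| eval size_diff_MB=Size_In_MB_New - Size_In_MB
| table OWNER, TABLE_NAME, Size_In_MB, Size_In_MB_New, size_diff_MB
```

The "|" separator in the key just needs to be a character that cannot occur inside OWNER or TABLE_NAME, so concatenated values can't collide.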