
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hi Everyone, I'm working on a Splunk dashboard visualisation using a line chart, and I span the data for every 1 week. But the line is not consistent when there is no data, and I see dots scattered here and there. Is there a way to optimise this view? Attached is the screenshot. Thanks

index = "abc" Environment = $environment$ ProcessName=*$task$* LogType = "*" TaskName =*
| bucket span=1w _time
| stats count(eval(LogMessage = "errorneously")) as Failed_Count, count(eval(LogMessage = "execution")) as Success_Count, count(eval(LogMessage = "execution2")) as Success_Count1 by _time
| eval tot_count= Failed_Count + Success_Count + Success_Count1
| eval scount=Success_Count + Success_Count1
| eval succ_per=round((scount/tot_count)*100,0)
| timechart span=1w avg(succ_per)
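The scattered dots usually come from weekly buckets that have no events at all, so the chart has nothing to connect. One option (a minimal sketch, only replacing the final timechart of the search above; showing an empty week as 0% may or may not be what you want) is to fill those empty buckets:

| timechart span=1w avg(succ_per) as succ_per
| fillnull value=0 succ_per

Alternatively, the line chart's null-value setting in the Format menu (Gaps / Zero / Connect) can be set to connect the points across empty buckets without changing the search.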
Hi Splunkers, We are looking for a solution to send Splunk data to a Snowflake schema using DB Connect. Has anyone implemented this setup? If yes, please let me know the solution here. Thanks in advance.
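One direction worth a look (a sketch, not a confirmed recipe): DB Connect ships a dbxoutput search command that writes search results to a database output defined in the app, so if a JDBC connection to Snowflake can be configured, a scheduled search could push rows out. The source search, field list, and output name below are assumptions:

index=main sourcetype=my_data
| table _time host status bytes
| dbxoutput output=snowflake_output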
Hello, I am playing with data annotations, uploading them to my dashboard from a CSV. Is it possible to make the annotation_label dynamic in a way that I can copy the text out of it? Or perhaps even click it when it is a link... Please see the screenshot. Regards, Kamil
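For context, chart annotations are fed by a secondary annotation search that returns _time plus annotation_label (and optionally annotation_category), so the label itself can be computed dynamically. A minimal sketch, assuming a lookup file annotations.csv with hypothetical columns event_time and note:

| inputlookup annotations.csv
| eval _time=strptime(event_time, "%Y-%m-%d %H:%M:%S")
| eval annotation_label=note, annotation_category="release"
| table _time annotation_label annotation_category

The rendered labels are static text on the chart, though, so copying the text or following a link would most likely need custom JavaScript on top of the dashboard.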
Hello. Once again, a noob question: is it possible to add a dropdown inside a panel with a table using JavaScript?
Can anyone suggest why the logs are coming up like this? I added the monitoring stanza. Could anyone suggest some troubleshooting steps or a solution?

inputs.conf stanza:
[monitor:///opt/netmonitor/LOG/*]
index = osnix
sourcetype = ping_status_log_new
crcSalt = <SOURCE>
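Without seeing the screenshot it is hard to say, but a first troubleshooting step (a sketch, assuming the forwarder's internal logs are searchable) is to check how splunkd is reading and breaking those files:

index=_internal sourcetype=splunkd "/opt/netmonitor/LOG" (component=TailReader OR component=WatchedFile OR component=LineBreakingProcessor OR component=AggregatorMiningProcessor)

If the events are being merged or split oddly, the fix usually sits in props.conf for the ping_status_log_new sourcetype (LINE_BREAKER, SHOULD_LINEMERGE, TIME_FORMAT).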
Hi, I'm struggling with a simple search. I have multiple events for the same username. I need to count the number of usernames that appeared in those events. I start with just 1 day, when there should be only 1 username. But this search returns a count of 7, because it counts events, not usernames, even though I put the username field in the count command: index=* policy_name=* | stats count(username)   I tried adding dedup before stats, but it didn't do anything. What am I missing, please?   Thanks, Alina
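count() counts matching events; for unique usernames the distinct-count function is what's needed. A minimal sketch:

index=* policy_name=*
| stats dc(username) AS unique_usernames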
I have 3 indexes containing events with IP addresses: index1, index2, and index3. My goal is to return a list of all IP addresses that are present in index1 and see whether those had matches with IPs in index2 and index3. The 3 indexes use 3 different IP field names:
index1: src_ip
index2: ipaddr
index3: ip
Any help would be appreciated, thank you.
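One way to line this up (a minimal sketch, assuming the three field names listed above) is to normalise the addresses into a single field and then record which indexes each one appears in:

(index=index1) OR (index=index2) OR (index=index3)
| eval ip_addr=coalesce(src_ip, ipaddr, ip)
| stats values(index) AS found_in dc(index) AS index_count BY ip_addr
| search found_in="index1"

Each remaining row is an address seen in index1, with found_in showing whether it also matched index2 and/or index3.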
I currently have the Splunk Add-on for Microsoft Cloud Services installed on a heavy forwarder, pulling logs from blob storage. I'm wondering how or what manages the checkpoint files located under modinputs. They do not seem to be rotating out or being deleted. Even after deleting an input from the GUI, the checkpoint folder still remains. Thanks in advance.
Error in 'SearchParser': The search specifies a macro 'summariesonly' that cannot be found. Reasons include: the macro name is misspelled, you do not have "read" permission for the macro, or the macro has not been shared with this application. Click Settings, Advanced search, Search Macros to view macro information. How can I enable the macro 'summariesonly'?
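The summariesonly macro is not part of core Splunk; it normally comes from a content app (for example, Splunk Security Essentials or ES content packs). If the app that defines it is missing or not shared with your app, one workaround (a sketch, with the definition assumed rather than copied from any particular app) is to define the macro yourself under Settings > Advanced search > Search macros, or in macros.conf:

[summariesonly]
definition = summariesonly=true

Setting the definition to summariesonly=false instead lets searches fall back to raw data where an accelerated data model is not available.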
Hello! I have an environment with about 200 machines, all Windows Servers. All servers send data over TCP port 9997 directly to my Heavy Forwarder, and all of it goes into the "Windows" index. What happens is that about 1-2 times a day, the logs sent by the Universal Forwarders stop from all machines, leaving the Windows index blank. All other data that does not arrive through TCP 9997 is normal, such as some scripts that bring in other types of information and save to other indexes. The problem is only solved when Splunk is restarted on the Heavy Forwarder. Trying to diagnose the problem, the only thing I could find is this message on all servers with the Universal Forwarder installed:
02-16-2022 15:20:51.293 -0400 WARN TcpOutputProc - Tcpout Processor: The TCP output processor has paused the data flow. Forwarding to output group default-autolb-group has been blocked for 82200 seconds
Has anyone gone through something similar, or can anyone help me try to identify what is happening? Note that the log on the Heavy Forwarder doesn't show me anything relevant. Thanks in advance!
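The "blocked for 82200 seconds" warning on the forwarders usually means the heavy forwarder's own queues have filled up and it has stopped accepting data. A sketch of a check on the HF's queue metrics (the host value is a placeholder for your HF's hostname):

index=_internal host=<your_heavy_forwarder> source=*metrics.log* group=queue
| timechart span=5m max(current_size) by name

Queues that sit at their maximum size (often parsingQueue, typingQueue, or indexQueue) point to where the backlog starts, typically the HF's own outputs or parsing load.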
Hey guys. I have been trying to make a compliance/noncompliance list: I have a big search that will table all the data I need. I tried using eval case to assign compliance/noncompliance to the hosts, however it is not working. There could be multiple problems. The search is this:

| rex field=_raw "(Available Updates)\s+(?<AvailableUpdates>.+)"
| rex field=_raw "(.Net Version is)\s+(?<DotNetVersion>.+)"
| rex field=_raw "(Powershell Version is)\s+(?<PowershellVersion>.+)"
| rex field=_raw "(Was able to resolved google.dk)\s+(?<DNS>.+)"
| rex field=_raw "(Firewall's)\s+(?<AllFirewalls>.+)"
| rex field=_raw "(Commvault)\s+(?<Commvault>.+)"
| rex field=_raw "(Snow)\s+(?<Snow>.+)"
| rex field=_raw "(Symantec)\s+(?<Symantec>.+)"
| rex field=_raw "(Splunk Forwarder)\s+(?<Splunk>.+)"
| rex field=_raw "(SNMP Service)\s+(?<SNMP>.+)"
| rex field=_raw "(Zabbix Agent Version)\s+(?<Zabbix4>.+)"
| rex field=_raw "(Zabbix Agent2)\s+(?<Zabbix2>.+)"
| rex field=_raw "(VMware)\s+(?<VMware>.+)"
| rex field=_raw "(Backup route)\s+(?<BackupRoute>.+)"
| rex field=_raw "(Metric)\s+(?<Metric>.+)"
| rex field=_raw "(IPconfig)\s+(?<IPconfig>.+)"
| rex field=_raw "(DeviceID VolumeName)\s+(?<Storage>.+)"
| rex field=_raw "(Memory)\s+(?<Memory>.+)"
| rex field=_raw "(Amount of Cores)\s+(?<CPU>.+)"
| rex field=_raw "(is Licensed with)\s+(?<WindowsLicense>.+)"
| rex field=_raw "(Running Microsoft)\s+(?<OS>.+)"
| rex field=_raw "(OS Uptime is)\s+(?<Uptime>.+)"
| join type=outer host[|inputlookup Peer_Dashboard_Comments.csv]
| stats latest(AvailableUpdates) as AvailableUpdates, latest(DotNetVersion) as DotNetVersion, latest(PowershellVersion) as PowershellVersion, latest(DNS) as DNS, latest(AllFirewalls) as AllFirewalls, latest(Commvault) as Commvault, latest(Snow) as Snow, latest(Symantec) as Symantec, latest(Splunk) as Splunk, latest(SNMP) as SNMP, latest(Zabbix4) as Zabbix4, latest(Zabbix2) as Zabbix2, latest(VMware) as VMware, latest(BackupRoute) as BackupRoute, latest(Metric) as Metric, latest(IPconfig) as IPconfig, latest(Storage) as Storage, latest(Memory) as Memory, latest(CPU) as CPU, latest(WindowsLicense) as WindowsLicense, latest(OS) as OS, latest(Uptime) as Uptime, latest(Comments) as Comments by host
| fillnull value="-"
| eval status=case(AvailableUpdates="= 0" AND NOT match(DotNetVersion,"Not!") AND match(PowershellVersion,"5.1") AND DNS="142.250.179.195" AND AllFirewalls="are disabled" AND match(Commvault,"is Installed") AND match(Snow,"is Installed") AND match(Symantec,"is Installed") AND match(Splunk,"is Installed") AND match(SNMP,"is installed") AND match(Zabbix4,"is installed") AND match(Zabbix2,"is installed") AND match(VMware,"is Installed") AND match(BackupRoute,"was found") AND match(Metric,"is - Ethernet") AND match(WindowsLicense,"Windows") AND (match(OS,"2016") OR match(OS,"2019")),"Compliant",1=1,"noncompliant")
| stats distinct_count(Compliant) as Compliant

It doesn't fail, but reports back with a result of 0 compliant hosts. If I try to list noncompliant hosts, it is also 0. I have an AND (match(OS,"2016") OR match(OS,"2019")) in there. Is that an OK way of matching a single field against 2 values? There is also an "AND NOT match(DotNetVersion" near the beginning. Is it okay to use both match and NOT match in the same case? Anything I'm missing here?
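One thing stands out (a hedged observation rather than a full review): the case() writes its result into a field named status, but the final stats references a field named Compliant, which is never created, so the result is always 0. A minimal sketch of a replacement for that last line, keeping everything above it unchanged:

| stats count(eval(status="Compliant")) as Compliant, count(eval(status="noncompliant")) as NonCompliant

As for the other questions: combining AND NOT match(...) with match(...), and using (match(OS,"2016") OR match(OS,"2019")) to test one field against two values, are both legal inside a single case() condition.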
My output format is 20220129054235.496380-300. I need to convert the value in bold to a normal timestamp and find the difference between that and now(), i.e. epoch time. This will give the uptime in days.
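A minimal sketch, assuming the WMI-style timestamp sits in a field named lastBootTime (hypothetical name): take the first 14 characters, parse them as a timestamp, and diff against now(). The trailing .496380-300 (fractional seconds plus a UTC offset in minutes) is ignored here:

| eval boot_epoch=strptime(substr(lastBootTime,1,14), "%Y%m%d%H%M%S")
| eval uptime_days=round((now() - boot_epoch) / 86400, 1)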
Hello, I have recently upgraded from Splunk 7 to Splunk 8.2.4. After the upgrade, I noticed that some transforming commands such as chart or stats do not work in smart and fast mode. For instance: index=main | chart count by host returns the expected results in verbose mode. It returns 0 results in smart and fast mode. PS: The transaction command still works, but I have to select the fields I want with fields in place of table. In Splunk 7, table worked too. I would like the stats and chart commands to still work in fast search mode, as they did in Splunk 7. Could you help me revert to the Splunk 7 behaviour? Thank you very much. Kind Regards, Marco
Hi All, We have Python code to ingest MongoDB logs into Splunk, and we are successfully ingesting logs from the old servers. Now there is a requirement to ingest MongoDB logs into Splunk from new servers.
mongodb://USER:PASS@SERVER1:27017,SERVER2:27017/abc_analytics?replicaSet=mongo-replica</description>
This is how logs are ingested; when I try the same for the new servers, I get an "Invalid Key" error.
NOTE: 1) Firewall connectivity is working fine 2) The MongoDB team says the password is correct
The password that is used, is that given by the Splunk team or the MongoDB team? If it is the MongoDB team, where do they need to check the password and the user id?
internal logs:
02-17-2022 18:45:50.916 +1100 WARN Application - Invalid key in stanza [abc_analytics://XXX-XXX-XXX] in /opt/splunk/etc/deployment-apps/modinput_abc_analytics_mongodb-XXX-XXX-XXX/local/inputs.conf, line 34: mongodb_uri (value: mongodb://Mongodbservername1.local:27017,Mongodbservername2.local:27017/abc_analytics?replicaSet=mongo-replica).\n
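Separate from the password question, that internal warning says mongodb_uri is not a recognised key for the abc_analytics input stanza on that instance, which usually means no inputs.conf.spec declares it there (on a deployment server holding deployment-apps, this warning can be harmless). A sketch of the spec entry the warning is looking for, with the path and placeholders assumed:

# $SPLUNK_HOME/etc/apps/<modinput app>/README/inputs.conf.spec  (path assumed)
[abc_analytics://<name>]
mongodb_uri = <value>

The credentials inside the mongodb:// URI are the MongoDB user's, so the MongoDB team would verify that user and password on the replica set itself.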
Hello All, I was extracting some volume data for PE testing from prod systems using the following query. I am expecting to get event counts per proxy name from 9 AM to 6 PM, but the following code creates stats for the entire day. Please help me remove this extra data.
Query:
index= index_Name environmentName= Env_name clientAppName="App_Name"
| eval eventHour=strftime(_time,"%H")
| where eventHour<18 AND eventHour>=9
| timechart count span=60m by proxyName
Result:
Time                Proxy1  Proxy2
2022-02-16 06:00    0       0
2022-02-16 07:00    0       0
2022-02-16 08:00    0       0
2022-02-16 09:00    27      34
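timechart fills every bucket in the search time range, so the filtered-out hours come back as zero rows. One option (a minimal sketch, reusing the same base search) is to drop those buckets again after the timechart:

index= index_Name environmentName= Env_name clientAppName="App_Name"
| timechart count span=60m by proxyName
| eval eventHour=tonumber(strftime(_time,"%H"))
| where eventHour>=9 AND eventHour<18
| fields - eventHour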
Could you please tell me about the following? If I want to limit memory usage for a search, is it correct to think that I should set the following?
=====
[search]
enable_memory_tracker = true
search_process_memory_usage_threshold = 10000
search_process_memory_usage_percentage_threshold = 60
=====
Note: if either value, 10000 (MB) or 60 (%), is reached, the search is forcibly terminated.
Is it correct to understand that the above setting applies to all searches, including ad hoc searches?
If I want to enable the settings for all app searches, is it safe to add them to the limits.conf below?
$SPLUNK_HOME/etc/system/local/limits.conf
Note: set $SPLUNK_HOME/etc/apps/<app name>/local/limits.conf to apply to an individual app's searches.
Am I correct in thinking that the above limits.conf settings should be set on both the Search Head and the Indexers?
Hello, I am looking at creating a dashboard which shows us the least visited domains in the last 30 days. I also want to set up an email alert every hour for the most and least visited domains. Unfortunately, I don't have any domain tools / applications installed and want to create a search for the time being.
Use case:
URL count in the last 30 days within a certain location
Only see the top-level domains
Set up an email alert every hour
I have currently got this set up, but it doesn't work properly:
index=proxy OR index=web gateway src_country="country"
| regex url="(?<TLD>\.\w+?)(?:$|\/)"
| search ([|inputlookup of users.csv])
| stats count as Total by url
Thanks, Mark Nicholls
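A rough sketch of the direction (hedged: the exact index names, the lookup contents, and the TLD pattern are assumptions, and note that regex only filters events, so rex is what actually extracts a field):

index=proxy OR index=webgateway src_country="country"
| rex field=url "(?<TLD>\.\w+?)(?:$|/)"
| search [| inputlookup users.csv | fields user]
| stats count AS Total BY TLD
| sort Total

Sorting ascending puts the least visited domains first; the same search with | sort - Total (or | top TLD) covers the most visited side for the hourly alert.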
    index="***********" sourcetype="**********" (host="*") | rex field=_raw "(Available Updates)\s+(?<AvailableUpdates>.+)" | table _time _raw host AvailableUpdates | stats latest(AvailableUpdate... See more...
    index="***********" sourcetype="**********" (host="*") | rex field=_raw "(Available Updates)\s+(?<AvailableUpdates>.+)" | table _time _raw host AvailableUpdates | stats latest(AvailableUpdates) as AvailableUpdates by host   Hey guys. So I have a search that gives a table as such: Host __________________ AvailableUpdates Host1_________________ = 21 Host2__________________= 0 Host3__________________= 5 Host4__________________= 0 Host5__________________ null I am looking to make a piechart with 2 different "values" 1 "value" is all the "= 0" in green, and the rest in red. Can't quite figure out how to sort this.  Tyvm
Hi all, I want a result containing the value '0' in a column, without using the "chart" command. Thank you.
Hi, I created an alert that runs every day at the same time, and at the end of the alert search I used collect:
| collect index="index_name"
Every day the job runs (it takes ~1 min), but I don't see the new events after the job is finished. How long is it supposed to take until I see them in the index?
This is the search I use to check (with a filter of last 24h):
index="index_name"
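Events written by collect are normally searchable within moments, but they typically keep the _time of the original results, which can fall outside a "last 24 hours" window. A quick check (a sketch) that searches across all time and shows when the events actually arrived:

index="index_name" earliest=0
| eval indexed_at=strftime(_indextime, "%Y-%m-%d %H:%M:%S")
| table _time indexed_at source sourcetype

If rows show up here with old _time values, widening the time range of the checking search (or adding a current timestamp to the collected results) should make them visible.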