All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Using HF to forward all events to Indexer and external syslog.

When using syslog over TCP, all processing basically stopped as the queues filled up (and I've already adjusted the queue sizes). I haven't found much on the Internet about this, but I did try UDP on the theory that it should be "send and forget" as far as the HF is concerned, so it shouldn't slow ingestion down. It still does. I'm not using props or transforms for the syslog output because I want it to send all events.

Within a few minutes of bringing the HF up, the queues fill and everything grinds to a halt. Looking at the local MC, there is no resource load on the server, and only a little ingestion occurs every few minutes or so. The little data that reaches the indexer becomes increasingly timestamp-skewed. I'm beating my head on that proverbial rock: this worked fine with TCP for a while, and now it isn't working even over UDP.

Here is my syslog outputs.conf on the HF:

[syslog]
defaultGroup = forwarders_syslog
maxQueueSize = 10MB

[syslog:forwarders_syslog]
server = xx.xx.xx.xx:10514
type = udp
disabled = 0
priority = <34>
timestampformat = %b %e %H:%M:%S
useACK = false

I should also mention that there is no issue on the syslog server or the indexer; neither is taxed by any metric. The syslog server forwards to another syslog server over the Internet and does use TCP for that, but since the incoming data is written to a file, I don't see how that could impact the syslog server receiving data from the HF.

Any advice will be appreciated. I've opened a case with Splunk, but they have been less than responsive.

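To see which queue on the HF is actually backing up, a diagnostic against its own metrics can help (a minimal sketch; host=my_hf is a placeholder for your HF's hostname):

index=_internal host=my_hf source=*metrics.log* group=queue
| eval pct_full = round(current_size_kb / max_size_kb * 100, 1)
| timechart span=5m max(pct_full) by name

A queue pinned near 100% that sits downstream of the syslog output points at the syslog destination as the bottleneck rather than at parsing on the HF.
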
Hey, need help. I have a client that has been running Splunk as root for a while but now wants to run Splunk as the splunk user, and there is no splunk user yet. It's a Linux server. Can anyone help with steps to proceed without running into permission errors? Should I run:

useradd splunk
chown -R splunk:splunk /opt/splunk

and then restart Splunk? Will appreciate suggestions.

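A minimal sketch of the usual sequence, assuming Splunk is installed in /opt/splunk (stop Splunk before changing ownership, and re-enable boot-start under the new user so it doesn't come back up as root):

/opt/splunk/bin/splunk stop
useradd -m splunk
chown -R splunk:splunk /opt/splunk
/opt/splunk/bin/splunk enable boot-start -user splunk
su - splunk -c '/opt/splunk/bin/splunk start'

If any inputs read files outside /opt/splunk (monitored logs, scripted inputs), the splunk user needs read access to those paths as well.
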
Hi. I am having trouble figuring out how to execute this, although it's probably simple:

search 1 | field 1 | join [ search 2 | field 2 ] | table field 1, field 2

Each instance of field 1 will return multiple values for field 2. I want to table both fields such that every value of field 2 is printed next to its corresponding value of field 1. The left column will have some duplicate values; the right column will have only unique values. I want a table that looks like this:

FIELD 1    FIELD 2
value A    value 1
value A    value 2
value A    value 3
value B    value 1
value B    value 2
value C    value 1
value C    value 2
value C    value 3

This is my actual search:

index=soe_app_retail sourcetype="vg:hvlm" source="*prd/vpa*" "*NumberOfRules*"
| rex field=_raw "poid=(?<field_1>\d+)"
| join type=inner uid
    [ search index=soe_app_retail sourcetype="vg:hvlm" source="*prd/vpa*" "*upper*"
    | rename message as field_2 ]
| table field_1, field_2

Right now I am getting only one row for each field_1 value, even though I know there are multiple values of field_2 for each field_1. I think it involves mvexpand but I can't figure it out.

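One join-free approach, assuming both event types carry the poid key that the rex extracts (a sketch, not tested against your data):

index=soe_app_retail sourcetype="vg:hvlm" source="*prd/vpa*" ("*NumberOfRules*" OR "*upper*")
| rex field=_raw "poid=(?<field_1>\d+)"
| eval field_2=if(searchmatch("upper"), message, null())
| stats values(field_2) as field_2 by field_1
| mvexpand field_2
| table field_1, field_2

stats values() gathers every field_2 per field_1, and mvexpand then splits the multivalue back into one row per value, giving the repeated-left-column layout above. join, by contrast, keeps only one matching subsearch row per key, which is why you see a single row per field_1.
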
So I'm trying to chart blocked traffic (IPs) over 7 days. The purpose is to help locate beaconing traffic (this worked at a previous job, but I'm taking it a step further by only wanting to see days with values). Example: I would want to see only days that have results. The query works; I just see a lot of days with 0 data. Here's my query:

index="pan_logs" sourcetype="pan:traffic" dest_zone="Public" src="10.11.16*" action=blocked
| chart count(dest) by dest date_wday

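One way to get only day/destination pairs that actually have events is to aggregate with stats, which never emits zero rows (a sketch using the same base search):

index="pan_logs" sourcetype="pan:traffic" dest_zone="Public" src="10.11.16*" action=blocked
| bin _time span=1d
| stats count by _time, dest
| sort dest, _time

Unlike chart, stats by _time, dest only produces a row where at least one blocked event exists, so the empty days disappear on their own.
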
Hello, I'm experiencing the following issue on one of my search heads (out of three in total):

Knowledge bundle size=2608MB exceeds max limit=2000MB. Distributed searches are running against an outdated knowledge bundle. Please remove/disable files from knowledge bundle or increase maxBundleSize in distsearch.conf.

Why is this SH behaving like this when the others have the same config?

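If this one SH has accumulated a large local lookup or app the others don't have, its bundle can blow past the limit even with identical settings everywhere. Two things worth trying on that SH (a sketch; the lookup path is a hypothetical example):

# See what's inflating the bundle, then list its largest members
du -ah $SPLUNK_HOME/var/run/*.bundle
tar tvf $SPLUNK_HOME/var/run/<name>.bundle | sort -k3 -n | tail

# distsearch.conf: exclude big files from bundle replication
# (stanza is [replicationBlacklist] on older versions)
[replicationDenylist]
big_lookup = apps[/\\]search[/\\]lookups[/\\]big_lookup\.csv
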
I would like to create a dashboard with field inputs so that I can share it with end users who are not familiar with Splunk. I just want to present two fields where they can provide input, like an IP address, click Go, and get back the results of my particular search query. If that's possible, what do I have to look into? A link would be helpful as well. Thank you in advance.

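This is exactly what a Simple XML form does: text inputs set tokens, a submit button fires the search, and the tokens are referenced as $token$ in the query. A minimal sketch (the index and field names are placeholders for your own):

<form>
  <label>IP Search</label>
  <fieldset submitButton="true">
    <input type="text" token="src_tok">
      <label>Source IP</label>
    </input>
    <input type="text" token="dest_tok">
      <label>Destination IP</label>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>index=my_index src_ip=$src_tok$ dest_ip=$dest_tok$ | table _time, src_ip, dest_ip, action</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
      </table>
    </panel>
  </row>
</form>

The "form inputs" section of the Dashboards and Visualizations manual in the Splunk docs covers this end to end.
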
Hi everyone, I'm new to Splunk and I'm trying to analyse my router's (dd-wrt) syslog. I installed Splunk on Ubuntu, plus SA-CIM and TA-Tomato (by the way, what do SA- and TA- mean?). I found that most of the dashboards show nothing or very little information. I don't know what else to do to get more than 'no data found'. Mainly I want to analyse incoming attacks, VPN connections and so on. Is anyone using TA-Tomato or anything else?

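Before debugging the dashboards, it's worth confirming the syslog data is arriving with a sourcetype the TA actually recognizes; a quick inventory search (a sketch) makes that visible:

index=* earliest=-24h
| stats count by index, sourcetype

If the router events land under a generic sourcetype (e.g. syslog) rather than the ones TA-Tomato maps to CIM, its dashboards will stay empty. (On the naming: TA- "technology add-ons" handle data collection and parsing, while SA- "supporting add-ons" provide CIM data models and shared knowledge objects.)
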
Hello community, on my desk I have a pretty edgy request that is giving me quite a headache. I need to collect (with | collect) the output of a search into a new sourcetype whose name is created dynamically within the search itself. Here is a simple ad hoc example:

| makeresults
| eval letter1="A", letter2="B", letter3="C"
| eval variabile="NewSourcetype"
| eval _raw=_time + ": " + _raw
| collect index=garbage sourcetype=variabile

The problem is that the event is stored under sourcetype=variabile instead of sourcetype=NewSourcetype. Any idea how to manage such a situation? Thanks in advance for your kind support.

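collect takes its sourcetype argument as a literal string, so a field name is never substituted. One workaround, assuming a manageable number of distinct sourcetype values, is map, which does perform $token$ substitution per input row (a sketch; mind map's maxsearches limit):

| makeresults
| eval variabile="NewSourcetype"
| stats count by variabile
| map maxsearches=10 search="| makeresults
    | eval letter1=\"A\", letter2=\"B\", letter3=\"C\"
    | collect index=garbage sourcetype=$variabile$"

map runs one collect per distinct variabile value, with the value expanded into the command line before the subsearch executes.
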
Hi everyone, I'm working on a Splunk dashboard visualisation using a line chart, with the data spanned per week. But the line is not continuous where there is no data, and I see dots scattered here and there. Is there a way to optimise this view? Attached is the screenshot. Thanks.

index="abc" Environment=$environment$ ProcessName=*$task$* LogType="*" TaskName=*
| bucket span=1w _time
| stats count(eval(LogMessage="errorneously")) as Failed_Count, count(eval(LogMessage="execution")) as Success_Count, count(eval(LogMessage="execution2")) as Success_Count1 by _time
| eval tot_count=Failed_Count + Success_Count + Success_Count1
| eval scount=Success_Count + Success_Count1
| eval succ_per=round((scount/tot_count)*100,0)
| timechart span=1w avg(succ_per)

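The scattered dots are weeks where avg(succ_per) is null, which breaks the line. The chart's null handling can be changed in the panel's Simple XML (a sketch; "connect" draws through the gaps, "zero" treats them as 0):

<option name="charting.chart.nullValueMode">connect</option>

Alternatively, appending | fillnull value=0 after the timechart forces empty weeks to 0, at the cost of making missing data look like a 0% success rate.
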
Hi Splunkers, we are looking for a solution to send Splunk data to a Snowflake schema using DB Connect. Has anyone implemented this setup? If yes, please share the solution here. Thanks in advance.

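In principle DB Connect's output path should cover this: define a connection using the Snowflake JDBC driver, create a DB Connect output mapped to the target table, then push search results with dbxoutput. A sketch, assuming an output named snowflake_out has already been configured in DB Connect (the output name, index, and fields here are hypothetical):

index=my_index sourcetype=my_data
| table event_time, user, action
| dbxoutput output="snowflake_out"

The main setup work is on the connection side: dropping the Snowflake JDBC jar into DB Connect's drivers directory and defining the JDBC URL for your Snowflake account.
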
Hello, I am playing with data annotations, uploading them to my dashboard from a CSV. Is it possible to make the annotation_label dynamic in a way that lets me copy the text out of it? Or perhaps even click it when it is a link? Please see the screenshot. Regards, Kamil

Hello. Once again, a noob question: is it possible to add a dropdown inside a panel with a table, using JavaScript?

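Depending on what you need, JavaScript may not even be required: Simple XML allows an <input> element directly inside a <panel>, above the table. A minimal sketch (the token, choices, and search are placeholders):

<panel>
  <input type="dropdown" token="status_tok">
    <label>Status</label>
    <choice value="*">All</choice>
    <choice value="failed">Failed</choice>
    <default>*</default>
  </input>
  <table>
    <search>
      <query>index=my_index status=$status_tok$ | table _time, status, message</query>
    </search>
  </table>
</panel>

JavaScript only becomes necessary if the dropdown has to be injected or populated in ways Simple XML can't express.
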
Can anyone suggest why the logs are coming up like this? I added the monitor stanza. Could anyone suggest some troubleshooting steps or a solution?

inputs.conf stanza:

[monitor:///opt/netmonitor/LOG/*]
index = osnix
sourcetype = ping_status_log_new
crcSalt = <SOURCE>

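Splunk's own TailingProcessor logs usually say why a monitored file is being skipped, re-read, or mis-handled; a check like this (a sketch) is a good first step:

index=_internal source=*splunkd.log* component=TailingProcessor "/opt/netmonitor/LOG"

Also note that crcSalt = <SOURCE> makes Splunk treat every distinct path as a brand-new file, so rotated or renamed files under /opt/netmonitor/LOG/ get re-indexed from the beginning, which is a common cause of duplicated or oddly interleaved events.
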
Hi, I'm struggling with a simple search. I have multiple events for the same username, and I need to count the number of distinct usernames that appeared in those events. I start with just one day, when there should be only one username. But this search returns a count of 7, because it counts events, not usernames, even though I put the username field in the count command:

index=* policy_name=* | stats count(username)

I tried adding dedup before stats, but it didn't do anything. What am I missing, please? Thanks, Alina

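count(username) counts how many events have a non-null username, not how many different usernames there are. The distinct count function does the latter (same base search):

index=* policy_name=* | stats dc(username) AS unique_usernames

dc() (alias distinct_count()) returns 1 here for 7 events that share one username.
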
I have 3 indexes containing events with IP addresses: index1, index2, and index3. My goal is to return a list of all IP addresses present in index1 and see whether those match IPs in index2 and index3. Three different indexes with three different IP field names:

index1: src_ip
index2: ipaddr
index3: ip

Any help would be appreciated, thank you.

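One way is to search all three indexes at once, normalize the field names with coalesce, and then group by IP (a sketch using your index and field names):

(index=index1) OR (index=index2) OR (index=index3)
| eval ip_norm=coalesce(src_ip, ipaddr, ip)
| stats values(index) AS found_in by ip_norm
| where isnotnull(mvfind(found_in, "index1"))

The where clause keeps only IPs seen in index1; the found_in column then shows which of the other indexes each one also appeared in.
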
I currently have the Splunk Add-on for Microsoft Cloud Services installed on a heavy forwarder, pulling logs from blob storage. I'm wondering how, or what, manages the checkpoint files located under modinputs. They do not seem to rotate out or be deleted; even after deleting an input from the GUI, its checkpoint folder remains. Thanks in advance.

Error in 'SearchParser': The search specifies a macro 'summariesonly' that cannot be found. Reasons include: the macro name is misspelled, you do not have "read" permission for the macro, or the macro has not been shared with this application. Click Settings, Advanced search, Search Macros to view macro information.

How do I enable the 'summariesonly' macro?

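The summariesonly macro ships with the Splunk Common Information Model add-on (Splunk_SA_CIM), so the usual causes are CIM not being installed or the macro not being shared beyond its app. A quick existence/permission check (a sketch):

| rest /servicesNS/-/-/configs/conf-macros splunk_server=local
| search title=summariesonly
| table title, eai:acl.app, eai:acl.sharing

If nothing comes back, install Splunk_SA_CIM; if it comes back with app-level sharing, change the macro's permissions to Global (or share it with the app running the search).
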
Hello! I have an environment with about 200 machines, all Windows Servers. All servers send data over TCP port 9997 directly to my Heavy Forwarder, and everything lands in the "Windows" index.

What happens is that, about once or twice a day, the logs sent by the Universal Forwarders stop arriving from all machines, leaving the Windows index blank. All other data that does not arrive through TCP 9997 is normal, such as some scripts that bring in other types of information and save to other indexes. The problem is only solved when Splunk is restarted on the Heavy Forwarder.

Trying to diagnose the problem, the only thing I could find is this message on all servers with the Universal Forwarder installed:

02-16-2022 15:20:51.293 -0400 WARN TcpOutputProc - Tcpout Processor: The TCP output processor has paused the data flow. Forwarding to output group default-autolb-group has been blocked for 82200 seconds

Has anyone gone through something similar, or can anyone help me identify what is happening? Note that the logs on the Heavy Forwarder don't show me anything relevant. Thanks in advance!

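That UF warning means the HF has stopped accepting data, which usually traces back to a blocked queue inside the HF itself. Its metrics.log records which queue blocks first (a sketch; replace the host filter with your HF):

index=_internal host=my_hf source=*metrics.log* group=queue blocked=true
| timechart span=1m count by name

Whichever queue reports blocked first and most often (parsing, typing, indexing, or an output queue) tells you whether the HF is stalling on processing or on its onward connection to the indexers.
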
Hey guys. I have been trying to make a compliance/noncompliance list. I have a big search that tables all the data I need. I tried using eval case to assign compliance/noncompliance to the hosts, but it is not working. There could be multiple problems. The search is this:

| rex field=_raw "(Available Updates)\s+(?<AvailableUpdates>.+)"
| rex field=_raw "(.Net Version is)\s+(?<DotNetVersion>.+)"
| rex field=_raw "(Powershell Version is)\s+(?<PowershellVersion>.+)"
| rex field=_raw "(Was able to resolved google.dk)\s+(?<DNS>.+)"
| rex field=_raw "(Firewall's)\s+(?<AllFirewalls>.+)"
| rex field=_raw "(Commvault)\s+(?<Commvault>.+)"
| rex field=_raw "(Snow)\s+(?<Snow>.+)"
| rex field=_raw "(Symantec)\s+(?<Symantec>.+)"
| rex field=_raw "(Splunk Forwarder)\s+(?<Splunk>.+)"
| rex field=_raw "(SNMP Service)\s+(?<SNMP>.+)"
| rex field=_raw "(Zabbix Agent Version)\s+(?<Zabbix4>.+)"
| rex field=_raw "(Zabbix Agent2)\s+(?<Zabbix2>.+)"
| rex field=_raw "(VMware)\s+(?<VMware>.+)"
| rex field=_raw "(Backup route)\s+(?<BackupRoute>.+)"
| rex field=_raw "(Metric)\s+(?<Metric>.+)"
| rex field=_raw "(IPconfig)\s+(?<IPconfig>.+)"
| rex field=_raw "(DeviceID VolumeName)\s+(?<Storage>.+)"
| rex field=_raw "(Memory)\s+(?<Memory>.+)"
| rex field=_raw "(Amount of Cores)\s+(?<CPU>.+)"
| rex field=_raw "(is Licensed with)\s+(?<WindowsLicense>.+)"
| rex field=_raw "(Running Microsoft)\s+(?<OS>.+)"
| rex field=_raw "(OS Uptime is)\s+(?<Uptime>.+)"
| join type=outer host [| inputlookup Peer_Dashboard_Comments.csv]
| stats latest(AvailableUpdates) as AvailableUpdates, latest(DotNetVersion) as DotNetVersion, latest(PowershellVersion) as PowershellVersion, latest(DNS) as DNS, latest(AllFirewalls) as AllFirewalls, latest(Commvault) as Commvault, latest(Snow) as Snow, latest(Symantec) as Symantec, latest(Splunk) as Splunk, latest(SNMP) as SNMP, latest(Zabbix4) as Zabbix4, latest(Zabbix2) as Zabbix2, latest(VMware) as VMware, latest(BackupRoute) as BackupRoute, latest(Metric) as Metric, latest(IPconfig) as IPconfig, latest(Storage) as Storage, latest(Memory) as Memory, latest(CPU) as CPU, latest(WindowsLicense) as WindowsLicense, latest(OS) as OS, latest(Uptime) as Uptime, latest(Comments) as Comments by host
| fillnull value="-"
| eval status=case(AvailableUpdates="= 0" AND NOT match(DotNetVersion,"Not!") AND match(PowershellVersion,"5.1") AND DNS="142.250.179.195" AND AllFirewalls="are disabled" AND match(Commvault,"is Installed") AND match(Snow,"is Installed") AND match(Symantec,"is Installed") AND match(Splunk,"is Installed") AND match(SNMP,"is installed") AND match(Zabbix4,"is installed") AND match(Zabbix2,"is installed") AND match(VMware,"is Installed") AND match(BackupRoute,"was found") AND match(Metric,"is - Ethernet") AND match(WindowsLicense,"Windows") AND (match(OS,"2016") OR match(OS,"2019")),"Compliant",1=1,"noncompliant")
| stats distinct_count(Compliant) as Compliant

It doesn't fail, but it reports back a result of 0 compliant hosts. If I try to list noncompliant hosts, it is also 0. I have an AND (match(OS,"2016") OR match(OS,"2019")) in there; is that an OK way of matching a single field to two values? There is also an AND NOT match(DotNetVersion, ...) near the beginning; is it okay to use both match and NOT match in the same case()? Anything I'm missing here?

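Both of those constructs are fine: match(OS,"2016") OR match(OS,"2019") is a valid way to accept two values, and mixing match with NOT match inside one case() condition is legal. The likely culprit is the last line: case() writes its result into a field named status, but the final stats references a field named Compliant, which never exists, so every count is 0. A sketch of a corrected ending:

| eval status=case( ... , "Compliant", 1=1, "noncompliant")
| stats count by status

or, for per-outcome totals in one row:

| stats count(eval(status="Compliant")) AS compliant_hosts, count(eval(status="noncompliant")) AS noncompliant_hosts

One more thing worth checking: fillnull value="-" runs before the case(), so hosts missing a check get "-" in that field and silently fall through to noncompliant.
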
My output format is 20220129054235.496380-300. I need to convert the date/time portion (20220129054235) to epoch time and find the difference between it and now(); this will give the uptime in days.

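That looks like a WMI/CIM datetime (YYYYMMDDhhmmss.ffffff±UUU). A sketch of the conversion, assuming the value sits in a field called lastboot (a hypothetical name):

| eval boot_epoch = strptime(substr(lastboot, 1, 14), "%Y%m%d%H%M%S")
| eval uptime_days = round((now() - boot_epoch) / 86400, 1)

substr() peels off the 14-digit date/time portion, strptime() turns it into epoch seconds, and the difference from now() divided by 86400 gives days. The -300 suffix is a UTC offset in minutes; if your hosts span timezones, it would need to be folded into boot_epoch as well.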