All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I have hotel bookings created in March 2020, but the check-in dates fall after March 2020. How can I see future bookings (by check-in date) in Splunk for each month, from April 2020 until December 2021? I have set the event timestamp (via the input date function) based on the check-in date, but I cannot see any results.
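One possible approach, sketched here with `index=bookings` as a placeholder: because the default time range ends at "now", events timestamped in the future only appear if `latest` is explicitly extended past now, and the search can then be bucketed by month:

```spl
index=bookings earliest="04/01/2020:00:00:00" latest="01/01/2022:00:00:00"
| timechart span=1mon count AS future_bookings
```

Note also that at index time Splunk limits how far in the future a timestamp may lie (the `MAX_DAYS_HENCE` setting in props.conf), so indexing events with check-in dates many months ahead may require raising that limit.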
Hi Team,

I have antivirus events and firewall traffic. I want to run the antivirus search as a subsearch with the keyword "trojan", take values such as the IP and user information from that subsearch, and then pass those two fields to the main firewall search to see whether traffic was present for that IP at the time of detection, and what the username fields of the firewall and antivirus were.

My search is:

index=cisco_asa earliest=-15m [search index=bitdefender "trojan" earliest=-15m | fields src_ip, user | rename src_ip as dest_ip | rename user as bitdefender_user] | stats values(dest_ip), values(dest_port), values(url), values(user) as firewall_user, values(bitdefender_user) by src_ip

My challenge is that the query above returns no results, but when I run the query below, with the bitdefender_user field removed, I do get results, just without the Bitdefender username. I want to see both the firewall and the Bitdefender username fields in the output. How can I achieve that?

index=cisco_asa earliest=-15m [search index=bitdefender "trojan" earliest=-15m | fields src_ip | rename src_ip as dest_ip] | stats values(dest_ip), values(dest_port), values(url), values(user) by src_ip

Just for information: the username field present in both the firewall and Bitdefender data is "user".
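For context on why the first query fails: a subsearch is expanded into an implicit AND of all the fields it returns, so returning bitdefender_user forces the firewall search to look for a bitdefender_user field that does not exist in cisco_asa events, matching nothing. One possible alternative (a sketch, not the only way): search both indexes together and correlate with stats instead of a subsearch:

```spl
(index=cisco_asa earliest=-15m) OR (index=bitdefender "trojan" earliest=-15m)
| eval key=if(index=="bitdefender", src_ip, dest_ip)
| eval bitdefender_user=if(index=="bitdefender", user, null())
| eval firewall_user=if(index=="cisco_asa", user, null())
| stats values(dest_port) AS dest_port values(url) AS url
        values(firewall_user) AS firewall_user
        values(bitdefender_user) AS bitdefender_user by key
| where isnotnull(bitdefender_user) AND isnotnull(firewall_user)
```

The final where keeps only IPs that appear in both data sources, which approximates the original intent of "detection plus matching traffic".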
Hi,

Is it possible to change the 'name' of the value in a pie chart displayed on a geostats map? The example below shows the pie chart. The blue 1 indicates that there are 4 devices with that status, one device with status 14, and one with status 3. I have been trying to change the numbers into a readable status (1=UP, 3=WARNING, 14=Critical), but for some reason I can't seem to make it work.

This is the SPL creating the world map:

index="xxx" Splunk_Node=true
| stats latest(Latitude) as Latitude, latest(Longitude) as Longitude, latest(Node_Status) as Node_Status by NodeID
| geostats count(NodeID) by Node_Status latfield=Latitude longfield=Longitude

I have been trying the line below: I added it and changed the SPL from Node_Status to NodeStatus, but then the map stays empty.

| eval NodeStatus = if(NodeStatus=1,"UP",if(NodeStatus=2,"DOWN",if(NodeStatus=3,"WARNING",if(NodeStatus=14,"CRITICAL",isnull)))) `comment("Change the status number to something readable")`

Does anyone have any thoughts on this?
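One likely cause worth checking: the eval above reads from NodeStatus, a field that doesn't exist yet (the stats output is named Node_Status), and the bare word isnull is not a valid default value. A sketch of one way to write it, mapping the existing field in place before geostats runs:

```spl
index="xxx" Splunk_Node=true
| stats latest(Latitude) AS Latitude latest(Longitude) AS Longitude latest(Node_Status) AS Node_Status by NodeID
| eval Node_Status=case(Node_Status==1,"UP", Node_Status==2,"DOWN", Node_Status==3,"WARNING", Node_Status==14,"CRITICAL", true(),"OTHER")
| geostats count(NodeID) by Node_Status latfield=Latitude longfield=Longitude
```

The case() catch-all true() branch avoids null statuses dropping points off the map.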
The pie chart is showing the percentage outside, along with the field value, separated by a comma. Is it possible to separate them with brackets instead? It currently looks like value,10%, but I want it to be value(10%).

I'm using <option name="charting.chart.showPercent">true</option>
Currently we are running Splunk Enterprise 6.5.3 and want to upgrade to 8.0. The server OS is Ubuntu 14.04.1 LTS. We have the following questions:
1. Is it possible to upgrade directly to 8.0 from 6.5.3?
2. Are there any dependencies that need to be installed, such as Java or Python?
Hi all,

I have a subsearch that returns the delta between two events. The problem is, sometimes the two events I'm looking for don't exist. This results in the following error:

Error in 'eval' command: The expression is malformed. An unexpected character is reached at ')'.

The subsearch looks like this:

| eval DisruptionInSeconds = [ my subsearch that returns the delta between two events | sort - _time | stats sum(timeDeltaS) as search | eval search ="\"".search."\"" ]

If these two events don't exist, the search should return 0 (not NULL). How do I do that?

Thanks in advance for your help.
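One common idiom for this (a sketch; the subsearch body is left as the placeholder from the question): use appendpipe to inject a row with 0 only when the pipeline produced no results, so the subsearch always returns a value:

```spl
| eval DisruptionInSeconds = [ search <your subsearch here>
    | sort - _time
    | stats sum(timeDeltaS) AS search
    | appendpipe [ stats count | where count==0 | eval search=0 | fields search ]
    | eval search="\"" . tostring(search) . "\""
    | fields search ]
```

The inner appendpipe runs stats count over the results so far; when that count is 0 it emits a single row with search=0, otherwise the where clause discards it and the real sum passes through.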
I'm getting the same error while pulling the data from SNOW. Is there any way to fix it?

Error:

2020-01-20 10:01:48,435 ERROR pid=124003 tid=Thread-1 file=snow_data_loader.py:do_collect:177 | Failure occurred while connecting to https://roche.service-now.com/api/now/table/incident?sysparm_display_value=all&sysparm_limit=1000&sysparm_exclude_reference_link=true&sysparm_query=sys_updated_on>=2019-01-19+07:19:47^ORDERBYsys_updated_on. The reason for failure=Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_snow/bin/snow_data_loader.py", line 169, in do_collect
    "Authorization": "Basic %s" % credentials
  File "/opt/splunk/etc/apps/Splunk_TA_snow/bin/Splunk_TA_snow/httplib2_helper/httplib2_py2/httplib2/__init__.py", line 2135, in request
    cachekey,
  File "/opt/splunk/etc/apps/Splunk_TA_snow/bin/Splunk_TA_snow/httplib2_helper/httplib2_py2/httplib2/__init__.py", line 1796, in request
    conn, request_uri, method, body, headers
  File "/opt/splunk/etc/apps/Splunk_TA_snow/bin/Splunk_TA_snow/httplib2_helper/httplib2_py2/httplib2/__init__.py", line 1737, in _conn_request
    response = conn.getresponse()
  File "/opt/splunk/lib/python2.7/httplib.py", line 1137, in getresponse
    response.begin()
  File "/opt/splunk/lib/python2.7/httplib.py", line 448, in begin
    version, status, reason = self._read_status()
  File "/opt/splunk/lib/python2.7/httplib.py", line 404, in _read_status
    line = self.fp.readline(_MAXLINE + 1)
  File "/opt/splunk/lib/python2.7/socket.py", line 480, in readline
    data = self._sock.recv(self._rbufsize)
  File "/opt/splunk/lib/python2.7/ssl.py", line 772, in recv
    return self.read(buflen)
  File "/opt/splunk/lib/python2.7/ssl.py", line 659, in read
    v = self._sslobj.read(len)
SSLError: ('The read operation timed out',)
I have a search parameter, for example: search Data="Test". This data is in the index, it is ingested daily, and it has a daily _time. This Data field is a filter which I select, and it then shows me all the data with Data="Test" via a drilldown token.

Now this field's value has changed to "NoTest", so when I choose from the drilldown, I see "NoTest" instead of "Test". If I choose "NoTest", none of the previous data shows up, because it all has Data="Test". (This table has historical data, so I need to show both.)

I need a way to show both "NoTest" and "Test" without changing much of the query (the other filters are unchanged; only this one has changed).
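One possible approach (a sketch; index=myindex and $data_tok$ are placeholders for the real index and drilldown token): normalize the old stored value to the new one with an eval before the token filter runs, so selecting "NoTest" matches both the historical "Test" events and the new ones:

```spl
index=myindex
| eval Data=if(Data=="Test", "NoTest", Data)
| search Data="$data_tok$"
```

This keeps the rest of the query and the other filters untouched; only the one eval line is added ahead of the existing filter.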
Hello, I have a list of malicious websites which I would like to upload into Splunk so I can monitor whether any users are trying to access malicious sites. Can you please help me with that? Thank you
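A common pattern for this (a sketch; the lookup file name malicious_sites.csv, its domain column, the index proxy, and the field url_domain are all placeholders for your environment): upload the list as a lookup file under Settings > Lookups, then match web traffic against it:

```spl
index=proxy
| lookup malicious_sites.csv domain AS url_domain OUTPUT domain AS matched_domain
| where isnotnull(matched_domain)
| stats count by user, matched_domain
```

Saved as an alert, this can notify you whenever a user's traffic matches an entry on the list.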
I have one more issue which I am facing.

index=opennms "uei.opennms.org/nodes/nodeUp" OR "uei.opennms.org/nodes/nodeDown" "WANRT*"
| rex field=eventuei "uei.opennms.org/nodes/node(?<bgpPeerState>.+)"
| eval Status=case(bgpPeerState=="Up", "UP", bgpPeerState=="Down", "DOWN", 1=1, "Other")
| rename _time as Time_CST
| fieldformat Time_CST=strftime(Time_CST,"%x %X")
| dedup nodelabel sortby - Time_CST
| table nodelabel Status Time_CST

Output:

nodelabel   Status   Time_CST
NZSKB       DOWN     03/24/20 10:33:33
GQPCW       DOWN     03/24/20 10:30:15
EGSUM       UP       03/24/20 10:19:39
GQHAN       DOWN     03/24/20 10:16:57
FJVUD       UP       03/24/20 10:05:20
PGPKC       UP       03/24/20 09:58:09

Is it possible to display only the DOWN cases in the dashboard? I tried with | where =="DOWN", but it converted all the UPs to DOWN.
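For what it's worth, the where clause needs the field name on its left side, and if the goal is "nodes whose latest state is DOWN", the filter belongs after the dedup that keeps the latest event per node. A sketch of the tail of the query:

```spl
| dedup nodelabel sortby - Time_CST
| where Status=="DOWN"
| table nodelabel Status Time_CST
```

Filtering before the dedup would instead show nodes that were DOWN at any point, even if they have since come back UP.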
I am ingesting Azure metrics data using the TA-MS-AAD app, but the data has a host field:

{
  "_time": "2020-03-26T08:09:00Z",
  "average": 2.8653846153846154,
  "host": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/xxxxxxxxxxx/providers/Microsoft.Web/serverFarms/xxxxxxxxxxxxx",
  "metric_name": "CpuPercentage",
  "namespace": "microsoft.web/serverfarms",
  "subscription_id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
  "unit": "Percent"
}

I want to be able to group results by the JSON host field and not have the results polluted by the server host name. Field extraction doesn't work 100% because the host field can be in different places in the raw text for the same metric. Two examples:

{"metric_name": "CpuPercentage", "average": 0.65625, "_time": "2020-03-26T08:22:00Z", "host": "/subscriptions/xxx/resourceGroups/xxx/providers/Microsoft.Web/serverFarms/xxx", "namespace": "microsoft.web/serverfarms", "unit": "Percent", "subscription_id": "xxx"}

{"subscription_id": "xxx", "host": "/subscriptions/xxx/resourceGroups/xxx/providers/Microsoft.Web/serverFarms/xxx", "metric_name": "CpuPercentage", "unit": "Percent", "_time": "2020-03-26T07:51:00Z", "average": 0.0, "namespace": "microsoft.web/serverfarms"}

A field alias just renames all host field names, unless there is a way to differentiate between the two. Any help is most appreciated.
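One possible approach (a sketch; the index and sourcetype are placeholders): extract the JSON host at search time with spath, which parses the JSON structurally and so does not care where the key sits in the raw text, and write it to a differently named field so it never collides with the default host:

```spl
index=azure_metrics sourcetype=azure:metrics
| spath input=_raw path=host output=json_host
| stats avg(average) AS avg_value by json_host, metric_name
```

Grouping then happens on json_host, while Splunk's own host field (the server name) is left alone.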
I tried to update the Identity Lookup Expanded manually, but I ended up deleting it. After that I started to get the below error messages:

The limit has been reached for log messages in info.csv. 23 messages have not been written to info.csv. Please refer to search.log for these messages or limits.conf to configure this limit.
[********.COM] Error 'Could not find all of the specified lookup fields in the lookup table.' for conf '(?::){0}XmlWinEventLog:' and lookup table 'identity_lookup_expanded'.
[******.COM] Error 'Could not find all of the specified lookup fields in the lookup table.' for conf '(?:::){0}snow:' and lookup table 'identity_lookup_expanded'.
[******.COM] Error 'Could not find all of the specified lookup fields in the lookup table.' for conf '(?i)source::....zip(.\d+)?' and lookup table 'identity_lookup_expanded'.
[*****.COM] Error 'Could not find all of the specified lookup fields in the lookup table.' for conf 'ActiveDirectory' and lookup table 'identity_lookup_expanded'.
[********.COM] Error 'Could not find all of the specified lookup fields in the lookup table.' for conf 'MSAD:NT6:DNS-Health' and lookup table 'identity_lookup_expanded'.

I managed to retrieve the old CSV file and updated the "Identity Lookup Expanded" file in Splunk (the way I updated it was by uploading a new CSV "x" which contains all the data and doing | outputlookup identity_lookup_expanded), but the same errors still occur. Should I wait until the change takes effect, or do I need to do something else? Thanks in advance.
Hi all, I've got problems parsing the following file / content:

"CreationTime","LastWriteTime","LastAccessTime","Name","Length","Directory"
"25/03/2020 10:27:21","25/03/2020 10:27:36","25/03/2020 10:27:21","01.txt","5","C:\Share"
"25/03/2020 11:12:10","13/12/2019 11:48:07","25/03/2020 11:12:10","splunkforwarder-8.0.1.msi","68755456","C:\Share"
"25/03/2020 10:28:04","25/03/2020 10:28:17","25/03/2020 10:28:04","01.txt","13","C:\Share\A"
"25/03/2020 10:28:04","25/03/2020 10:28:32","25/03/2020 10:28:22","02.txt","12","C:\Share\A"
"25/03/2020 10:28:53","25/03/2020 10:28:53","25/03/2020 10:28:53","Empty.zip","22","C:\Share\B"

My problem is that Splunk doesn't recognise / use the header information and doesn't split per line. I tried the props.conf CSV options: header check, field delimiter, header delimiter, quotes option, field names, etc. All options display the same result: the header as one event and one of the lines (randomly) as another event.

Can anybody help me? THX - Markus
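A minimal props.conf sketch for this layout (the stanza name share_csv is a hypothetical sourcetype; the attributes are standard props.conf settings for structured data):

```ini
[share_csv]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 1
TIMESTAMP_FIELDS = CreationTime
TIME_FORMAT = %d/%m/%Y %H:%M:%S
SHOULD_LINEMERGE = false
```

One caveat worth knowing: INDEXED_EXTRACTIONS is applied where the file is read, so this stanza must live on the universal forwarder monitoring the file, not only on the indexer.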
Hello, I am new to Splunk and trying to create some dashboards. I have a multiselect field and a button. The multiselect input will search for a specific field from a CSV file, let's say the 'name' field's values, and the button will delete those values from the CSV. How can I achieve this? I converted the dashboard to HTML but could not figure out how to implement it. Thanks and cheers!
Hi, we have Splunk Enterprise in a cluster, and on one SH we have installed ITSI, where many critical searches are running. On that server we are getting high CPU usage (more than 85%), and splunkd is taking most of it. Please help: how can we reduce this so that it works properly without affecting our production? Thanks and regards, Nikhil Dubey
Is there any way to ignore the first and last line from my JSON files?

{
  "hosts": {
    "sv-1000.local": [
      {
        "Impact": "This may help attackers to defeat time based authentications schemes.",
        "Network": "DEB304C5",
        "ServiceandPort": "general (icmp)",
        "ScanDate": "2020-01-02 00:36"
      }
    ]
  },
  "network": {
    "ScanCount": "21",
    "ScanDate": "2020-01-02 00:36:18",
    "ID": "DEB304C5",
    "Parent": "C8F90FE8",
    "Name": "-CAS"
  }
}
I have a query as shown below:

| stats count by stn stn_status

Result:

stn     stn_state     count
stn_1   completed     20
stn_1   In progress   2
stn_1   failed        8
stn_2   completed     30
stn_3   completed     10
stn_3   failed        10

Expected results:

stn     completed   In progress   failed
stn_1   20          2             8
stn_2   30
stn_3   10                        10

Help me with this. Thanks in advance.
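The pivot shown above (one row per stn, one column per status) is what the chart command produces; a minimal sketch keeping the field names from the question:

```spl
| chart count over stn by stn_status
```

Equivalently: | stats count by stn stn_status | xyseries stn stn_status count. Empty cells can be zero-filled afterwards with | fillnull value=0 if preferred.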
Is there any possible solution to monitor the inode usage of linux system in Splunk?
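One possible approach is a scripted input that shells out to df. A sketch of such a script (assumes a df that supports -P for POSIX output and -i for inode counts, e.g. GNU coreutils; column layout may differ on other platforms):

```shell
#!/bin/sh
# Emit one key=value line per filesystem with inode usage,
# so Splunk can extract the fields automatically.
df -Pi | tail -n +2 | while read fs inodes iused ifree ipct mount; do
  printf 'filesystem=%s mount=%s inodes_total=%s inodes_used=%s inodes_free=%s inodes_used_pct=%s\n' \
    "$fs" "$mount" "$inodes" "$iused" "$ifree" "$ipct"
done
```

Registered in inputs.conf as a [script://...] stanza with an interval, the output lands as regular events that a search or alert can threshold on inodes_used_pct.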
Hi, I am interested in creating a search and alert for when a specific set of OUs contains members. The OUs should typically be empty, and I would like to receive a notification when an OU contains computers or users. I am new to Splunk, so apologies in advance if this request is trivial. I have checked the forum already before asking but was unable to find an answer. Thanks
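One possible approach, sketched under the assumption that the Splunk Supporting Add-on for Active Directory (SA-ldapsearch) is installed; the OU distinguished name below is a placeholder for your own:

```spl
| ldapsearch domain=default search="(|(objectClass=user)(objectClass=computer))" attrs="distinguishedName"
| search distinguishedName="*OU=ShouldBeEmpty,DC=example,DC=com"
| stats count AS members
| where members > 0
```

Saved as an alert that triggers when results are returned, this would notify you whenever the OU is non-empty. Without SA-ldapsearch, a similar check could be built on whatever AD audit events you already index.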
I am trying to make a filter that will filter out all VPXD, VPXA, and HOSTD data coming in from VM hosts. Below is the Excel sheet I use to define log use cases; green means I want to continue ingesting, yellow means I want to filter out.

Below is what the VPXA message looks like when hitting port 514 on the syslog server:

Msg: 2020-03-26T04:09:53.295Z MyDomainName.com Vpxa: verbose vpxa[9164B70] [Originator@6876 sub=VpxaHalCnxHostagent opID=WFU-357897ba] Received WaitForUpdatesDone callback\0x0a

Below is what the HOSTD message looks like when hitting port 514 on the syslog server:

Msg: 2020-03-26T04:13:31.559Z MyDomainName.com Hostd: verbose hostd[FFC1B70] [Originator@6876 sub=PropertyProvider] RecordOp ASSIGN: guest.disk, 40. Sent notification immediately.\0x0a

Below is my current filter in place; I filter on hostname, and I still want to do this. I just want it to also drop any message with the HOSTD or VPXA process and keep everything else. Thanks for the help!
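For reference, the standard Splunk mechanism for dropping events at parse time is a transforms.conf stanza that routes matches to the nullQueue. A sketch (the sourcetype name vmware:syslog is a placeholder for whatever sourcetype these syslog events arrive under, and the regex is derived from the sample messages above):

```ini
# props.conf (on the indexer or heavy forwarder that parses the data)
[vmware:syslog]
TRANSFORMS-drop_vmware = drop_vpxa_hostd_vpxd

# transforms.conf
[drop_vpxa_hostd_vpxd]
REGEX = \s(Vpxa|Hostd|Vpxd):\s
DEST_KEY = queue
FORMAT = nullQueue
```

Events matching the regex are discarded before indexing; everything else, including the existing hostname-based filtering, continues to apply unchanged.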