Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I have a search using timechart count by [value], and I'd like to set up an alert for when any of the values exceeds 25 results in 30 minutes. Search:

index=[redacted] ... | rex field=message "responseCode : (?&lt;response&gt;.*)," | rex field=message "errorMessageKey : (?&lt;response&gt;.*)," | timechart span=30m count by response usenull=f useother=f

The response comes back like Application.errorMessage, a simple short string. How can I achieve this?
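One possible approach (a sketch, reusing the poster's rex extractions and assuming the same index and message format): replace timechart with bin + stats so each row is one (time bucket, response) pair, then filter on the threshold. Saved as an alert, it would trigger whenever the result count is greater than zero.

```
index=[redacted] ...
| rex field=message "responseCode : (?<response>.*),"
| rex field=message "errorMessageKey : (?<response>.*),"
| bin _time span=30m
| stats count by _time, response
| where count > 25
```

Scheduling the alert every 30 minutes over the last 30 minutes, with trigger condition "number of results > 0", would fire when any single response value crosses 25 in a window.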
Hello, is there a Splunk server version for the Raspberry Pi 4 (ARM processor)? Having this would be helpful for building a lab and learning how to use Splunk, while deploying forwarders on other servers (Windows, Linux, macOS, ...). Thanks
After searching the answered questions, I do not see my question addressed. If I have several indexes whose data has been frozen to buckets, is there a way to search the buckets for an index without thawing them first? I'm trying to limit the number of buckets that I need to thaw. Thank you.
I have a column duration with this time format: 01:20:00.000000 . How do I convert time format from 01:20:00.000000 to "1 Hr 20 Mins"?
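A minimal eval sketch, assuming the field is named duration and always has the form HH:MM:SS.ffffff: split on the colons and rebuild the string from the hour and minute parts.

```
| eval parts = split(duration, ":")
| eval hrs = tonumber(mvindex(parts, 0)), mins = tonumber(mvindex(parts, 1))
| eval pretty_duration = hrs." Hr ".mins." Mins"
```

For 01:20:00.000000 this produces "1 Hr 20 Mins". If seconds should be shown when non-zero, the same pattern extends with mvindex(parts, 2).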
Hello, rather than run three separate reports on three different dates, I'd like to run ONE report that covers only the following dates: May 9, 2020, May 16, 2020, and May 23, 2020, and I'd like to search those days between 11:00 AM and 1:00 PM. Thank you for your help! Example of the search I'd like to incorporate this into:

"IP Address" OR "IP Address" OR "IP Address" | timechart count by src | sort -count
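One way to sketch this (keeping the poster's base search, and deriving day and hour from _time rather than relying on indexed date_* fields, which do not adjust for timezone): bound the search to the overall span, then filter to the three dates and the 11:00-13:00 window.

```
("IP Address" OR "IP Address" OR "IP Address")
    earliest="05/09/2020:00:00:00" latest="05/24/2020:00:00:00"
| eval day = strftime(_time, "%Y-%m-%d"), hour = tonumber(strftime(_time, "%H"))
| where (day="2020-05-09" OR day="2020-05-16" OR day="2020-05-23")
    AND hour >= 11 AND hour < 13
| timechart count by src
```

The earliest/latest bounds keep the scan small; the where clause then keeps only events from the three target days in the two-hour window.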
When rendering maps from an unauthenticated session, we receive the error: "Unsupported Splunk version detected - Maps+ for Splunk requires Splunk 7.x". However, when already authenticated to the hosting Splunk server with Splunk Web credentials, as you would be for normal searches, the map loads properly with no error. What could be the cause of this behavior, and how do we correct it?
Hi guys, I want to expand the disk space for indexers hosted on Azure as VMs. It is an indexer cluster that is entirely on Azure; the only forwarder is on-prem. Note: we used Terraform scripts when creating the VMs on Azure last year, and Ansible as the automation for installing all Splunk instances and configurations. Kindly help me with the steps to be followed on the Azure end or the indexer end; any suggestion is welcome!
I have a search head cluster able to query two indexer clusters. It used to be linked to a single indexer cluster and was working fine, but the moment I linked it to the second indexer cluster, it started to receive partial results when querying indexes from either cluster. The job also shows this error message:

Unable to distribute to peer named indexer-b1 at uri=172.16.xx.xxx:8089 using the uri-scheme=https because peer has status=Down. Verify uri-scheme, connectivity to the search peer, that the search peer is up, and that an adequate level of system resources are available. See the Troubleshooting Manual for more information.

This server, indexer-b1 at uri=172.16.xx.xxx:8089, is part of the second indexer cluster. Although the message says it is down, in the other cluster master's DMC this server looks completely fine. At first sight, it seems that one indexer cluster is trying to sync with the other? If that is the case, how can I isolate them?
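For reference, a search head can search multiple indexer clusters independently when each cluster master gets its own stanza in server.conf on the search head; the two clusters then stay isolated, each with its own pass4SymmKey. A sketch, with hypothetical stanza labels and master URIs:

```
# server.conf on each search head member (labels and IPs are examples)
[clustering]
mode = searchhead
master_uri = clustermaster:east, clustermaster:west

[clustermaster:east]
master_uri = https://10.0.1.10:8089
pass4SymmKey = <key for cluster A>

[clustermaster:west]
master_uri = https://10.0.2.10:8089
pass4SymmKey = <key for cluster B>
```

If the stanzas are already separate, the status=Down error usually points at connectivity instead: worth verifying that every search head member can reach indexer-b1 on port 8089 directly, since the DMC on the cluster master only proves the master can reach it.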
Hello, I downloaded the trial version of Splunk 8.0.1, and I want to access the host over SSH with PuTTY, but I don't know the configuration. How can I connect? Cheers
We recently upgraded from 7.1.2 to 8.0.3 on on-prem Splunk Enterprise. A previously working saved search is no longer returning the correct results:

| transaction session_id maxspan=30s

Looking into it, it appears the transaction command is no longer closing transactions when the maxspan (30s) value is hit. This leaves all transactions open, and the search then ends when it hits the default limit of 5000 open transactions. I need to create transactions out of 650,000 entries (two or three lines each), so needless to say this search no longer functions.

I can confirm this behavior by:
- | stats count by closed_txn shows all the transactions returned as closed_txn=0
- adding maxopentxn=5500 to the transaction command causes the number of returned results to go from 5000 to 5500
- adding maxevents=2 only closes some of the events

closed_txn  eventcount  count
0           1           1041
0           2           4458
1           2           1654

Transactions are supposed to close when: "The 'closed_txn' field is set to '1' if one of the following conditions is met: maxevents, maxpause, maxspan, startswith." (https://docs.splunk.com/Documentation/Splunk/8.0.3/SearchReference/Transaction -> Memory control options -> keepevicted)
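While investigating the regression, one workaround sketch is to group sessions with stats instead of transaction: stats streams on the indexers and keeps no open-transaction pool, so the 5000-transaction ceiling never applies. This assumes each session is fully identified by session_id (no session_id reuse, which maxspan would otherwise split apart):

```
| stats earliest(_time) as start, latest(_time) as end,
        count as eventcount by session_id
| eval duration = end - start
| where duration <= 30
```

It is not a drop-in replacement — it cannot split a reused session_id into multiple transactions the way maxspan does — but for two-to-three-line sessions it often returns equivalent results at much lower cost.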
I'm seeing Splunk Enterprise Version 8.0.2 (build a7f645ddaf91) running on Windows Server 2019, build 17763.1217. Individual search heads in a cluster crash with no log messages in Splunk or the Windows event logs, aside from a .dmp file:

ntdll!RtlpWaitOnCriticalSection+0x87:
00007ff8`3c99df33 ff4124  inc dword ptr [rcx+24h] ds:00000000`00000024=????????
Resetting default scope
EXCEPTION_RECORD: (.exr -1)
ExceptionAddress: 00007ff83c99df33 (ntdll!RtlpWaitOnCriticalSection+0x0000000000000087)
ExceptionCode: c0000005 (Access violation)
ExceptionFlags: 00000000
NumberParameters: 2
Parameter[0]: 0000000000000001
Parameter[1]: 0000000000000024
Attempt to write to address 0000000000000024
PROCESS_NAME: splunkd.exe
WRITE_ADDRESS: 0000000000000024
ERROR_CODE: (NTSTATUS) 0xc0000005 - The instruction at 0x%p referenced memory at 0x%p. The memory could not be %s.
EXCEPTION_CODE_STR: c0000005
EXCEPTION_PARAMETER1: 0000000000000001
EXCEPTION_PARAMETER2: 0000000000000024

Has anyone seen this issue before?
We used an inner join command to get the matching files. However, the same command does not work with the current format of the events, hence we extracted (rex) the data. Here is the current search that is not working; I would appreciate alternatives to it. The total number of files is 2605: 7 files do not match and 2598 files match. We need the search to work for the matching files.

index=xyz source=FILE sourcetype=syncsort:file JOBNAME="xyz-B" | rex field=_raw "\S{45}\s\S{11}\s\S{45}\s*\S{92}(?&lt;DATA&gt;[A-Z0-9].*)" | join type=inner DATA [ search index=xyz source=FILE sourcetype=syncsort:file JOBNAME="xyz-R" | rex field=_raw "\S{45}\s\S{11}\s\S{45}\s*\S{92}(?&lt;DATA&gt;[A-Z0-9].*)" | fields DATA ] | stats count as COUNT by DATA | addcoltotals labelfield=DATA label="Total"
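A common join-free alternative (a sketch, assuming the rex capture group is named DATA as in the fields/stats clauses): search both JOBNAME values at once, then keep only DATA values seen under both jobs. This avoids join's subsearch row limits, which can silently drop matches at this event volume.

```
index=xyz source=FILE sourcetype=syncsort:file
    (JOBNAME="xyz-B" OR JOBNAME="xyz-R")
| rex field=_raw "\S{45}\s\S{11}\s\S{45}\s*\S{92}(?<DATA>[A-Z0-9].*)"
| stats dc(JOBNAME) as jobnames, count as COUNT by DATA
| where jobnames = 2
| fields DATA, COUNT
| addcoltotals labelfield=DATA label="Total"
```

Note COUNT here counts events from both jobs combined; if only the xyz-B count is wanted, replace count with sum(eval(if(JOBNAME="xyz-B", 1, 0))).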
Hello, I've installed Centurion - Threat Hunting Feed Aggregator v1.0.1 on Splunk Enterprise version 7.2.9.1, and I need to configure a proxy for internet access. Any suggestion on where to put the proxy settings for a quick workaround? I'd also suggest a change for the next app version: add an option that lets the user modify proxy settings through the app's web interface in Splunk. Regards
The Status Indicator app is not showing results in sorted order when displaying the visualization in trellis format. I have a search whose output is in sorted order like below (image attached):

Total Divisions
Total Systems
Total Equipments
Overall Initiatives

When I apply the Status Indicator viz, it takes a random order like below (image attached):

Overall Initiatives
Total Divisions
Total Equipments
Total Systems

How can this be fixed? Please help. I want to show the order as per the search result. Attached image: /storage/temp/291860-status-indicator-improper-order.png
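Trellis layouts generally order panels lexically by the split-by field's value rather than by row order, so a common workaround is to prefix the values with a sort key. A sketch, assuming the split-by field is named metric (the actual field name in the poster's search may differ):

```
| eval metric = case(
    metric == "Total Divisions",     "1. Total Divisions",
    metric == "Total Systems",       "2. Total Systems",
    metric == "Total Equipments",    "3. Total Equipments",
    metric == "Overall Initiatives", "4. Overall Initiatives")
```

The numeric prefixes force the desired lexical order in the trellis, at the cost of the prefix appearing in each panel title.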
I have 400+ error codes and want to search for them. The issue is that my search for multiple codes over 5 months freezes (there may be a limit on the search?). My question is: how can I iterate over the error codes, searching one error code at a time, and have Splunk save each result before working on the next error code?
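One way to sketch this iteration is the map command driven by a lookup of codes: map runs one secondary search per input row, substituting the row's field values into $...$ tokens. Everything here is an assumption to illustrate the shape — a lookup file error_codes.csv with a column named code, and an index my_index:

```
| inputlookup error_codes.csv
| map maxsearches=500
    search="search index=my_index earliest=-5mon@mon code=\"$code$\"
            | stats count as hits
            | eval code=\"$code$\""
```

Each code's result row is produced before the next search starts, which bounds memory. For something persisted between runs, appending | collect index=<summary_index> inside the mapped search (summary indexing) saves each result as it completes; scheduled saved searches over smaller time slices are another common way to break up a 5-month scan.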
I have installed this add-on on Splunk's new version 8.0.3. My Splunk instance is also on Azure. I do not see any logs collected from Event Hub. I wanted to find out whether this add-on is compatible with Splunk 8.0.3.
Hello Team, we have configured the Machine Agent on one of our Linux machines. It was reporting all the metrics (CPU, memory, disk) for 5 to 10 minutes; after that, it reports only availability. In the agent logs, we can see the following error: "An error occurred while running the collector script, enable debug logging for more information." Kindly help me resolve this issue.
I want to implement automatic mail alerts for the client applications that invoke our application through web services via GET/POST requests, in order to detect a high number of incoming requests from an application, or slow requests caused by an application. I also want to reduce the manual effort of searching each index of calls and resolving the client hostname via the X-Forwarded-For header and a DNS lookup in Splunk. The goal is to be more proactive about planning for this. Please help me out.
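As a starting sketch (index, sourcetype, field names, and thresholds below are all assumptions to be replaced with the real ones): Splunk's built-in dnslookup external lookup resolves an IP to a hostname, after which stats per client can drive an alert with an email action.

```
index=web_logs sourcetype=access_combined
| lookup dnslookup clientip AS x_forwarded_for OUTPUT clienthost AS client_host
| stats count as requests, avg(response_time) as avg_response_s by client_host
| where requests > 1000 OR avg_response_s > 2
```

Saved as a scheduled alert with trigger condition "number of results > 0" and an email action, this covers both cases — high request volume and slow requests — in one search; since dnslookup does live reverse DNS per distinct IP, a cached CSV lookup of known client hosts scales better at high volume.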
I started my trial on May 21st and all was fine until I tried to log in today, May 26th; it won't accept the admin creds, and other user accounts are also unable to log in. The only error returned is 'Login Failed'. I'm unable to raise a support ticket (not entitled), and the accounts team told me to post here; not great. I have checked the logs for the universal forwarders and they have stopped sending data; the Splunk Cloud instance is refusing the connection (I assume the UF credentials have changed). There seems to be nothing I can do, and I can't request a new instance; not really a great customer experience. Any help or pointers on how to resolve this (or who to contact) would be appreciated.
Good morning Splunkers, I trust everyone is remaining safe. Ultimately, I'm attempting to obtain the average connection duration of external IPs for each destination zone based on firewall logs. The reporting period would be 24h. So the output I'd be looking for is something like this:

dest_zone   AvgDuration
ABC App     00:00:07:123
123 Zone    00:00:13:123
Cisco VPN   00:07:12:004

Please see my non-working query below:

index="pan_logs" sourcetype="pan_traffic" action="allowed" | eventstats earliest(_time) as earliest_time by src_ip | eventstats latest(_time) as latest_time by src_ip | eval Duration=latest_time-earliest_time | stats avg(Duration) as AvgDuration by src_ip | eval AvgDuration = strftime(AvgDuration/1000 , "%H:%M:%S:%3Q") | stats values(AvgDuration) by dest_zone

As always, any help is greatly appreciated.
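Two likely problems with the query above: the intermediate stats by src_ip discards dest_zone before the final stats, and strftime expects an epoch timestamp, not a duration (and the /1000 assumes milliseconds). If the PAN traffic events carry a per-session duration field in seconds (an assumption worth verifying against the sourcetype), a simpler sketch is:

```
index="pan_logs" sourcetype="pan_traffic" action="allowed"
| stats avg(duration) as AvgDuration by dest_zone
| eval AvgDuration = tostring(round(AvgDuration), "duration")
```

tostring(x, "duration") renders seconds as HH:MM:SS; keeping milliseconds as in the example table would need a manual eval from the fractional part instead.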