All Topics

I would like to write a query that looks for the word 'Disk' where the ##% value is above a certain percentage. I have the following, but it does not seem to work: (\N*Disk\D*)([0-9][0-9]|\d{2,})\%

From the example below, I should only be left with "Logging Disk Usage 85%".

Example:
CPU 99%
Logging Disk Usage 85%
/VAR log 87%
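A regex on its own cannot compare a number against a threshold; one common pattern is to extract the value with rex and then filter with where. A minimal sketch, assuming the events look like the example above (the index name and the threshold of 80 are placeholders):

```
index=your_index Disk
| rex "(?<metric>[^\r\n]*Disk[^\r\n]*?)\s+(?<pct>\d{1,3})%"
| where tonumber(pct) >= 80
| table metric pct
```

On "Logging Disk Usage 85%" this should extract metric="Logging Disk Usage" and pct=85; add (?i) at the start of the pattern if the events sometimes say "DISK".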
How do I stop my indexer from consuming logs from a universal forwarder that was decommissioned? We did not remove the UF from the host before we gave the device away, and it is no longer connecting to the Deployment Server. Can a null queue work?

Host = frontwave
source = WinEventLog:Security, WinEventLog:Application
index = winder
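Yes, a null queue on the indexer is the usual approach when you can no longer touch the forwarder itself. A sketch of the entries, assuming the host name really is frontwave (place these on the indexer and restart splunkd):

```
# props.conf on the indexer
[host::frontwave]
TRANSFORMS-null = drop_frontwave

# transforms.conf on the indexer
[drop_frontwave]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue
```

Note the data still crosses the network and is parsed before being dropped; it just never reaches an index or counts against the license.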
I have two searches and I wanted to do some filtering before doing multisearch. Is that not possible? My code looks something like this:

| multisearch
    [search index="XXX" | table Field1 Field2]
    [search index="YYY" | table Field11 Field22 | dedup Field11 Field22]
| table Field1 Field2 Field11 Field22

For this I am getting the error message:

Error in 'multisearch' command: Multisearch subsearches might only contain purely streaming operations (subsearch 2 contains a non-streaming command).
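multisearch requires every subsearch to be purely streaming, and the error indicates dedup is the offending command. One workaround is to move the dedup outside the multisearch; a sketch, assuming the dedup only needs to apply to rows where Field11/Field22 are populated (keepempty=true keeps the rows from the first search, which have those fields null):

```
| multisearch
    [search index="XXX" | table Field1 Field2]
    [search index="YYY" | table Field11 Field22]
| dedup Field11 Field22 keepempty=true
| table Field1 Field2 Field11 Field22
```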
Hi. For about a month, Splunk was receiving syslog messages and indexing the time sent to it into the _time field correctly (the time received 'just worked'), but then on Jan 11, all syslog messages from one network (172.16.2.x) had the timestamp ALWAYS BE EXACTLY Jan 11, 2022, 09:13. To be clear, all syslog messages from network b (172.18.100.x) continued working fine, with _time being updated to the timestamp received, but ever since that date, syslog messages from that one network have been 'stacking up' on that exact timestamp.

I started with the Linux servers themselves and found they are all running ntp and have the correct time, and then ran tcpdump -vvi on Splunk to see the times coming in (while manually sending syslog messages), and they look good. I cannot figure out what is going on or where to even look next!
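When _time freezes at one value while data keeps arriving, comparing _time with _indextime usually confirms that timestamp extraction, rather than the source clocks, is at fault (the lag will grow linearly for the affected hosts). A diagnostic sketch, assuming the affected hosts are searchable by that address pattern (adjust index/host to your environment):

```
index=* host=172.16.2.*
| eval lag_seconds = _indextime - _time
| timechart span=1h median(lag_seconds) by host
```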
Hi. I am running a Splunk query from the CLI and would like to export the results as rawdata to a file. When I specify a value in maxout, it honors that number and exports the correct number of events. However, I want all of the events, unlimited. So I set maxout to 0, per the documentation. When I do this, it exports nothing; the search just sits there forever, exporting nothing, even if it's a quick and simple search. Here is my query:

splunk search "index=ldap earliest=01/24/2022:00:00:01 latest=01/25/2022:23:59:00" -output rawdata -maxout 0 > /mnt/splunk-backups/test/ldap-raw-test.log

I want all events to be output as rawdata to the specified file. Am I missing something? We are running Splunk Enterprise 8.1.4. Thanks in advance!
Hi, I have events in Splunk where two fields, description and msg, denote error messages. I tried renaming msg and description to the same value, but I am not getting a count. When I use the below:

index=work status=failure | stats count by description | appendcols [search index=work tag=error | stats count by msg]

I see this result:

Description    Count    msg
               10       Account locked
Login failed   20

How can I get this?

Error             Count
Account Locked    10
Login failed      20
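appendcols pastes the two result sets together row by row rather than matching on values, which is why the columns end up misaligned. A sketch that merges the two fields into one before counting, assuming each event carries its error text in either description or msg:

```
index=work (status=failure OR tag=error)
| eval Error=coalesce(description, msg)
| stats count by Error
```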
Hi All, what I'm trying to do is to have a chart with time on the x-axis and percentages by ResponseStatus on the y-axis. To do that I came up with the below Splunk search query:

match some http requests
| fields _time, ResponseStatus, RequestName
| eval Date=strftime(_time, "%m/%d/%Y")
| eval ResponseStatus=if(isnull(ResponseStatus), 504, ResponseStatus)
| eventstats count as "totalCount" by Date
| eventstats count as "codeCount" by Date, ResponseStatus
| eval percent=round((codeCount/totalCount)*100)
| chart values(percent) by Date, ResponseStatus

But it is hitting the disk usage limit (500MB, which I can't increase) on a 10-day interval, and I'd like to be able to run this over a 3-4 month interval. What I have noticed is that if I only run the match part of the query, I get all the events without hitting any disk limit, which makes me think the problem is with the counting and group-by part of the query. My guess is that Splunk is keeping the full event message in memory (or trying to, and eventually swapping to disk) even though I specified the useful fields via the fields command. Is there any way to either effectively have Splunk ignore the remaining part of the message, or to obtain the same result via a different path? Thanks a lot!
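Since eventstats keeps every event in the pipeline, one way to stay under the disk quota is to collapse the data with stats first and compute the percentages on the aggregated rows (one row per day and status instead of one per event). A sketch under that assumption, with "match some http requests" standing in for your base search as in the query above:

```
match some http requests
| fields _time ResponseStatus
| eval ResponseStatus=if(isnull(ResponseStatus), 504, ResponseStatus)
| bin _time span=1d
| stats count as codeCount by _time ResponseStatus
| eventstats sum(codeCount) as totalCount by _time
| eval percent=round(codeCount/totalCount*100)
| xyseries _time ResponseStatus percent
```

Here the eventstats runs over at most a few rows per day, so the intermediate result set stays tiny even over several months.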
I have one user out of many that gets a red-triangle error on a dashboard panel inside an app that uses a subsearch, with the error:

[subsearch]: [name] Search process did not exit cleanly, exit code=255, description="exited with code 255". Please look in search.log for this peer in the Job Inspector for more info.

However, if I copy the dashboard to the default Search app, the user does not get the error anymore. Any thoughts on why the behavior changes depending on which app the dashboard is in? I checked search.log for error messages but there were none.
I have created a Windows-level brute-force attack alert to alert me when X number of authentication failures occur in a 15-minute interval. Sometimes I see an alert where the user is the same as a hostname but ends in a $ sign. I have searched the Windows documentation, but it is not completely clear to me. Could someone give me their opinion on what it is, and whether it is relevant or a false positive? Thanks.
Hi Team, I need to print two values from an index with different earliest values. Please see the example below:

index=abcd cust_name="*" earliest=-30d@d
| fields cust_name, origUserid, destUserid, finalCalledPartyUnicodeLoginUserID, callingPartyUnicodeLoginUserID
| eval id=coalesce(origUserid, destUserid, finalCalledPartyUnicodeLoginUserID, callingPartyUnicodeLoginUserID)
| rename cust_name as Customer
| dedup id
| stats count(id) as MAU by Customer
| appendcols
    [ search index=abcd cust_name="*" earliest=-14d
    | fields cust_name, origUserid, destUserid, finalCalledPartyUnicodeLoginUserID, callingPartyUnicodeLoginUserID
    | eval id=coalesce(origUserid, destUserid, finalCalledPartyUnicodeLoginUserID, callingPartyUnicodeLoginUserID)
    | rename cust_name as Customer
    | dedup id
    | stats count(id) as DAU by Customer
    | eval DAU=round(DAU/14,2) ]

My problem is that because I am using two earliest values, my first search has a customer count of 57 and my second has 43, so when both values are printed, one column ends up with the wrong value.
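appendcols joins the two result sets by row position, so when the customer lists differ (57 vs 43 rows) everything after the first mismatch lands on the wrong customer. A sketch that computes both windows in a single search, assuming the 14-day window is a subset of the 30-day one:

```
index=abcd cust_name="*" earliest=-30d@d
| eval id=coalesce(origUserid, destUserid, finalCalledPartyUnicodeLoginUserID, callingPartyUnicodeLoginUserID)
| rename cust_name as Customer
| stats dc(id) as MAU,
        dc(eval(if(_time >= relative_time(now(), "-14d"), id, null()))) as recent_users by Customer
| eval DAU=round(recent_users/14, 2)
| table Customer MAU DAU
```

dc() also replaces the dedup + count(id) pair, since a distinct count of id per Customer is what the dedup was approximating.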
I have parts of a Windows .NET application that are installed and run as services under an account on Windows Server. I would like to monitor the number of threads these services are consuming. Is there a way to do that with Splunk? I have begun looking at the documentation for APM, but I am not convinced this is possible under Windows + .NET. Thoughts? Suggestions? Regards, Joel
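If a Windows Performance Monitor counter is acceptable, a universal forwarder on the server can collect the Process object's Thread Count counter without APM. A sketch of an inputs.conf stanza, where MyServiceProcess stands in for the actual process instance name as it appears in Performance Monitor:

```
# inputs.conf on the Windows universal forwarder
[perfmon://DotNetServiceThreads]
object = Process
counters = Thread Count
instances = MyServiceProcess
interval = 60
index = perfmon
```

Setting instances = * collects every process instead, which you can then filter at search time.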
I would like to determine how many times an app on a deployment server has been deployed. I'm not concerned with the host information; I'm trying to determine which apps are no longer being used and can be archived. I suspect it would come from the | rest command:

| rest splunk_server=127.0.0.1 /services/deployment/server/clients

I'm just not sure which fields to use or how to accurately calculate this. Looking for suggestions.
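One commonly suggested pattern is to untable the per-client application fields returned by the clients endpoint and count distinct clients per app. The exact field names vary by Splunk version, so applications.*.stateOnClient below is an assumption; inspect the raw output with | table * first and adjust accordingly:

```
| rest splunk_server=127.0.0.1 /services/deployment/server/clients
| table hostname applications.*.stateOnClient
| untable hostname application state
| eval application=replace(application, "^applications\.(.+)\.stateOnClient$", "\1")
| stats dc(hostname) as client_count by application
```

Apps with a client_count of zero (or absent from the result) are candidates for archiving.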
Hi there, good day. Is there an SPL-based way to look for UF connectivity on port 9997 to non-Splunk destinations? And is there any documentation on which IP addresses Splunk Cloud uses, or whether Splunk has an allocated IP address range? Thanks in advance for the help and support.
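For the first part, the forwarders' own metrics are one place to look: metrics.log records outbound tcpout connections, including the destination IP and port. A sketch, assuming your forwarders send their _internal logs to the indexers and with the NOT IN list replaced by your known-good indexer IPs:

```
index=_internal source=*metrics.log* group=tcpout_connections
| search destPort=9997 NOT destIp IN ("10.0.0.1", "10.0.0.2")
| stats count by host destIp destPort
```

Any rows returned would be forwarders sending 9997 traffic somewhere other than your own indexers.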
Hello, I went through some of the online resources to get a clear idea of what protocols the Splunk UF/HF use to send data/events to the Splunk indexer, but I couldn't find a clear answer. Any help/info on what protocol the UF/HF uses would be highly appreciated. Thank you so much.
Hi, I have Splunk showing a "Waiting for queued job to start" error for a particular user; however, no jobs are queued for that user in the Job Monitor. The role can run 50 concurrent searches, and only this user is affected. Also, no jobs are in the Queued, Parsing, Finalizing, or Finalized states. Any advice? Thanks
Hi folks, hoping you might be able to help. I have some raw logs coming in, and one of the extracted fields is a field with all the system information I need, e.g.:

Remote Desktop Protocol:
OS: Windows 10/Windows Server
OS Build: 10.0.18362
Target Name: NACORP
NetBIOS Domain Name: NACORP
NetBIOS Computer Name: 1234ABC
DNS Domain Name: na.corp.xxxxx.com
System Time: 2021-12-12 22:55:40.534959
Authorized Use Only - v 041001 This system is for the use of authorized users only.

How do I extract specific pieces from it? I need NetBIOS Domain Name, NetBIOS Computer Name, and System Time. I thought of using a regex, but I'm not sure how to build it, since NetBIOS Domain Name values can vary: one will be NACORP, another USON, and another NACORP.LOCAL. It's similar with the computer name; there's no unified naming convention, as it depends on the country we get the logs from. Some can have a "-" in the middle, some will have a ".", etc. Any hints and tips on how to tackle it will be more than appreciated!
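Since each value sits after a fixed label, anchoring on the labels avoids enumerating the naming variations, and matching everything up to the end of the line tolerates dots and hyphens in the values. A sketch, assuming the multi-line block lives in a field called system_info (swap in your actual field name, or drop field= to run against _raw):

```
... | rex field=system_info "NetBIOS Domain Name:\s+(?<netbios_domain>[^\r\n]+)"
| rex field=system_info "NetBIOS Computer Name:\s+(?<netbios_computer>[^\r\n]+)"
| rex field=system_info "System Time:\s+(?<system_time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)"
| table netbios_domain netbios_computer system_time
```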
I upgraded Splunk Enterprise from 8.0.6 to 8.1.8. I am now getting messages about going over the indexing limit (45 days allowed over 60 days). Looking at the indexing, the largest amounts are from internal Splunk. I have a single instance. The first three indexes are internal Splunk, the largest source is the Splunk metrics log, and the sourcetypes splunk_metrics_log and splunkd are a major portion of the indexed data. My question is: why are the internal Splunk processes counting towards my indexing? Regards, Scott Runyon
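Internal indexes (_internal, _audit, _introspection) normally do not count against the license, so it is worth checking what license_usage.log says is actually being charged rather than what the indexing-volume dashboards show. A sketch for verifying, run on the license master:

```
index=_internal source=*license_usage.log* type=Usage
| stats sum(b) as bytes by idx
| eval GB=round(bytes/1024/1024/1024, 3)
| sort - GB
```

If the internal indexes do appear here with significant volume, something is rerouting that data into a licensed index.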
Hi everyone. I have three charts in a panel in a Simple XML dashboard and I'm trying to programmatically (i.e., with tokens) sync the maximum value of the Y axis. The idea is that the value, determined by the maximum value of all three charts, is used in all three charts. Any clues? I tried setting a token in each of the three searches and then another token as max of these tokens. The resulting token was used to set the maxY value, but it doesn't work. This is the token initialization:     <init> <set token="max_y_version">0</set> <set token="max_y_version1">0</set> <set token="max_y_version2">0</set> <set token="max_y_version3">0</set> </init>       And this is an example chart (I have three of these):     <chart> <search> <done> <eval token="max_y_version1">$result.max_count$</eval> <eval token="max_y_version">max($max_y_version1$, $max_y_version2$, $max_y_version3$)</eval> </done> <query>(some query which creates results containing max_count in the first row...)</query> </search> <option name="charting.axisY.maximumNumber">$max_y_version$</option> <option name="charting.axisTitleX.text">Date</option> <option name="charting.axisTitleX.visibility">collapsed</option> <option name="charting.chart">column</option> <option name="charting.chart.stackMode">stacked</option> <option name="charting.drilldown">none</option> <option name="charting.legend.placement">top</option> <option name="refresh.display">progressbar</option> </chart>       Any clues? Thanks.
Hello everyone, I need a little help. On my search head splunkSRCH637T7.local, if I search for any logs by sourcetype, no logs are generated. The health status of splunkd is TailReader. Root cause: the monitor input cannot produce data because splunkd's processing queues are full. This will be caused by an inadequate indexing or forwarding rate, or a sudden burst of incoming data. Could anyone please help? Please see my screenshot.
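A first diagnostic step that is often suggested is to chart queue fill ratios from metrics.log to see which queue is backing up first (the one furthest downstream that is full usually points to the bottleneck). A sketch, assuming the instance's _internal index is still searchable:

```
index=_internal source=*metrics.log* group=queue
  (name=parsingqueue OR name=aggqueue OR name=typingqueue OR name=indexqueue)
| eval fill_pct = round(current_size_kb / max_size_kb * 100, 1)
| timechart span=5m perc90(fill_pct) by name
```

If indexqueue sits near 100%, the indexing tier (or disk I/O) is the constraint; if only parsingqueue is full, look at parsing-time configuration such as line breaking and timestamp extraction.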