All Topics



Hi, I am looking for help with a match condition. Here is my requirement: <condition match="&quot;boilerrole&quot;== IN('$result.roles$')"> <set token="boiler">true</set> <unset token="turbine"></unset> </condition> If "boilerrole" matches any of the values in result.roles (result.roles contains multiple roles), my "boiler" token should be set to true. Is that possible? The condition above did not work for me. Can anyone help?
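A condition like this is usually written with the match() eval function rather than IN, since $result.roles$ arrives in the condition as a single string; a minimal Simple XML sketch, keeping the token names from the question (everything else is illustrative, not confirmed against this dashboard):

```xml
<condition match="match('$result.roles$', &quot;boilerrole&quot;)">
  <set token="boiler">true</set>
  <unset token="turbine"></unset>
</condition>
```

match() performs a regex match against the string form of the field, so it fires when boilerrole appears anywhere among the returned roles.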
It configures the timestamp to be the date when I uploaded the file. I want the timestamp to be taken from the event itself, like the highlighted one. How can I do that?
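When events carry their own timestamps, the usual fix is to tell Splunk where and how to read them in props.conf for the sourcetype; a hedged sketch (the stanza name and format string are placeholders, since the actual event layout is not shown in the question):

```ini
[my_upload_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25
```

The TIME_FORMAT must match the timestamp as it appears in the raw events; otherwise Splunk falls back to the file modification or upload time.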
If I run the following command on an indexer after stopping Splunk, and my terminal session times out after a few hours but before the process finishes, does the repair process continue to run or will it stop? If it continues to run, will the result be saved in some log file? If yes, which one?     splunk fsck repair --all-buckets-all-indexes
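If the command was started in the foreground, a dropped SSH session normally sends SIGHUP to it and ends it, so a common precaution is to detach the process and capture its output yourself; a sketch (the output path is illustrative):

```
# run the repair detached so a terminal timeout cannot kill it,
# and redirect its output to a file for later review
nohup $SPLUNK_HOME/bin/splunk fsck repair --all-buckets-all-indexes \
      > /tmp/fsck_repair.out 2>&1 &
```

Running it under screen or tmux achieves the same effect.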
I am trying to figure out why my Y-axis values are not showing. I've tried the configuration below, but still no luck.  "charting.axisTitleY.visibility": "visible", "charting.axisLabelsY.axisVisibility": "show", "charting.axisLabelsY.integerUnits" : "true", "charting.axisY.fields" : "true",
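Some of those option names don't match the documented charting options; the ones for axis title and major label visibility are along these lines (a Simple XML sketch, worth cross-checking against the chart configuration reference):

```xml
<option name="charting.axisTitleY.visibility">visible</option>
<option name="charting.axisLabelsY.majorLabelVisibility">show</option>
```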
Hi Team, how do I write the time format for 2021-07-30T03:22:00.0000000Z? The one below is not working: %Y-%m-%dT%H:%M:%S.9N
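The subsecond specifier needs a percent sign plus a width digit (%7N for seven subsecond digits), and the trailing Z can be matched literally; a props.conf sketch (the stanza name is a placeholder):

```ini
[my_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%7NZ
MAX_TIMESTAMP_LOOKAHEAD = 35
```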
Hi, all!  I want to customize the search that runs when I click an element, but I don't know how to write the search using values from the clicked element, the way the Auto option does.  For example, here's my dashboard, and I would like it so that when I click on one of the Call_Session_ID values, it jumps to a search that combines the four different times with the same Call_Session_ID. Thanks a lot for your help!
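In Simple XML this is usually done with a <drilldown> element on the table, where the $row.Call_Session_ID$ token carries the clicked value into the target search; a sketch (the index and the body of the target search are placeholders):

```xml
<drilldown>
  <link target="_blank">search?q=search index=my_index Call_Session_ID="$row.Call_Session_ID$"&amp;earliest=$earliest$&amp;latest=$latest$</link>
</drilldown>
```

For a chart, $click.value$ and $click.name2$ play the analogous role.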
Hi Team, A former team configured the add-on for Active Directory, and it has not been working for at least a few months now. The dashboards now display the error below, or "search auto-canceled". External search command 'ldapsearch' returned error code 1. Script output = "error_message=socket connection error while opening: [Errno 111] Connection refused ". Can you explain what this error means and what we can try in order to resolve it?   Thanks, Mark.
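Errno 111 (connection refused) means the TCP connection from the search head to the configured domain controller was actively refused: nothing is listening on that host/port, the host has changed, or a firewall is rejecting the connection. A quick check from the search head (the hostname is a placeholder; use port 636 if the add-on is configured for LDAPS):

```
nc -vz dc01.example.com 389
```

If that fails, the fix is on the network/AD side or in the add-on's configured server, not in Splunk search.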
I have a table like the one below from my Splunk query:

Request1_tps  Request1_avg  Request1_p95  Request1_p90  Request2_tps  Request2_avg  Request2_p95  Request2_p90
10            1             1.2           1.1           20            2             2.2           2.1

I need to convert it to the format below. Can you provide the search criteria for this? Thanks.

API       tps  avg  p95  p90
Request1  10   1    1.2  1.1
Request2  20   2    2.2  2.1
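One way to reshape a single wide row like this is to transpose it, split each field name into its API and statistic parts, and pivot the statistic names back into columns; a sketch to append to the existing search (it assumes every field name follows the <API>_<stat> pattern shown):

```
| transpose 0
| rename column AS metric, "row 1" AS value
| rex field=metric "^(?<API>.+)_(?<stat>[^_]+)$"
| eval {stat}=value
| stats values(tps) AS tps values(avg) AS avg values(p95) AS p95 values(p90) AS p90 by API
```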
We set up an Azure Event Hub to send logs to Splunk. We installed the Microsoft Cloud Services add-on. We created an Azure account and gave it a Reader role, with Azure Event Hubs Data Receiver assigned as well. We then linked that account in Splunk and created a new Event Hub input in Splunk Cloud v8.2.2112.1. In Azure, we can see the Event Hub is sending messages, but they're not being received in Splunk. First question: what are the roles necessary on the Azure account for this to work correctly? Second question: I see in the documentation and on other sites that people use the connection string, but our window asks for the FQDN. We tried both, and neither works. Is there a specific format we have to use for the Event Hub Namespace (FQDN)? Finally, is there a query we can run in Splunk to search for errors associated with the Event Hub input? I tried to search using the sourcetype from the Event Hub input, and no logs are returned.
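For the last question, modular-input errors generally land in _internal rather than in the destination index; a starting search (the source pattern is an assumption about how this add-on names its log files, so loosen it if nothing matches):

```
index=_internal source=*splunk_ta_microsoft*cloudservices* (ERROR OR WARN)
```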
How can I pull and modify the inputs.conf file on 2,000+ universal forwarders? Can I do this by running a script that I create in an app and deploy through the deployment server?
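With a deployment server you normally don't modify files in place: you put the desired inputs.conf inside a deployment app and map that app to the forwarders with a server class, and each UF replaces its copy of the app on its next phone-home. A serverclass.conf sketch (app and class names are illustrative):

```ini
[serverClass:all_ufs]
whitelist.0 = *

[serverClass:all_ufs:app:my_inputs_app]
restartSplunkd = true
stateOnClient = enabled
```

A scripted input could also edit files, but letting the deployment server own the app is the supported and far more maintainable route at this scale.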
I am setting _meta at the app level. Can I also set it in /system/local, or will one override the other?

For example:
/myapp/inputs: _meta = name::bill
/system/local/inputs: _meta = last::dave

So the indexer would then get both bill and dave?
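As I understand configuration precedence, for the same stanza etc/system/local outranks any app, and a setting such as _meta is replaced rather than merged, so only last::dave would reach the indexer. To send both pairs, put them in a single _meta setting; a sketch (the stanza is illustrative):

```ini
# one file, one _meta setting carrying both key::value pairs
[monitor:///var/log/foo.log]
_meta = name::bill last::dave
```

If the two _meta settings are in different stanzas, both apply to their own inputs and there is no conflict.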
I would like to write a query that looks for the word 'Disk' where the ##% value is above a certain percentage. I have the following, but it does not seem to work: (\N*Disk\D*)([0-9][0-9]|\d{2,})\% So from the example below, I should only be left with "Logging Disk Usage 85%". Example: CPU 99% / Logging Disk Usage 85% / /VAR log 87%
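A numeric threshold is much easier to apply after extraction than inside the regex itself; a sketch (the threshold of 80 is an example value):

```
... | rex "Disk\D*(?<pct>\d+)%"
    | where tonumber(pct) > 80
```

rex keeps only events containing "Disk" followed by a percentage once the where clause drops the rest, and tonumber guards against a string comparison.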
How do I stop my indexer from consuming logs from a universal forwarder that was decommissioned? We did not remove the UF from the host before we gave the device away, and it no longer connects to the deployment server. Can a null queue work? Host = frontwave, source=WinEventLog:Security, WinEventLog:Application, index=winder
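Yes, a nullQueue route on the indexer discards those events at parse time; a sketch using the host from the question:

```ini
# props.conf (on the indexer, or a heavy forwarder in the path)
[host::frontwave]
TRANSFORMS-drop_decom = drop_frontwave

# transforms.conf
[drop_frontwave]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue
```

Note the discarded events still consume network bandwidth up to the indexer; they just never reach the index.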
I have two searches, and I wanted to do some filtering before doing the multisearch. Is that not possible? My code looks something like below.    | multisearch [search index="XXX" | table Field1 Field2] [search index="YYY" | table Field11 Field22 |dedup Field11 Field22] |table Field1 Field2 Field11 Field22   For this I am getting the error message:    Error in 'multisearch' command: Multisearch subsearches might only contain purely streaming operations (subsearch 2 contains a non-streaming command).
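multisearch subsearches must be purely streaming; dedup is not, and table is best replaced with fields inside multisearch as well. A common rework keeps the subsearches streaming and dedups afterwards (keepempty=true so events from the first search, which lack Field11/Field22, are not dropped by the dedup):

```
| multisearch
    [ search index="XXX" | fields Field1 Field2 ]
    [ search index="YYY" | fields Field11 Field22 ]
| dedup Field11 Field22 keepempty=true
| table Field1 Field2 Field11 Field22
```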
Hi. For about a month, Splunk was receiving syslog messages and indexing the time sent to it into the _time field correctly (the received time 'just worked'), but then on Jan 11, all syslog messages from one network (172.16.2.x) had the timestamp ALWAYS BE EXACTLY Jan 11, 2022, 09:13. To be clear, all syslog messages from network b (172.18.100.x) continued working fine, with _time being set to the received timestamp, but ever since that date, syslog messages from just that one network are 'stacking up' on that exact timestamp. I started with the Linux servers themselves and found they are all running ntp and have the correct time, then ran tcpdump -vvi on Splunk to see the times coming in (while manually sending syslog messages), and they look good. I cannot figure out what is going on or where to even look next!
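A quick way to confirm how far the stamped time diverges from arrival time on just that network is to compare _time with _indextime; a diagnostic sketch (the index name is a placeholder):

```
index=my_syslog host=172.16.2.*
| eval lag_seconds=_indextime-_time
| stats min(lag_seconds) max(lag_seconds) count by host
```

A steadily growing lag on only those hosts points at timestamp extraction (props.conf for that sourcetype/host) or event breaking rather than at the senders.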
Hi. I am running a Splunk query from the CLI and would like to export the results as rawdata to a file.  When I specify a value in maxout, it honors that number and exports the correct number of events. However, I want all of the events, unlimited. So I set maxout to 0, per the documentation. When I do this, it exports nothing; the search just sits there forever, even for a quick and simple search.  Here is my query: splunk search "index=ldap earliest=01/24/2022:00:00:01 latest=01/25/2022:23:59:00" -output rawdata -maxout 0 > /mnt/splunk-backups/test/ldap-raw-test.log  I want all events output as rawdata to the specified file. Am I missing something? We are running Splunk Enterprise 8.1.4. Thanks in advance!
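If -maxout 0 keeps hanging, one workaround is the REST export endpoint, which streams every result with no maxout cap; a sketch (host and credentials are placeholders):

```
curl -k -u admin:changeme "https://localhost:8089/services/search/jobs/export" \
     --data-urlencode search='search index=ldap earliest=01/24/2022:00:00:01 latest=01/25/2022:23:59:00' \
     -d output_mode=raw > /mnt/splunk-backups/test/ldap-raw-test.log
```

Because export streams results as they are produced, it also avoids buffering the whole result set before writing.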
Hi, I have events in Splunk where two fields, description and msg, denote error messages. I tried renaming msg and description to the same name, but I am not getting a single count when I use the search below:

index=work status=failure | stats count by description | appendcols [search index=work tag=error | stats count by msg]

I see this result:

description   count   msg
              10      Account locked
Login failed  20

How can I get this instead?

Error           Count
Account locked  10
Login failed    20
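Rather than stitching two stats together with appendcols, both event sets can be counted in one pass by coalescing the two fields into a single one; a sketch (assuming both sets live in index=work, as in the question):

```
index=work (status=failure OR tag=error)
| eval Error=coalesce(description, msg)
| stats count AS Count by Error
```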
Hi All, What I'm trying to do is to have a chart with time on the x-axis and percentages by ResponseStatus on the y-axis.  To do that I came up with the Splunk search query below:     match some http requests | fields _time,ResponseStatus,RequestName | eval Date=strftime(_time, "%m/%d/%Y") | eval ResponseStatus=if(isnull(ResponseStatus), 504, ResponseStatus) | eventstats count as "totalCount" by Date | eventstats count as "codeCount" by Date,ResponseStatus | eval percent=round((codeCount/totalCount)*100) | chart values(percent) by Date,ResponseStatus      But it is hitting the disk usage limit (500 MB, which I can't increase) over a 10-day interval, and I'd like to be able to run this over a 3-4 month interval. What I have noticed is that if I only run the match part of the query, I get all the events without hitting any disk limit, which makes me think the problem is with the counting and group-by part of the query. My guess is that Splunk is making the computation by keeping in memory (or trying to, and eventually swapping to disk) the full event message, even though I specified the useful fields via the fields command.  Is there any way to either have Splunk effectively ignore the rest of the message, or to obtain the same result via a different path? Thanks a lot!
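eventstats keeps every event in the pipeline, which is what fills the dispatch directory; aggregating with stats first shrinks the data to one row per day and status before the percentages are computed. A sketch of the same logic in that order:

```
match some http requests
| eval ResponseStatus=coalesce(ResponseStatus, 504)
| bin _time span=1d
| stats count AS codeCount by _time ResponseStatus
| eventstats sum(codeCount) AS totalCount by _time
| eval percent=round(codeCount/totalCount*100)
| xyseries _time ResponseStatus percent
```

After the stats, the eventstats runs over a few rows per day instead of the full event stream, so the 500 MB limit should be out of reach even over months.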
I have one user out of many who gets a red triangle error on a dashboard panel, inside an app, that uses a subsearch, with the error:  [subsearch): [name] Search process did not exit cleanly, exit code=255, description="exited with code 255". Please look in search.log for this peer in the Job Inspector for more info. However, if I copy the dashboard to the default Search app, the user no longer gets the error. Any thoughts on why the behavior changes depending on the app the dashboard is in? I checked the search.log for error messages, but there were none.