All Posts


My Splunk installation can't read files on a Windows host from a specific folder on the C:\ drive. Logs are collected from another folder without problems. There are no errors in the _internal index, the stanza in inputs.conf looks standard, and the monitor and the folder path are specified correctly. The permissions on the folder and files are SYSTEM, the same as on other files that we can collect. What could be the problem?
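For comparison, a typical monitor stanza for a Windows folder looks something like this (the path, index, and sourcetype below are placeholders, not the actual config from this post):

```
# inputs.conf on the forwarder (hypothetical path/index/sourcetype)
[monitor://C:\MyApp\Logs]
index = my_index
sourcetype = my_sourcetype
disabled = 0
```

You can also run `splunk list inputstatus` on the forwarder to see how the tailing processor classifies each file in the folder - it usually reports a reason when a file is being skipped.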
Hello @BRFZ, when was the last reboot on this search head? It looks like it's hung. I encourage you to reach out to support if this doesn't get resolved.
Hello @msteffl. Are you using scripted authentication? Do you also see any warnings like this in splunkd.log?

WARN AuthorizationManager [34567 TcpChannelThread] - Unknown role 'ldap_user'

If you also see the "unknown role" error message, it might be that the AD group to Splunk role mapping is failing because Splunk can't find a role definition for "ldap_user". Take a look at authorize.conf.

To troubleshoot this issue you will need to turn on debug for SAML on the SH and have the user try to log in again. Once they have done that, you can run the following to see if any roles are being returned for the user:

index=_internal sourcetype=splunkd samlp:response

Docs:
https://docs.splunk.com/Documentation/Splunk/9.2.0/Security/ConfigureSSOinSplunkWeb
https://docs.splunk.com/Documentation/Splunk/9.3.1/Security/Mapgroupstoroles
https://docs.splunk.com/Documentation/SplunkCloud/latest/Security/ConfigureauthextensionsforSAMLtokens#Configure_authentication_extensions_for_Microsoft_Azure_using_Splunk_Web

Hope this helps.
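If the role really is missing, a minimal fix would be to define the role and map the IdP group to it. A sketch of what that might look like (the role and group names here are placeholders - use whatever your IdP actually sends):

```
# authorize.conf - define the Splunk role the warning complains about
[role_ldap_user]
importRoles = user

# authentication.conf - map the SAML/AD group to that Splunk role
[roleMap_SAML]
ldap_user = My-AD-Group-Name
```

After changing these, a restart (or at least a reload of the auth configuration) is typically needed before the mapping takes effect.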
Hi, I encountered an issue where my indexer disconnected from the search head (SH), and similarly, the SH and indexer1 disconnected from the deployment server and license master. I keep receiving the following error message:

Error [00000010] Instance name "A.A.A.A:PORT" Search head's authentication credentials rejected by peer. Try re-adding the peer. Last Connect Time: 2024-10-14T16:23:23.000+02:00; Failed 5 out of 5 times.

I've tried re-adding the peer but the issue persists. Does anyone have suggestions on how to resolve this? Thanks in advance!
Edit the Splunk systemd service unit file and edit/add the following line under [Service]:

AmbientCapabilities=CAP_DAC_READ_SEARCH CAP_NET_ADMIN CAP_NET_RAW
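A sketch of how that edit is usually applied with a systemd override, so it survives package upgrades (the unit name may differ on your system, e.g. Splunkd.service):

```
# Run: systemctl edit Splunkd.service
# This opens an override file; add the following:
[Service]
AmbientCapabilities=CAP_DAC_READ_SEARCH CAP_NET_ADMIN CAP_NET_RAW

# Then apply the change:
#   systemctl daemon-reload
#   systemctl restart Splunkd.service
```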
Well, if your Splunk needs 13 seconds to scan just 14 thousand events... that looks weird. But if you have big events (like 100K-big jsons), considering the wildcard at the beginning of your search term, the initial search might indeed be slow. So that's the first and probably the most important optimization you can do - if you can drop the wildcard at the beginning of *1000383334*, it will save you a lot of time. Notice that Splunk had to scan over 14k events just to match two of them. That's because it can't use indexed terms; it has to scan every single raw event.

Since you're extracting Ordernumber (and relying on it being non-empty by including it in the BY clause for stats) using [EXT] as an anchor for your regex, the [EXT] part must obviously be in your event. So if it's only in part of the events, you can use it as an additional search term (square brackets are major breakers, so you can just add EXT to your search terms).

The inputlookup and join @ITWhisperer already covered.

Dedup should _not_ be using much in the way of resources. As you can see from your job inspection table, it gets just 10 results on input and returns 2. That's not a huge amount. The main problem here is the initial search.

Also, if your events are big, you can drop _raw early on so you don't drag it along the pipeline (you only use a few fields in your stats anyway).
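Putting those suggestions together, the reworked search might look roughly like this - the index, sourcetype, and field list are guesses based on this thread, so adjust to your actual data:

```
index=my_index sourcetype=my_sourcetype 1000383334 EXT
| rex "(?<Ordernumber>\d+)\[EXT\]"
| fields area aisle section movement_category movement_type Ordernumber
| stats dc(Ordernumber) AS orders by area aisle section movement_category movement_type
```

The key points: the literal term and EXT narrow the initial scan via indexed terms, and the `fields` command drops _raw early so the big events aren't carried through the rest of the pipeline.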
We have no idea what data you uploaded and how. I assume you used the web UI and went through the "Add Data" dialog, but we have no idea what sourcetype(s) you used, whether you had proper timestamp recognition, and so on. We also have no knowledge of how you are searching for that data. So the only answer we can give you is "search for your data properly". But seriously - you're giving us the equivalent of "I bought a computer, I did something with it and now it doesn't do what I want".
This worked for me. I had a Data Durability / Data Searchable alert after the upgrade to 9.3.0 on the cluster master. Thanks!
Hello @tchimento_splun @rjteh_splunk, it looks like this bug is still happening in 9.3.0.
I have created an index to store my data in Splunk. The data consists of 5 csv files, uploaded one by one into the index. Now, if I try to show the data inside the index, it only shows the latest data (the csv file that was uploaded last). We can show the data of the other files by querying with specific source names, but by default we cannot see the whole data set; we can only see the data of the last table. To overcome this we have used joins to combine all the tables and show them through the query in one report. I wanted to find out if there is a better way to do this. I have to show this data in Power BI, and for that I need a complete report of the data.
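As a side note on the join approach described above: if all five files went into the same index, a single search constrained only on the index should normally return events from all of the sources at once, assuming the time range picker covers the timestamps of all five files. A sketch, with made-up index and source names:

```
index=my_csv_index source IN ("file1.csv", "file2.csv", "file3.csv", "file4.csv", "file5.csv")
| table *
```

If only the last file shows up with this kind of search, the time range is the usual suspect - CSV rows get timestamped from their contents (or from upload time), so older files may fall outside the selected window.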
| timechart sum(count) as total span=1h
| timewrap 1w
| where strftime(_time,"%a") = strftime(now(),"%a")
| eval hour=strftime(_time,"%H")
| fields - _time
| table hour *
It sounds like it is working, just not with the results you expect? Search has an implied AND so perhaps you need an explicit OR? | search node="$form.tokenNode$" OR outcome="$form.tokenSwitch$"
Hi,

hour 0 for count1 is the total of all the counts for 00:00 to 00:59 for the current day (Monday) in the current week.
hour 0 for count2 is the total of all the counts for 00:00 to 00:59 for the current day (Monday) in the previous week.
hour 0 for count3 is the total of all the counts for 00:00 to 00:59 for the current day (Monday) in the current week - 2.

So, on the X axis we have hours 0-23 for the current day, and on the Y axis we have 3 lines:

count1: count for a particular hour of the day in the current week
count2: count for a particular hour of the day in the previous week
count3: count for a particular hour of the day in the current week - 2

The plan is to compare, when the current day is Monday: the count for the 8th hour of Monday with the same hour last Monday and the Monday before that; the count for the 9th hour of Monday with the same hour last Monday and the Monday before that; the count for the 10th hour of Monday with the same hour last Monday and the Monday before that; and so on up to hour 23.

We have fields like Current_day (e.g. Monday, Tuesday) and Current_Week (e.g. 41 or 40) extracted in the query.
So, just to be clear, count1 is the sum of the hourly counts for the current week, e.g. hour 0 for count1 is the total of all the counts for 00:00 to 00:59 for all the days (so far) in the current week, hour 0 for count2 is the total of all the counts for 00:00 to 00:59 for all the days in the previous week, etc.?
I'm sorry if I'm causing confusion, and I'm not sure if you would call this drilldown.

My requirement: I have two input fields (type - radio buttons). Depending on the values the user selects, I want filters to apply to two of the fields in the table, namely node and outcome. For this I've written these statements in the table query:

| search node="$form.tokenNode$" outcome="$form.tokenSwitch$"

Node (radio button with options): ABC, DEF, XYZ
Outcome (radio button with options): True, False, Both

If I use just one radio button field (let's say just node) it works, however when I add the second one it doesn't. Meaning, when I select a node value from the radio button, the table reloads and filters the data based on the node value selected. I want the same to work for the second radio button as well.

Here is what these radio button inputs look like:

<input type="radio" token="tokenNode" searchWhenChanged="true">
  <label>Node</label>
  <choice value="ABC">ABC</choice>
  <choice value="DEF">DEF</choice>
  <choice value="XYZ">XYZ</choice>
  <default>ABC</default>
  <initialValue>ABC</initialValue>
</input>
<input type="radio" token="tokenSwitch" searchWhenChanged="true">
  <label>Outcome</label>
  <choice value="True">True</choice>
  <choice value="False">False</choice>
  <choice value="Both">Both</choice>
  <default>True</default>
  <initialValue>True</initialValue>
</input>

Hope I'm able to explain it.
Yes, I want to do the hourly count (0-23) on the X axis.

X axis = hour of the day (stored in the field Time).
Y axis: 3 lines (count1, count2, count3).

Count1: corresponds to the count of records for the current week at a particular hour.
Count2: corresponds to the count of records for the current week - 1 at a particular hour.
Count3: corresponds to the count of records for the current week - 2 at a particular hour.

The result should look like the below:
Given that the number of orders is always 1 (as previously explained and shown in your screenshot), the dedup is not actually doing anything useful and can be removed. Removing it could affect the orders field in that it could end up greater than 1. That could be resolved either by evaluating it to 1 after the stats command, or by using a distinct count:

| stats dc(Ordernumber) AS orders by area aisle section movement_category movement_type Ordernumber _raw
This seems a bit confused - drilldown happens when the user clicks on a cell in the table. In your instance, this appears to set two tokens to the same value (based on where the user clicked). Your search also includes using the value of two input tokens. When either of these inputs is changed, the search will run again, using the new values of the tokens. This isn't drilldown. This is just how inputs and tokens work. Please can you try to give more concrete examples of what your events look like, what the rest of your dashboard looks like, what you would like to happen when the user interacts with your dashboard, etc.?
You will need to clarify what it is you are trying to do - do you want an hourly count i.e. the x-axis is 0-23? If so, what has weekly counts got to do with it? What are count1, count2 and count3 in this respect? What does your source data look like and what do you want your results to look like?
@ITWhisperer Thanks for your response. As per your suggestion, I will take care of the join and replace it with the lookup command.

I am adding screenshots of the results so that you can get a little more clarity. Below are the results from executing the above query. The order number is the same, but one entry is for "Storage" and the other is for "Retrieval".

Job inspection while executing the above query:

Do you have any suggestion for replacing dedup with a more optimized command?