All Posts



Hi @richgalloway,   Do you have any documentation that validates the possibility of the fishbucket's size being up to four times larger than the limit specified in limits.conf? Any official resources or explanations that could clarify why the fishbucket index might exceed the configured threshold by such a significant margin would be extremely helpful. Concerning TA-nmon: I've noticed that it monitors the server by generating new CSV files every minute and deleting the older ones. I suspect that this process could incrementally increase the size of the fishbucket, as it continuously logs the CRCs of newly created log files without removing the CRCs of the old, deleted logs. This seems to be evidenced by the _internal log errors about failed checksums when the log files no longer exist.
There is no relationship between id and duration. It's just that since there can be multiple ids, we need to only accept msg="consumed event" events where id="XYZ". When I run the below query I get msg="consumed event" for every event, regardless of what their id is.
Hi, Is it possible to add a deployer server to an existing search head cluster? The existing search heads are fresh and we have no problem with losing any configuration. By the way, the mentioned search head cluster is connected to a multisite indexer cluster and each search head's site attribute is set to "site0".
I need to run a Splunk search with the "transaction" command, and I have four pattern variations for the start of the transaction and two pattern variations for the end of it. I read the documentation and experimented, but I am still not sure how exactly I should do this. I am operating on complex, extensive data, so it's not immediately clear whether I am doing this correctly, and I need to get it right. I tried the following:
1. Wildcards in startswith and endswith: "endswith=...*..."
2. The syntax "endswith=... OR endswith=..." -- same for startswith
3. The syntax "endswith=... OR ..."
4. Regular expressions instead of wildcards: .* instead of *
Could you suggest the right way of doing this? Thank you!
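For what it's worth, startswith and endswith each take a single search expression, so the usual approach is to OR the pattern variations together inside one parenthesised expression rather than repeating the option. A minimal sketch, where the index, sourcetype, field names, and messages are all illustrative:

index=main sourcetype=app_logs
| transaction session_id
    startswith=(msg="start A" OR msg="start B" OR msg="start C" OR msg="start D")
    endswith=(msg="end A" OR msg="end B")

If the variations are easier to express as a regular expression, an eval expression with match() can be used instead, e.g. startswith=eval(match(msg, "^start (A|B|C|D)")).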
You need to give more detail about your data - what do you want to happen when there are multiple ids per event and you want to see averages for XYZ? In that example, what does duration mean when there are two customer ids? You can add an extra mvexpand line after the stats:
| stats count values(duration) as duration values(id) as id by event_id
| mvexpand id
Run the query without the last line and you will see what the results are before the average
One thing I forgot to mention is that event.data can be an array of objects, each with an id:
event: {
    data: [
        { id: "XYZ" },
        { id: "123" }
    ]
}
The query you provided does display the average duration; however, the filter for id="XYZ" doesn't seem to be applied, because I get events for everything.
Not sure how Dashboard Studio does it, but in Simple XML, when you select a value in a radio, the $label$ token is set to the label, so you can use a <change> block to assign that to a token - maybe there is something similar in DS.
You could externalise the versions to a lookup file and have the query get the versions from that lookup file, e.g. if you had a lookup file with

version,date_from
3.10.0-1160.92.1.el7.x86_64,2023-11-06

then the query could use a subsearch to get the latest version based on the date_from field in the lookup. As for how to update that automatically, it would depend on where your data is coming from. You could use the REST API to perform actions on the Splunk server.
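A minimal sketch of that subsearch approach, assuming the lookup is saved as versions.csv with the columns above (the outer search terms are illustrative):

index=os sourcetype=package_inventory
    [| inputlookup versions.csv
     | sort - date_from
     | head 1
     | fields version ]
| stats count by host

The subsearch returns only the version field of the most recent row, which is substituted into the outer search as version="...".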
Please try not to keep editing and deleting messages, as I just lost my reply as a result. So here again... Try this

source=abc type=Changed event_id="*" (msg="consumed event" OR (msg="finished processing event" AND duration>1))
``` Rename the consumed customer id ```
| rename event.data{}.id as id
``` Join start/finish events together for the event_id ```
| stats count values(duration) as duration values(id) as id by event_id
``` and filter only for the customer ```
| where id="XYZ" AND count=2
``` Get the average ```
| stats avg(duration) as duration

This assumes there is more than one sequence per customer. Will one customer have more than one pair of events for each event_id? This counts the events to make sure that there are two events per event_id, then filters for your customer, then gets the average over all events.
This example will calculate those ranks from the base data of Student+Score. It uses eventstats to build the collection of scores (stats list), then mvfind to find the position in the list, then calculates the rank.

| makeresults count=10
| fields - _time
| streamstats c as Score
| eval Student="Student ".(11 - Score)
| table Student Score
``` Above simulates your data ```
``` Generate list of scores and find position in results ```
| sort Score
| eventstats count list(Score) as Scores
| eval pos=mvfind(Scores, "^".Score."$")
``` Now calculate ranks ```
| eval Rank_Inc=round(pos/(count-1)*100, 0)
| eval Rank_Exc=round((pos+1)/(count+1)*100, 0)
| fields - Scores pos count

The part from | sort Score onwards is what you want.
So here are two examples of events:

Event #1
{
    source: api
    event_id: abcde
    msg: "consumed event"
    type: abc
    event: {
        data: {
            id: 12345
        }
    }
}

Event #2
{
    source: api
    event_id: abcde
    msg: "finished processing event"
    type: abc
    duration: 0.023456789
}

I first need to query for msg="consumed event" and msg="finished processing event" events that have the same event_id. I also need to only accept the ones where, for msg="consumed event", event.data.id has a specific value:

Query #1
source=api AND msg="consumed event"
| rename event.data{}.id AS id
| where id=12345

Query #2
source=api AND msg="finished processing event" AND duration>0
| stats avg(duration)

I need to join these queries into one where their event_id field is the same, and at the end calculate and display the average duration, which is a field only present in events where msg="finished processing event".
Can I please get some help?
Can I please get some expert help on this?
That sort of functionality is quite involved. The descriptions below are for Simple XML.

https://docs.splunk.com/Documentation/Splunk/latest/Viz/PanelreferenceforSimplifiedXML
https://docs.splunk.com/Documentation/Splunk/9.1.1/Viz/tokens
https://docs.splunk.com/Documentation/Splunk/9.1.1/SearchReference/Collect

In your dashboard you will need something like

<drilldown>
    <set token="clicked_url">$row.URL$</set>
    <set token="run_collect"></set>
</drilldown>

You will need a global search that does the collect; e.g. something like this will create a new row with your user/URL and collect that to your index. This search depends on the run_collect token being set, so it will only run when the user clicks the row. It unsets the token at the end, so it can run again.

<search depends="$run_collect$">
    <query>
        | makeresults
        | eval user=$env:user|s$
        | eval URL=$clicked_url|s$
        | collect index=your_index
    </query>
    <done>
        <unset token="run_collect"></unset>
    </done>
</search>

As to hiding the data on the dashboard, is this to be hidden forever or just in this version of the dashboard? Much will depend on that. You don't actually need to collect it to an index; you could write this information to a CSV lookup, which may be a better option if you want to hide this information all the time once clicked. In that case you could use that as a lookup to filter previously clicked information. Anyway, this is a fairly complex use case for a dashboard - it's certainly all possible, but it is a bit more involved than a basic dashboard. This is NOT applicable to Dashboard Studio - you mention both in your question tags. In principle the approach could be the same, but the technical implementation is not as above.
How to calculate percentrank in Splunk? I appreciate your help. Below is the expected result (Percentrank exc and Percentrank inc are Excel functions):

Student      Score   Percentrank exc   Percentrank inc
Student 1    10      91%               100%
Student 2    9       82%               89%
Student 3    8       73%               78%
Student 4    7       64%               67%
Student 5    6       55%               56%
Student 6    5       45%               44%
Student 7    4       36%               33%
Student 8    3       27%               22%
Student 9    2       18%               11%
Student 10   1       9%                0%
Thanks for the information. My dashboard has a field called "URL", e.g. AAA.com, BBB.com, etc. I only want to know which URL the user clicked. How can I run the collect command to write to an index? May I have some references or examples? Thank you. Finally, I just want it so that if the user clicks on a URL, say "AAA.com", then "AAA.com" disappears from the aforesaid dashboard.
Where do you want to log this? You can use the collect command to log the user, which is known through the $env:user$ token. When you click the drilldown you can set a new token that causes the search that runs the collect command to run and collect that information to an index you define.  
Add the following format statement to your XML

<format type="color" field="Status">
    <colorPalette type="expression">case(match(value, "^$$"), "#FF00FF")</colorPalette>
</format>

and change the field name to your field and the colour value (#FF00FF) to the one colour you want. (Note that $$ is how a literal $ is escaped in Simple XML, so "^$$" matches an empty value.)