All Posts

That is what we thought. We are looking for a better solution that avoids cloning the report, if possible.
I just solved the problem by having both pass4SymmKey and pass4SymmKey_Length under the clustering stanza, like below:

[clustering]
pass4SymmKey = some keys
pass4SymmKey_Length = 24

The key length must be at least 12. I made mine 24 and my keys longer than that.
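If you want to confirm which value Splunk actually resolves after the edit, one option is btool (a sketch, assuming a standard $SPLUNK_HOME install):

$SPLUNK_HOME/bin/splunk btool server list clustering --debug

The --debug flag shows which .conf file each setting comes from.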
Your license usage breaks down by index daily. Just check the DMC for the reports.
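If you prefer raw SPL over the DMC reports, a common sketch against the internal license usage log (run it where license_usage.log is searchable, typically the license master):

index=_internal source=*license_usage.log* type="Usage"
| eval GB=b/1024/1024/1024
| timechart span=1d sum(GB) as GB by idx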
What happened when you tried my solution?
Don't use stats. Just look for raw events. If you have them, the problem is probably in parsing. If you don't, investigate why they weren't ingested properly.
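Both checks as SPL sketches (the index name and time range are placeholders):

index=your_index earliest=-24h
| head 20

and, for likely parsing trouble, the splunkd internal log:

index=_internal sourcetype=splunkd log_level=ERROR component IN (DateParserVerbose, LineBreakingProcessor, AggregatorMiningProcessor)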
There isn't a search that can't be made uglier with foreach XD

| makeresults format=csv data="ServerName,UpgradeStatus
Server1,Completed
Server2,Completed
Server3,Completed
Server4,Completed
Server5,Completed
Server6,Completed
Server7,Pending
Server8,Pending
Server9,Pending
Server10,Processing
Server11,Processing"
| stats count by UpgradeStatus
| transpose 0 header_field=UpgradeStatus
| fields - column
| eval Total=0
| foreach * [ eval Total=Total+<<FIELD>> ]

As an alternative you can also use appendpipe:

| makeresults format=csv data="ServerName,UpgradeStatus
Server1,Completed
Server2,Completed
Server3,Completed
Server4,Completed
Server5,Completed
Server6,Completed
Server7,Pending
Server8,Pending
Server9,Pending
Server10,Processing
Server11,Processing"
| stats count by UpgradeStatus
| appendpipe [ stats sum(count) as count | eval UpgradeStatus="Total" ]
| transpose 0 header_field=UpgradeStatus
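For comparison, the addtotals version that the challenge rules out would replace the eval/foreach pair at the end of the first search (a sketch; addtotals just sums the numeric fields on the single transposed row):

| stats count by UpgradeStatus
| transpose 0 header_field=UpgradeStatus
| fields - column
| addtotals fieldname=Total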
Finally, after a lot of testing, I found a solution via transforms.conf:

[timestamp-fix]
INGEST_EVAL = _time=json_extract(_raw,"instant.epochSecond").".".json_extract(_raw,"instant.nanoOfSecond")

Furthermore, it turned out that regex is not allowed in the TIME_FORMAT field in props.conf.
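For completeness, an INGEST_EVAL transform only runs if it is referenced from props.conf; a minimal sketch, with a hypothetical sourcetype name:

[your:json:sourcetype]
TRANSFORMS-timestampfix = timestamp-fix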
Hi, the token element works well, but when "No" has been selected from the filter, nothing extra is added to the code. I was wondering how I can stop the graph from being split in two when "No" is selected.
I am having the same issue as you. Did you ever solve this problem? If so, what was the solution? @splukiee
Thanks @PaulPanther. This helps.
Depending on how you have "removed" the timewrap command, you could have a token which starts and ends a comment (```):

index=foo $comment$
    [| makeresults
     | fields - _time
     | addinfo
     | eval day=mvrange(0,2)
     | mvexpand day
     | eval earliest=relative_time(info_min_time,"-".day."d")
     | eval latest=relative_time(info_max_time,"-".day."d")
     | fields earliest latest] $comment$
| timechart span=1m sum(value) as value
| eval _time=_time $comment$
| timewrap 1d $comment$
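For reference, a sketch of the Simple XML input that could drive the $comment$ token (the label and layout are illustrative; a single space is used for the "Yes" value so the token stays defined but harmless):

<input type="dropdown" token="comment">
  <label>Wrap over previous days?</label>
  <choice value=" ">Yes</choice>
  <choice value="```">No</choice>
  <default> </default>
</input>

When "No" is selected the token expands to ``` on both sides of the subsearch and the timewrap, turning those sections into comments.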
| makeresults format=csv data="ServerName,UpgradeStatus
Server1,Completed
Server2,Completed
Server3,Completed
Server4,Completed
Server5,Completed
Server6,Completed
Server7,Pending
Server8,Pending
Server9,Pending
Server10,Processing
Server11,Processing"
| eventstats count as Total
| chart count by Total UpgradeStatus
Sometimes I set myself SPL conundrum challenges just to see how to solve them. I realised I couldn't do something I thought would be quite straightforward. For the dummy data below I want a single-row resultset which tells me how many events there are of each UpgradeStatus and how many events in total, i.e.

Total  Completed  Pending  Processing
11     6          3        2

I don't know in advance what the different values of UpgradeStatus might be, and I don't want to use addtotals (this is the challenge part). I came up with the solution below, which kinda "misuses" xyseries (which I'm strangely proud of). I feel like I'm missing a more straightforward solution, other than addtotals. Anyone up for the challenge? Dummy data and solution (misusing xyseries) follow:

| makeresults format=csv data="ServerName,UpgradeStatus
Server1,Completed
Server2,Completed
Server3,Completed
Server4,Completed
Server5,Completed
Server6,Completed
Server7,Pending
Server8,Pending
Server9,Pending
Server10,Processing
Server11,Processing"
| stats count by UpgradeStatus
| eventstats sum(count) as Total
| xyseries Total UpgradeStatus count
An extension of this: https://community.splunk.com/t5/Splunk-Search/Looking-at-yesterdays-data-but-need-to-filter-the-data-to-only/m-p/696758#M236798

I've created a dashboard on the above with an input that adds the timewrap line when the option selected is Yes, and nothing when the option selected is No.

The issue I am having is that when No is selected, the graph looks like the following when I select smaller time windows. Below I selected 4 hours, but how can I show only the last 4 hours and not the previous window?

Query is as follows:

index=foo
    [| makeresults
     | fields - _time
     | addinfo
     | eval day=mvrange(0,2)
     | mvexpand day
     | eval earliest=relative_time(info_min_time,"-".day."d")
     | eval latest=relative_time(info_max_time,"-".day."d")
     | fields earliest latest]
| timechart span=1m sum(value) as value
| eval _time=_time
You could try protecting access to the lookup by putting it in a KV store and accessing it through a custom command. This custom command would be in an app which is protected from "casual" users with particular roles and permissions. The custom command would return the matching word without disclosing the contents of the lookup. This is not a trivial solution, but it may at least go some way to meeting your requirement.
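A rough sketch of the plumbing, in case it helps (the command name, script filename, and argument are all hypothetical): commands.conf in the restricted app declares the command, and the analyst calls it without ever reading the lookup or collection directly.

[matchkeywords]
filename = matchkeywords.py
chunked = true

Usage from a search might then look like:

index=dlp sourcetype=dlp:events
| matchkeywords textfield=_raw

The script itself would query the KV store with suitably privileged credentials and return only the matched keyword.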
Hi @msalghamdi,
it isn't so immediate if you want to search on all the raw events; if instead you want to search on a predefined field, it's easier. In the second case you can use the lookup command, something like this:

<your_search>
| lookup your_lookup.csv your_key OUTPUT your_key AS found_key

In the other case, there was a solution from @somesoni2 to this same requirement of mine from around 10 years ago; it's very hard to remember.
Ciao.
Giuseppe
Thanks for the prompt response. Our risk team wants to provide a list of critical project keywords which will be stored as a lookup, and we'd search the DLP logs for any match in the lookup. However, they require that the analyst shouldn't have the ability to view the lookup, which means the analyst wouldn't know which keyword matched if the DLP captured more than one file/keyword in a single log. So we thought maybe there's a way we can highlight matched keywords in the search.
Dear MiniNenya,
According to your explanation, how did you calculate the "average amount of data ingested by each index"?
Sincerely,
Benny
We have developed an add-on to pull audit logs from Zabbix: https://splunkbase.splunk.com/app/5272

Check it out and let us know at splunk.support@dataelicit.com if you face any issues.
What type of integration are you looking for? Are you looking to get data from Splunk to Zabbix, from Zabbix to Splunk, or something else?