All Posts


Hello. The field in my search is "file_name", while the field in the lookup is called "phrase". I tried to use this, but it didn't work:

| lookup my_lookup.csv phrase OUTPUT file_name AS found_key
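A minimal sketch of the likely fix, assuming my_lookup.csv is an uploaded lookup file and found_key should end up holding the matched lookup value: use AS on the input side to map the event field onto the lookup field, since the lookup command matches on the lookup's own field names by default.

```spl
| lookup my_lookup.csv phrase AS file_name OUTPUT phrase AS found_key
```

Here "phrase AS file_name" tells Splunk to compare the event field file_name against the lookup column phrase; the OUTPUT clause then returns the matched phrase as found_key.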
I tend to just use the by clause, as the first mentioned field is used for the over field, but that's just a matter of style/preference. Since you can only specify two fields on a chart command, over and by is probably clearer. In this example, the eventstats is a way of providing a single value for the "over" field so that you get a single row of statistics.
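For reference, the pattern described above looks like this, using the UpgradeStatus example that appears later in this thread; eventstats adds the same Total value to every event, giving chart a single "over" value and therefore a single output row:

```spl
| eventstats count as Total
| chart count over Total by UpgradeStatus
```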
Compare the contents of "splunk/etc/apps/splunk-rolling-upgrade" against the manifest file. Some previous files may be left behind after the upgrade, triggering the Python 2 warning. If any have the same name, you may need to manually copy them from the extracted v9 Splunk package to a safe location.
Alternatively, you could just do a very ugly hack: name your field so that it is iterated over last and, knowing it will be added to itself, divide it in half.
Aha!  That's how you avoid the double-count.  I've encountered this before.  Will definitely put this on file.  Thank you.
Thank you for your feedback. We appreciate your engagement and aim to provide the best assistance possible based on the information shared in the community forum. The query provided was our best effort to address your question using the details you offered. Community members, including myself, volunteer our time and knowledge to help fellow Splunk users.

If you need more tailored assistance or have specific requirements not covered in the initial query, there are a couple of options:
- You can provide more detailed information about your data structure and exact requirements in the forum. This would help us refine the solution further.
- For more in-depth, real-time support where you can share your screen and get personalized guidance, Splunk offers OnDemand Services. These allow for "shoulder surfing" and can address your specific needs more directly.

Regarding the query itself, you can adapt the latter part to your specific environment, as mentioned by @yuanliu or me:

| stats count as total_servers, count(eval(status_field="YourCompletedStatus")) as completed_count, count(eval(status_field="YourPendingStatus")) as pending_count
| eval completed_percentage = round(completed_count / total_servers * 100, 0)
| eval pending_percentage = round(pending_count / total_servers * 100, 0)
| eval "Completed Servers" = completed_count . " (" . completed_percentage . "%)"
| eval "Pending Servers" = pending_count . " (" . pending_percentage . "%)"
| fields "Completed Servers", "Pending Servers"

Replace index, source, sourcetype, status_field, YourCompletedStatus, and YourPendingStatus with your specific values. This should work with your actual data structure. We're here to help, and we hope this guidance proves useful for your specific use case. Thanks.
Oh @ITWhisperer That is sweet. It helps me if I specify the over / by designation ... but hats off to you.

| eventstats count as Total
| chart count over Total by UpgradeStatus

Not looking forward to explaining to anyone how it works ... but work it does. (I'll add it to my collection of misusing a command to get a result. Got any others?)
Right you are. As usual, I forgot about it. Here's the fixed version:

| makeresults format=csv data="ServerName,UpgradeStatus
Server1,Completed
Server2,Completed
Server3,Completed
Server4,Completed
Server5,Completed
Server6,Completed
Server7,Pending
Server8,Pending
Server9,Pending
Server10,Processing
Server11,Processing"
| stats count by UpgradeStatus
| transpose 0 header_field=UpgradeStatus
| fields - column
| eval Total=0
| foreach * [ eval Total=Total+if("<<FIELD>>"=="Total",0,<<FIELD>>) ]
Hi @jroedel, good for you, see you next time! Ciao and happy splunking, Giuseppe. P.S.: Karma Points are appreciated.
Nice work @PickleRick! Am I missing something? I had previously tried with the delightful foreach command, but I can't get it to avoid double-counting the Total field. So I end up with the Total equalling 22, not the correct result of 11. I get the same thing with your solution as well.
That is what we thought. We are looking for a better solution to avoid cloning the report, if possible.
I just solved the problem by having both pass4SymmKey and pass4SymmKey_Length under the clustering stanza, like below:

[clustering]
pass4SymmKey = some keys
pass4SymmKey_Length = 24

You can make your key length at least 12. I made mine 24 and my keys longer to match.
Your license usage breaks down by index for daily usage. Just check the DMC for the reports.
What happened when you tried my solution?
Don't use stats. Just look for raw events. If you have them, the problem is probably in parsing. If you don't, investigate why they weren't ingested properly.
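As a sketch of that first check, with your_index and your_sourcetype as placeholders for your actual values: run the bare event search with no transforming commands and see whether anything comes back at all.

```spl
index=your_index sourcetype=your_sourcetype earliest=-15m
| head 10
```

If this returns events but your stats search does not, the fields the stats relies on are likely not being extracted; if it returns nothing, the problem is upstream in ingestion.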
There isn't a search that can't be made uglier with foreach XD

| makeresults format=csv data="ServerName,UpgradeStatus
Server1,Completed
Server2,Completed
Server3,Completed
Server4,Completed
Server5,Completed
Server6,Completed
Server7,Pending
Server8,Pending
Server9,Pending
Server10,Processing
Server11,Processing"
| stats count by UpgradeStatus
| transpose 0 header_field=UpgradeStatus
| fields - column
| eval Total=0
| foreach * [ eval Total=Total+<<FIELD>> ]

As an alternative, you can also use appendpipe:

| makeresults format=csv data="ServerName,UpgradeStatus
Server1,Completed
Server2,Completed
Server3,Completed
Server4,Completed
Server5,Completed
Server6,Completed
Server7,Pending
Server8,Pending
Server9,Pending
Server10,Processing
Server11,Processing"
| stats count by UpgradeStatus
| appendpipe [ stats sum(count) as count | eval UpgradeStatus="Total" ]
| transpose 0 header_field=UpgradeStatus
Finally, after a lot of testing, I found a solution via transforms.conf:

[timestamp-fix]
INGEST_EVAL = _time=json_extract(_raw,"instant.epochSecond").".".json_extract(_raw,"instant.nanoOfSecond")

Furthermore, it turned out that regex is not allowed in the TIME_FORMAT field in props.conf.
Hi, the token element works well, but when nothing has been selected from the filter, nothing extra is added to the code. I was wondering how I can stop the graph from being split in two when no selection is made.
I am having the same issue as you. Did you ever get to solve this problem? If so, what was the solution? @splukiee
Thanks @PaulPanther . This helps