All Posts


The solution becomes more obvious if I restate the problem like this: in addition to colors, you must have another field with four distinct values. Let's call the additional field group, and give it the values "a", "b", "c", and "d".

colors  group
blue    a
blue    a
red     a
yellow  b
red     b
blue    c
red     c
blue    c
red     d
red     d
green   d
green   d

When the data structure is clear, what you are asking is to:

1. Find values of colors that appear more than once within each group value.
2. Count how many distinct values of group there are for each of the duplicated values of colors.

Hence,

| stats count by colors group
| where count > 1
| stats dc(group) as duplicate_count by colors

Here is a data emulation you can play with and compare with real data:

| makeresults format=csv data="colors,group
blue,a
blue,a
red,a
yellow,b
red,b
blue,c
red,c
blue,c
red,d
red,d
green,d
green,d"
``` data emulation above ```

Stringing the two together, you get

colors  duplicate_count
blue    2
green   1
red     1
@gcusello @KendallW  I receive the logs via UDP on the heavy forwarder connected to the indexer. After setting the sourcetype to temp on the heavy forwarder (inputs), the sourcetype is overridden according to the host and a regular expression. Is it correct to extract timestamps in the heavy forwarder's props? No matter how many times I apply the settings you mentioned, it doesn't work.
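For reference, a minimal sketch of the setup described above; all stanza names, patterns, and time formats are illustrative, not from the thread. One caveat that may be the cause here: timestamp extraction runs in the parsing phase, before TRANSFORMS-based sourcetype overrides are applied in the typing phase, so TIME_* settings keyed to the overridden sourcetype never take effect; they need to sit under the original sourcetype (temp) or a host/source stanza.

# props.conf on the heavy forwarder
[temp]
TRANSFORMS-override_st = set_st_by_host
# TIME_* settings belong here, on the pre-override sourcetype
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25

# transforms.conf
[set_st_by_host]
SOURCE_KEY = MetaData:Host
REGEX = ^host::myhost.*
FORMAT = sourcetype::my_overridden_sourcetype
DEST_KEY = MetaData:Sourcetype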
When I do a splunkforwarder version upgrade to 9.x, it always fails with the error below:

Migration information is being logged to '/opt/splunkforwarder/var/log/splunk/migration.log.2024-03-25.18-09-26'
-- Error calling execve(): No such file or directory
Error launching command: No such file or directory

Per the discussion at https://community.splunk.com/t5/Installation/Upgrading-Universal-Forwarder-8-x-x-to-9-x-x-does-not-work/m-p/665668 , we have to enable the tty option on the Docker runtime to successfully bring up splunkforwarder 9.x. Indeed, I added the tty config to my docker compose file and it works, but I would say this is a bad workaround. Why does forwarder 9.x force a tty terminal env to run? Can we remove this restriction? In many cases we have to bring up a splunkforwarder instance from a background program rather than a terminal, and in some cases we have to use a process manager to control splunkforwarder start/resume. Anyway, can we remove the tty restriction for newer splunkforwarder 9.x, just like it was on 8.x and 7.x?
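For anyone hitting the same thing, the workaround described above looks roughly like this in a compose file (service name and image tag are illustrative):

services:
  splunkforwarder:
    image: splunk/universalforwarder:9.1
    tty: true   # 9.x fails with "Error calling execve()" when no tty is allocated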
Thank you for the answer! I tried specifying and applying all the regular expressions as you suggested, but it doesn't work. It's difficult...
Hello, Thank you for your answer. I already tried it but it doesn't work. I'll try it one more time!
Hi, I am using a Splunk dashboard with SimpleXML formatting. This is my current code for the dashboard (the query is masked; the structure is defined as it is):

<row>
  <panel>
    <html depends="$alwaysHideCSS$">
      <style>
        #table_ref_base{
          width:50% !important;
          float:left !important;
          height: 800px !important;
        }
        #table_ref_red{
          width:50% !important;
          float:right !important;
          height: 400px !important;
        }
        #table_ref_org{
          width:50% !important;
          float:right !important;
          height: 400px !important;
        }
      </style>
    </html>
  </panel>
</row>
<row>
  <panel id="table_ref_base">
    <table>
      <title>Signals from Week $tk_chosen_start_wk$ ~ Week $tk_chosen_end_wk$</title>
      <search id="search_ref_base">
        <query></query>
        <earliest>$tk_search_start_week$</earliest>
        <latest>$tk_search_end_week$</latest>
      </search>
      <option name="count">30</option>
      <option name="dataOverlayMode">none</option>
      <option name="drilldown">row</option>
      <option name="percentagesRow">false</option>
      <option name="totalsRow">false</option>
      <option name="wrap">true</option>
    </table>
  </panel>
  <panel id="table_ref_red">
    <table>
      <title>🔴 (Red) - Critical/Severe Detected (Division_HQ/PG2/Criteria/Value)</title>
      <search base="search_ref_base">
        <query></query>
      </search>
      <option name="count">5</option>
      <option name="refresh.display">progressbar</option>
    </table>
  </panel>
  <panel id="table_ref_org">
    <table>
      <title>🟠 (Orange) - High/Warning Detected (Division_HQ/PG2/Criteria/Value)</title>
      <search base="search_ref_base">
        <query></query>
      </search>
      <option name="count">5</option>
      <option name="refresh.display">progressbar</option>
    </table>
  </panel>
</row>

I thought defining 800px on the left panel and 400px on both right panels would produce the preferred layout (one tall table on the left, two stacked tables on the right), but my dashboard renders differently, and as you can see from the screenshot it also leaves needless white space below.

Thanks for your help!

Sincerely,
Chung
Thank you. I have added double quotes in my lookup for the FailureMsg field. Could you please help with how we can write the lookup query to search for FailureMsg in _raw?
Hi. You should use " around your field value. Otherwise Splunk thinks that your value is a field name. r. Ismo
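For example (field and values are illustrative):

| where FailureMsg="Connection refused"  ``` quoted: compared against the string value ```
| where FailureMsg=Connection_refused    ``` unquoted: treated as a field named Connection_refused ```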
As already mentioned, all those numbers go from 0 to something, not 1 to something!
7 is a non-standard day number.  Try 0 6,12,20,22 * * 0,6
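For reference, the fields in that expression (standard five-field cron, where day-of-week runs 0-6 with 0 = Sunday):

0 6,12,20,22 * * 0,6
# minute=0; hours 6, 12, 20, 22; every day of month; every month; day-of-week 0 (Sun) and 6 (Sat)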
Another update: my csv lookup in this example has only 2 rows, but it could have many more. Also, I am not planning to use the other fields Product and Feature; I just need FailureMsg.
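One common pattern for this (a hedged sketch; the index and lookup names are placeholders) is to feed the lookup values into the base search as a subsearch, so each FailureMsg value is matched against _raw as a quoted term:

index=my_index
    [| inputlookup my_lookup.csv
     | fields FailureMsg
     | rename FailureMsg AS search
     | format ]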
Based on the docs this should work. BUT in the example section those platform selections are done at the app level, not the serverclass level. Maybe you should try that? Btw, have you configured this via the GUI or manually with a text editor?
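For illustration, an app-level platform filter in serverclass.conf might look like this (class, app, and filter values are placeholders):

[serverClass:my_class:app:my_app]
machineTypesFilter = linux-x86_64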
This statement

| eval IP_ADDRESS=if(index=index1, interfaces.address, PRIMARY_IP_ADDRESS)

will need single quotes around interfaces.address, as eval statements need fields with non-simple characters to be single-quoted, in this case because of the full stop (.):

| eval IP_ADDRESS=if(index=index1, 'interfaces.address', PRIMARY_IP_ADDRESS)

Note also that index=index1 would need to be index="index1", as you are looking for the value of index to be the string index1 rather than comparing the field index to a field index1. As for debugging queries, if you just remove the 'where' clause, you can see what you are getting and what the value of the indexes is. Hope this helps
Unfortunately, I don't know if you can make a <select> input take custom values; that's more of an HTML question if you are doing it inside the html panels. I am guessing you can probably write some JS to make this work, but it's a guess.
Yes @naveenalagu, you are right re count=1. In this type of solution you normally set an indicator in each part of the search (outer + append), as @ITWhisperer has shown, and then the final stats does the evaluation to work out where the data came from.
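A rough sketch of that indicator pattern (all names are illustrative):

index=foo
| eval part="outer"
| append
    [ search index=bar
    | eval part="appended" ]
| stats values(part) AS part BY key_field
| eval in_both=if(mvcount(part)=2, "yes", "no")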
Would it fit your use case to set inputs.conf and outputs.conf such that the UF forwards the same logs to two different indexer servers, and those indexer servers have different props.conf, one masking the fields and one not? It seems like props.conf on the UF won't solve your problem.
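On the UF, that could look roughly like this (group names and servers are placeholders); listing both groups in defaultGroup clones the data to each destination:

# outputs.conf on the universal forwarder
[tcpout]
defaultGroup = masked_indexers, unmasked_indexers

[tcpout:masked_indexers]
server = idx-masked.example.com:9997

[tcpout:unmasked_indexers]
server = idx-unmasked.example.com:9997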
For people finding this question in the years after 2016: you can set the max_upload_size setting in web.conf.

[settings]
# set to the max in MB
max_upload_size = 500
# you can also set a larger splunkdConnectionTimeout value so it won't time out when uploading
splunkdConnectionTimeout = 600

ref: https://docs.splunk.com/Documentation/Splunk/9.2.0/Admin/Webconf
If you know which error to look for, or can make a good guess that it includes the word "ingestion", then you could search in the internal logs: index=_internal log_level=error ingestion You could also make a "maintenance alert" which looks for a drop in logs for an index, source, sourcetype, or some other field. If you expect logs at a certain time but there are zero, then it could be because of a log ingestion error.
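The "maintenance alert" idea could be sketched like this (index and window are illustrative); tstats returns count=0 when nothing arrived, so set the alert to trigger on any result:

| tstats count where index=my_index earliest=-1h latest=now
| where count = 0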
We are trying to set up a cron schedule on an alert to run only on weekends (Sat and Sun) at 6am, 12pm, 8pm, 10pm. I tried giving the cron below in Splunk, but it says invalid cron. Can anyone help with this?
Sorry for the late reply... Just started back working on this. For anyone who is curious, the answer was that the port we were using had fewer attributes.