All Posts


Hello @linaaabad! @MuS's solution should give you a good start. Please don't use "join"; instead use stats ... by as above. Refer to the documentation below.
https://lantern.splunk.com/Splunk_Platform/Product_Tips/Searching_and_Reporting/Writing_better_queries_in_Splunk_Search_Processing_Language
https://conf.splunk.com/watch/conf-online.html?search=PLA1528B#/
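For illustration, a hedged sketch of the join-to-stats rewrite those links describe (the index, sourcetypes, and common_id field below are hypothetical placeholders, not from this thread). Instead of

index=example sourcetype=typeA | join common_id [ search index=example sourcetype=typeB ]

prefer

index=example sourcetype=typeA OR sourcetype=typeB
| stats values(*) AS * BY common_id

The single stats pass avoids the subsearch row limits and memory cost that join carries.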
Hi there, without sample events this can be tricky, but since you provided the SPL and you join on UserAccountId, I assume this field is available in both sourcetypes. If this is the case, it would be as simple as

index=salesforce UserAccountId=* sourcetype="sfdc:user" OR ( sourcetype="sfdc:setupaudittrail" Action=suOrgAdminLogin )
| fields list of fields you want
| stats values(*) AS * by _time UserAccountId

Hope this helps ... cheers, MuS
Please add this at the end of your SPL:

| eval foo=0
| foreach max* [ eval foo='<<FIELD>>']
| fields - max*
| rename foo AS max
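To show where that lands, a hedged end-to-end sketch (the index, the capacityUsed field, and the span are placeholders; maxCapacitMachine and LPAR are taken from the thread):

index=your_index
| timechart span=1h max(capacityUsed) AS used max(maxCapacitMachine) AS max BY LPAR
| eval foo=0
| foreach max* [ eval foo='<<FIELD>>']
| fields - max*
| rename foo AS max

Because the per-LPAR max columns all carry the same value, collapsing them into a single max field leaves one line on the chart instead of three.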
Hey there! Have you tried executing this use case via no-code automation platforms? I know that Albato has an Integrator that can be used on the free plan. Furthermore, they have a library with several apps already available: https://albato.com/apps
OK, but max is a value that I get from the index, not a value that I assign. My problem is that the value I get from the index is the same for all 3 LPARs; I only want to display it one time.
Hi there, if your max value is static, you could do something like this:

index=_internal sourcetype=*
| timechart span=1h count by sourcetype
| eval max=10000000

and this will produce a single max line on the graph. Hope this helps ... cheers, MuS
I need to replace the variables in the rule_title field that is generated when using the `notable` macro. I was able to get this search to work, but it only works when I table the specific variable fields. Is there a way I can do that for all titles, regardless of the title and variable fields?
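Not a confirmed answer, but one hedged approach that avoids having to table the specific variable fields: walk every field in the row with foreach and substitute its $token$ in rule_title (this assumes the tokens in rule_title look like $fieldname$ and that the variable fields are still present at this point in the search):

| foreach * [ eval rule_title = if(isnull('<<FIELD>>'), rule_title, replace(rule_title, "\$<<FIELD>>\$", tostring('<<FIELD>>'))) ]

Field names containing regex-special characters, or multivalue fields, would need extra handling.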
Usually (as always, it's a general rule of thumb; impossible to say without detailed knowledge of your environment and data; YMMV and all the standard disclaimers) fiddling with search concurrency is not the way to go. You can't get more computing power to run your searches than you have raw performance in your hardware. So even if you raise the concurrency, Splunk will be able to spawn more search processes, but they will starve each other of resources because there's only so much iron underneath to use. So check what is eating up your resources, disable unneeded searches, optimize the needed ones, teach your users to write effective searches, and so on.
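As a hedged starting point for the "check what is eating up your resources" part, something along these lines against the scheduler logs (these are standard _internal scheduler fields, but adjust to your environment):

index=_internal sourcetype=scheduler
| stats count AS runs avg(run_time) AS avg_run_time max(run_time) AS max_run_time BY app, savedsearch_name
| sort - avg_run_time

The searches that run longest and most often are usually the first candidates for optimizing, rescheduling, or disabling.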
Hey @Meett , this does not solve the issue, I think the culprit is what I've shared in my own comment/reply?
Hello, how can I display only one value of these 3 "maxCapacitMachine" results (which are the same in all 3 cases) in a timechart with a BY clause?
Currently, without hitting the submit button, when I load the dashboard it gets the usage statistics of the selected test environment. The query first checks if the selected env is "test", then uses "np-ap" as the index and sets "stageToken" to "test". I want the submit button to work, so that the result is fetched only after the environment, data entity, and date are selected and the submit button is hit.

index="np-ap" AND source="--a-test"
<query>index=$indexToken$ AND source="-a-$stageToken$"

<form version="1.1" theme="dark">
  <label> stats</label>
  <fieldset submitButton="true">
    <input type="dropdown" token="indexToken1">
      <label>Environment</label>
      <choice value="pd-ap,prod">PROD</choice>
      <choice value="np-ap,test">TEST</choice>
      <change>
        <eval token="stageToken">mvindex(split($value$,","),1)</eval>
        <eval token="indexToken">mvindex(split($value$,","),0)</eval>
      </change>
    </input>
    <input type="dropdown" token="entityToken">
      <label>Data Entity</label>
      <choice value="aa">aa</choice>
      <choice value="bb">bb</choice>
      <choice value="cc">cc</choice>
      <choice value="dd">dd</choice>
      <choice value="ee">ee</choice>
      <choice value="ff">ff</choice>
      <default>aa</default>
    </input>
    <input type="time" token="timeToken" searchWhenChanged="false">
      <label>Time</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <html id="APIStats">
        <style>
          #user{
            text-align:center;
            color:#BFFF00;
          }
        </style>
        <h2 id="user">API</h2>
      </html>
    </panel>
  </row>
  <row>
    <panel>
      <table>
        <title>Unique</title>
        <search>
          <query>index=$indexToken$ AND source="-a-$stageToken$" | stats count </query>
          <earliest>$timeToken.earliest$</earliest>
          <latest>$timeToken.latest$</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>
Hello, I'm having a hard time trying to find what data source the events from a search are originating from. The search is: source="/var/www/html/PIM/var/log/webservices/*" I've looked through "Files & Directories" (which is where I thought I would find it) and the rest of the Data Inputs, but can't seem to locate it anywhere. A side question: I tried creating a new Files & Directories data input by putting in the full Linux path like below: //HostName/var/www/html/PIM/var/log/webservices/* but it says the path can't be empty. I'm sure this is probably not how you format a Linux path; I just couldn't find what I'm doing wrong. Thanks for any help at all, Newb
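A hedged sketch that may help narrow down where those events come from (the source path is copied from the question; the rest are standard indexed metadata fields):

| tstats count WHERE index=* source="/var/www/html/PIM/var/log/webservices/*" BY index, host, sourcetype, source

The host values returned point at the machine(s) whose inputs actually monitor that path, which is why the input may not show up under Files & Directories on the instance you are looking at.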
@sainag_splunk is correct.  This has to be a bug in 9.2.  I'm about to upgrade to 9.3, so I rushed through a bunch of tests.  The results suggest that it has something to do with search results or with input.

Scenario | 9.2.2 | 9.3.1
Basic search like makeresults, tstats, no input | No problem | No problem
Some complex searches, with inputs | No problem | (Not tested)
Latest dashboard with some other searches, similar inputs | Cannot Open in Search | (N/A)
Code copy of problematic dashboard | Cannot Open in Search | No problem
Recreation of problematic dashboard | Cannot Open in Search | (N/A)

So, the last two rows are really interesting and took quite some time.  I copied the entire JSON from a problematic dashboard to a test instance running 9.3.1 that has similar test data, and saw no problem.  Then, I tried several methods to recreate that problematic dashboard in the 9.2.2 instance.  First, I simply copied the JSON to a new test board and saw the same problem.  I thought there might be something wrong with the code.  So, I copied individual searches and inputs, in two different ways.  They all gave the same problematic results.
That's not much for anyone to work with.  Have you checked splunkd.log?  What did you find there?
This could be an issue with the SAML provider configuration. Please check your SAML configuration and verify that any authentication extensions are set up correctly. Some scripts require arguments that may be case sensitive. If this Helps, Please UpVote.
Hi @PickleRick @richgalloway
My number of delayed searches has increased to 5000 plus. I did some investigation using this search:

index=_internal sourcetype=scheduler savedsearch_name=* status=skipped | stats count by reason

I see the reason "The maximum number of concurrent historical scheduled searches on this cluster has been reached" has a 2000-plus count. The two solutions to fix this that I have understood are:
1. Staggering the searches that are causing the error by modifying the cron schedule and changing the frequency.
2. Increasing the search concurrency limit under limits.conf.
(Please feel free to correct me if I am wrong.) Since I am on Splunk Cloud, I understand I don't have access to limits.conf. What I want to ask is: I see an option under Settings > Server Settings > Search Preferences > Relative concurrency limit for scheduled searches, which is set to 60 on my system. Will increasing this setting help, and if yes, to what value is it safe to increase it? Please help, I have been stuck on this problem for some days.
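As an aside, a hedged follow-up to the search above that may help identify which specific searches are being skipped the most (same _internal scheduler data, grouped by search instead of by reason):

index=_internal sourcetype=scheduler status=skipped
| stats count BY app, savedsearch_name, reason
| sort - count

Staggering or tuning the top offenders usually does more good than raising the concurrency ceiling.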
The search below might be helpful:

index=_* AND (SMTP OR sendemail OR email) AND (FAIL* OR ERR* OR TIMEOUT OR CANNOT OR REFUSED OR REJECTED)

Sample errors look like: ERROR sendemail: ... while sending mail to: ...

If this helps, please UpVote.
Looking back through the documentation, as far back as 7.0.0 (which is as far back as I can find), it has been recommended that base searches be transforming searches: https://docs.splunk.com/Documentation/Splunk/7.0.0/Viz/Savedsearches#Post-process_searches_2
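For context, a hedged sketch of the pattern those docs describe (the index, sourcetype, and fields are placeholders): the base search is transforming, so only aggregated rows are handed to the post-process searches.

Base search:
index=web sourcetype=access_combined
| stats count BY status, host

Post-process search:
| where tonumber(status) >= 500
| stats sum(count) AS errors BY host

Because the base search ends in stats, each post-process search works on a small, already-aggregated result set instead of raw events.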
KV_MODE has nothing to do with line breaking. I'd expect that you simply don't have a properly set up line breaker and that you have line merging enabled, which results in Splunk splitting your input stream at each line and then merging the lines back together (which is also very inefficient performance-wise).
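For reference, a minimal props.conf sketch with merging disabled and explicit line breaking (the sourcetype name and the assumption that events are newline-delimited are placeholders, not taken from the thread):

[your:sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)

With SHOULD_LINEMERGE disabled, Splunk breaks events directly on LINE_BREAKER and skips the expensive merge-back pass.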