All Posts


@addOnGuy - I don't think there is any direct way to get the alert description, so you would need to make a REST API call to the saved/searches endpoint to find all the details about the alert: https://docs.splunk.com/Documentation/Splunk/latest/RESTREF/RESTsearch#saved.2Fsearches.2F.7Bname.7D

https://<host>:<mPort>/services/saved/searches/{name}

Since you mentioned you already have the alert name, substitute it for {name} and you will get the other details, including the description. I hope this helps!!!
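If you'd rather stay inside Splunk than call the endpoint externally, the rest command can pull the same details from the search bar. A minimal sketch, assuming your role is allowed to query the endpoint; "My Alert Name" is a placeholder:

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search title="My Alert Name"
| table title description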
Theoretically, of course, a reverse proxy could rewrite the URI path of a request and redirect it somewhere else on the backend, but that would need to be explicitly configured. Anyway, see the https://docs.splunk.com/Documentation/Splunk/latest/RESTUM/RESTusing document, and note the remark about namespaces at https://docs.splunk.com/Documentation/Splunk/9.1.3/RESTREF/RESTlist
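To make the namespace remark concrete: the same resource can be addressed globally or within a user/app context, and the two paths can return different results. A sketch with placeholder credentials, host, and management port:

# global context
curl -k -u admin:changeme https://localhost:8089/services/saved/searches

# namespaced context (user "admin", app "search")
curl -k -u admin:changeme https://localhost:8089/servicesNS/admin/search/saved/searches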
The recommendation would be to get a decent security team. This "finding" is completely false. Firstly, the current OpenSSL version for UF 9.1 is at least 1.0.2zg-fips. Secondly, the UF doesn't contain the c_rehash script, so even with the "vulnerable" version the UF as a whole was not vulnerable. Sending out "findings" based solely on recognized software versions is really very low-effort vulnerability "management".
@splunkreal - Did you get to resolve your issue?
@selvam_sekar - Here is the query with a slight modification (the NOT condition needs parentheses to exclude both weekend days), though in my case even the original query gives the right count for today and yesterday.

basesearch earliest=-4d@d latest=now
| bin span=1d _time
| search NOT (date_wday="saturday" OR date_wday="sunday")
| stats count by Name, _time
| streamstats current=f window=1 last(count) as Yesterday by Name
| rename count as Today
| where strftime(_time, "%F")==strftime(now(), "%F")
| stats first(*) as * by Name
| eval percentage_variance=abs(round(((Yesterday-Today)/Yesterday)*100,2))
| table Name Today Yesterday percentage_variance

I hope this helps!!! Kindly upvote & accept the answer if it does!!!
Ok. Two things.

1. You can use the map command to iterate over the results of one search and spawn searches based on those results, but this is usually not the proper way to go. The map command should only be used if there is really no other way. Normally you use subsearches, which are executed before the main search and whose results are rendered as conditions for the main search (see the sketch below).

2. But the main question here is not how to use two searches, but what you want to get from your data, because often there is a completely different, more "splunky" and better-performing solution to the problem.
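A minimal subsearch sketch; the index names and fields are made up for illustration:

index=web_access
    [ search index=app_errors "payment failed"
      | stats count by user_id
      | fields user_id ]
| stats count by user_id, status

The inner search runs first, and its user_id values are expanded into (user_id="..." OR user_id="...") conditions that constrain the outer search.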
@viaykiroula - Yes, @RichMahlerwein is right. Usually this happens because you have configured a custom cert and did not add the appsCA.pem file content to your caCert.pem and serverCertChain.pem files. It should work after you add the content to these certificates.
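On a typical installation this boils down to appending the file; the destination paths below are placeholders, so adjust them to wherever your custom certs live:

cat $SPLUNK_HOME/etc/auth/appsCA.pem >> /path/to/your/caCert.pem
cat $SPLUNK_HOME/etc/auth/appsCA.pem >> /path/to/your/serverCertChain.pem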
Hi @vihshah, the search you shared produces only one event, so I suppose it isn't your main search. In addition, the eval and search in rows 5 and 6 don't make any sense. I suppose that your main search is:

sourcetype="mykube.source" "failed request"
| rex "failed request:(?<request_id>[\w-]+)"
| table request_id

and that you want to use it to filter the results of a secondary search. What is this secondary search?

Ciao. Giuseppe
Create your search as a base search without a panel

<form version="1.1">
  <label>Base search</label>
  <search id="base">
    <query>
      your search goes here
    </query>
    <done>
      <set token="sid">$job.sid$</set>
    </done>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
    <sampleRatio>1</sampleRatio>
  </search>
  <row>
    <panel>
      <table>
        <search>
          <query>| loadjob $sid$</query>
        </search>
      </table>
    </panel>
  </row>
</form>
Well, you can hide a panel (if not any other way, you can just use CSS to make it invisible; I assume we're talking about Simple XML, not Dashboard Studio). But if your base search just returns a huge load of events, you might run into performance issues anyway. The idea of base searches is that you use them to get data which is already relatively well aggregated, and then use postprocessing searches to aggregate it further. For example, you run | stats count by field1 field2 as your base search, and then two separate postprocessing searches summarize the data over either of those fields (see the sketch below). That way the base search already gives you relatively well-trimmed data. You can get away with returning a small set of events from the base search, but it can escalate quickly...
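Incidentally, if you declare the base search at the top level of the form instead of inside a panel, there is nothing to hide in the first place. A minimal sketch; the index and field names are placeholders:

<form version="1.1">
  <label>Base search example</label>
  <!-- the base search lives outside any panel, so no panel needs hiding -->
  <search id="base">
    <query>index=myIndex | stats count by field1, field2</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <table>
        <search base="base">
          <query>| stats sum(count) as events by field1</query>
        </search>
      </table>
    </panel>
    <panel>
      <table>
        <search base="base">
          <query>| stats sum(count) as events by field2</query>
        </search>
      </table>
    </panel>
  </row>
</form>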
Hi, I have a dashboard with 91 panels in different rows. The first panel exists for the sole purpose of running a base search. The search is simple:

index=myIndex

The other 90 panels all run post-process searches against this main search. This is to prevent CPU spikes. The problem I'm facing right now is that the first panel shows up in the dashboard, but it serves no purpose being shown. My question: how do I hide (not remove) this panel visually?
You can schedule a dashboard as a PDF, which is why I showed how you can determine which fields are used from the search in the dashboard - you could also include a table in the dashboard using the same results

<dashboard version="1.1" theme="light">
  <label>Test</label>
  <row>
    <panel>
      <chart>
        <search>
          <query>| makeresults format=csv data="StudentID,Name,GPA,Percentile,Email
101,Student1,4,100%,Student1@email.com
102,Student2,3,90%,Student2@email.com
103,Student3,2,70%,Student3@email.com
104,Student4,1,40%,Student4@email.com"</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <done>
            <set token="sid">$job.sid$</set>
          </done>
        </search>
        <option name="charting.chart">column</option>
        <option name="charting.drilldown">none</option>
        <option name="charting.data.fieldShowList">[Name,GPA]</option>
      </chart>
    </panel>
  </row>
  <row>
    <panel>
      <table>
        <search>
          <query>| loadjob $sid$</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
      </table>
    </panel>
  </row>
</dashboard>
Hi @gcusello, thanks for the clarification. The one I posted is my main search; I am stuck at the second search.
The sendemail.py script responsible for sending the emails just creates a single session and sends a single email to your configured SMTP server. The SMTP server is then responsible for delivering the email onward. Anyway, if your emails are sent to three separate addresses on Gmail, how do they land in the admin's mailbox? You didn't mention anything about specifying a Cc: address in your sendemail command.
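For reference, this is where a Cc: would be specified; the addresses and server below are placeholders:

... your search ...
| sendemail to="user1@example.com,user2@example.com" cc="admin@example.com" server="smtp.example.com" subject="Alert results" sendresults=true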
Hi @vihshah, in a few words, you have to use the secondary search as a subsearch of the main one. To be more detailed, please share your main search and the second search that you want to use to find the field for filtering; in your question you shared only one search:

sourcetype="mykube.source" "failed request"
| rex "failed request:(?<request_id>[\w-]+)"
| table request_id
| head 1
| eval req_query = request_id
| search req_query

Is this the main search or the secondary one?

Ciao. Giuseppe
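In the meantime, the usual pattern looks like this; the outer sourcetype is a placeholder, and renaming the field to search makes the subsearch values match against the raw text of the outer events instead of an extracted field:

sourcetype="your_secondary_sourcetype"
    [ search sourcetype="mykube.source" "failed request"
      | rex "failed request:(?<request_id>[\w-]+)"
      | dedup request_id
      | fields request_id
      | rename request_id as search ]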
Hi @gcusello, sorry, I did not get you exactly... what should I do?
This is a third-party add-on with no publicly available docs, so I don't think you'll get much help regarding this specific app. But "connection reset by peer" typically means that either the other side doesn't accept connections, or there is some misconfiguration causing the server to forcibly close the connection. You have to look for additional errors and messages in the add-on's log (if it generates any) and on the source side (if you have access).
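If the add-on runs as a modular or scripted input, its errors often end up in Splunk's internal logs, so a search along these lines may turn something up; the ExecProcessor component filter is an assumption about how the input is implemented:

index=_internal sourcetype=splunkd log_level=ERROR component=ExecProcessor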
I think you're talking about two different things. @ITWhisperer is showing you how to create a dashboard showing what you want, whereas you want a report, which is simply a scheduled search. I don't think you can do this in just a report. The report lets you manage some settings of the visualization, but the visualized data is the full set of results that you get in the results table.
For managing licenses you use the License Manager. Sometimes it can be an additional role on the same machine that works as the Cluster Manager, but that's not recommended. So there are two different things here. Your indexers may still be connected to (or at least configured for) the old LM and remember the license they were granted from it. And since the new CM (even if it's configured to work as an LM as well) has apparently not granted a license to the indexers, you'll have a clash.
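To point an indexer at the new LM you would typically set the manager URI in its server.conf and restart; the hostname below is a placeholder, and on pre-9.x versions the setting is still called master_uri:

# $SPLUNK_HOME/etc/system/local/server.conf on each indexer
[license]
manager_uri = https://new-license-manager.example.com:8089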
Look at this run-anywhere example (yes, it's ugly, but it seems to work).

| makeresults count=100
| eval val1=random() % 4, val2=random() % 3
| streamstats window=1 current=f values(val*) as previous_val*
| eval changed=""
| foreach * [ eval changed=mvappend(changed, if(like("<<FIELD>>","previous_%") OR '<<FIELD>>'='previous_<<FIELD>>', null(), "Field: <<FIELD>> oldval: ".'previous_<<FIELD>>'." newval: ".'<<FIELD>>')) ]

EDIT: Added "by id" to streamstats because I'd forgotten it before.
EDIT2: Removed "by id"; it made sense in the context of the original question but not for this mockup data.