All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi @revanthammineni, if you look on Splunkbase (https://splunkbase.splunk.com), there are many apps for Jira integration. The one for you is probably the Jira App (https://splunkbase.splunk.com/app/5806), but there are also others. Ciao. Giuseppe
Hi @Dustem, good for you, see you next time! Let me know if I can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Hi @mohammadsharukh, if I remember correctly, there's a sample short-lived-account detection in the Splunk Security Essentials app, which I recommend. Anyway, don't use the transaction command because it's very slow; please try this search instead:

sourcetype=wineventlog (EventCode=4726 OR EventCode=4720)
| stats earliest(eval(if(EventCode==4720,_time,null()))) AS earliest latest(eval(if(EventCode==4726,_time,null()))) AS latest values(dest) AS dest values(src_user) AS src_user values(Account_Domain) AS Account_Domain BY user
| eval diff=latest-earliest, creation_time=strftime(earliest,"%Y-%m-%d %H:%M:%S"), deletion_time=strftime(latest,"%Y-%m-%d %H:%M:%S")
| where diff<240*60
| table creation_time deletion_time dest user src_user Account_Domain

Ciao. Giuseppe
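For reference, the pairing logic behind the stats-based search above (earliest creation, latest deletion, keep users whose gap is under 240 minutes) can be sketched outside Splunk. This is a minimal Python illustration, not the Splunk implementation; the event-dict shape is a simplifying assumption:

```python
# Sketch of the create/delete pairing logic from the search above:
# per user, take the earliest 4720 (creation) and latest 4726 (deletion)
# and flag the account when the gap is under max_span_minutes.

def short_lived_accounts(events, max_span_minutes=240):
    """events: list of dicts with 'user', 'EventCode', 'time' (epoch seconds)."""
    earliest_create = {}
    latest_delete = {}
    for e in events:
        user = e["user"]
        if e["EventCode"] == 4720:
            t = earliest_create.get(user)
            earliest_create[user] = e["time"] if t is None else min(t, e["time"])
        elif e["EventCode"] == 4726:
            t = latest_delete.get(user)
            latest_delete[user] = e["time"] if t is None else max(t, e["time"])
    flagged = []
    for user, created in earliest_create.items():
        deleted = latest_delete.get(user)
        if deleted is not None and 0 <= deleted - created < max_span_minutes * 60:
            flagged.append(user)
    return flagged
```

Like the SPL version, this avoids building per-user transactions and instead keeps only two timestamps per user.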
Hi, I need to be able to see what inputs people are entering into dashboard panels (i.e. what filters they are applying or searching for). So far I have used the _audit and _internal indexes to see which users have accessed which saved searches on each dashboard, but I have not been able to identify the input values they entered. I am hoping to create a dashboard table for this. Thanks.
I am working to create a use case to detect an account created and deleted within a short period of time. Could you please give a simple example of how connected=true/false affects the results of the transaction command? I already referred to the previous answer but didn't understand the explanation. Additionally, please also explain the effect of connected=true/false in the query below, and what the best practice is.

sourcetype=wineventlog (EventCode=4726 OR EventCode=4720)
| transaction user maxspan=240m startswith="EventCode=4720" endswith="EventCode=4726" connected=false
| table Time, dest, EventCode, user, src_user, Account_Domain

@Ledion_Bitincka @richgalloway
So my first SPL gets me the URLs I'm looking for, but it doesn't list the URLs (in the lookup) that don't get any results. To break it down: say my CSV has 120 URLs and I would like to know which ones my users are hitting and not hitting. When I run my first example search I get 43 results back, but I also want to know which of the 120 URLs are not being hit. Or should I create a search that only tells me the URLs that don't match in the index? While that example does make sense, in the stats table I get all these other URLs that aren't even in the lookup list I care about, so I go from expecting 120 results to 18,000. Does that make more sense?
This does not sound right. First of all, why should ThreeDSecureResult and description be one field? They are different parts of a data structure. Second, Splunk should have given you a field named ThreeDSecureResult{@description} with the value Failed.
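The point above is that description is an XML attribute of the ThreeDSecureResult element, which is why Splunk names the extracted field ThreeDSecureResult{@description}. As a hedged illustration of the same idea outside Splunk, this Python sketch reads the attribute from the sample snippet in the question:

```python
import xml.etree.ElementTree as ET

# The value "Failed" lives on the element as an attribute, so parse it
# as an attribute rather than as element text.

def threeds_result(xml_snippet):
    return ET.fromstring(xml_snippet).attrib["description"]
```

If the field name with braces is awkward in tables, it can simply be renamed (e.g. with SPL's rename command) after extraction.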
How do I rename/conjoin/remove the space between the fields "ThreeDSecureResult" and "description"? The value comes up as 'description' rather than 'Failed' when I try to table it in Splunk. <ThreeDSecureResult description="Failed"/>
We would like to run a scan on the backend and look for "*M5*-CLDB" or any combination of M5 and CLDB. We have a distributed Splunk environment with indexer and search head clusters. There are saved searches, lookups, and dashboards which need to be modified due to the cluster name change. Could someone share your thoughts on this?
Maybe you can explain the requirement further? You said of your original SPL "it's only counting the matches, i need the URLs that don't exist to count 0." So I thought you would want every URL, with or without a match. Could you mock up some data in web_index and explain why the output doesn't meet the requirements?

Here is my emulation:

| makeresults
| fields - _time
| eval url = mvappend("google.com", "foo.com", "bar.com", "google.com")
| mvexpand url
``` the above emulates index="web_index" ```

url
google.com
foo.com
bar.com
google.com

Using the exact mock lookup you give, my search will give

url          count
bar.com      0
foo.com      0
google.com   2

Is this not what you want, with bar.com and foo.com counting 0? Below is the full example:

| makeresults
| fields - _time
| eval url = mvappend("google.com", "foo.com", "bar.com", "google.com")
| mvexpand url
``` the above emulates index="web_index" ```
| lookup URLs.csv kurl as url output kurl as match
| eval match = if(isnull(match), 0, 1)
| stats sum(match) as count by url
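The requirement being discussed (a count per lookup URL, including zeros for lookup entries never hit, while ignoring event URLs outside the lookup) can be illustrated outside Splunk. This is a minimal Python sketch of that left-join-and-count idea, assuming the lookup is simply a list of URLs:

```python
# Count hits per lookup URL, reporting 0 for lookup entries never seen,
# and ignoring event URLs that are not in the lookup.

def count_lookup_hits(event_urls, lookup_urls):
    counts = {url: 0 for url in lookup_urls}   # every lookup row starts at 0
    for url in event_urls:
        if url in counts:                      # only count URLs in the lookup
            counts[url] += 1
    return counts
```

Seeding the result from the lookup (rather than from the events) is what guarantees the zeros, which is the same reason the SPL discussion keeps coming back to where the lookup sits in the pipeline.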
@niketn How do I make the Domain Entity input a multiselect with value 1: material, value 2: sm, value 3: all? My inputToken and outputToken are different for each domain entity selection; how can I pass two tokens from a multiselect to two different search queries?

<row>
  <panel>
    <input type="dropdown" token="tokEnvironment" searchWhenChanged="true">
      <label>Domain</label>
      <choice value="goodsdevelopment">goodsdevelopment</choice>
      <choice value="materialdomain">materialdomain</choice>
      <choice value="costsummary">costsummary</choice>
      <change>
        <unset token="tokSystem"></unset>
        <unset token="form.tokSystem"></unset>
      </change>
      <default></default>
    </input>
    <input type="dropdown" token="tokSystem" searchWhenChanged="true">
      <label>Domain Entity</label>
      <fieldForLabel>$tokEnvironment$</fieldForLabel>
      <fieldForValue>$tokEnvironment$</fieldForValue>
      <search>
        <query>| makeresults
| eval goodsdevelopment="airbag",materialdomain="material,sm",costsummary="costing"</query>
      </search>
      <change>
        <condition match="$label$==&quot;airbag&quot;">
          <set token="inputToken">airbagSizeScheduling</set>
          <set token="outputToken">goodsdevelopment</set>
        </condition>
        <condition match="$label$==&quot;costing&quot;">
          <set token="inputToken">costSummary</set>
          <set token="outputToken">costing</set>
        </condition>
        <condition match="$label$==&quot;material&quot;">
          <set token="inputToken">material</set>
          <set token="outputToken">md</set>
        </condition>
      </change>
    </input>
Hi Splunkers, I need to send 50 reports out of a Splunk query to my team members every month. Currently I'm generating them manually and distributing them. We are planning to automate this process by sending them to Jira, opening tickets with the necessary data, and assigning them to people. What should I do in order to achieve this? Is there a proper Splunk Jira add-on on Splunkbase? Any recommendations would be helpful. TIA
Hello gcusello, sorry, I forgot to reply. I rewrote the SPL myself to complete the requirements. Thanks for your help.
Ah wonderful, thanks so much!
That also returned every URL not in my lookup. I guess I could just take the hits and do a duplicate compare in Excel, but it'd be nice to see it all in Splunk. Yeah, sorry about the tag, I wasn't paying attention.
Hello, thank you for your suggestion. With your suggestion, when using index=vulnerability_index, doesn't the search still correlate 100k IPs with the CSV? Is it possible to create historical data, or a separate index or DB, to bypass the original vulnerability index? For example: index=new_vulnerability_index. The old vulnerability_index has 100k IPs in total, but this new index would only have 2 IPs and 4 rows because it's already pre-calculated:

ip_address     vulnerability          score
192.168.1.1    SQL Injection          9
192.168.1.1    OpenSSL                7
192.168.1.2    Cross-Site Scripting   8
192.168.1.2    DNS                    5
As the title says: when troubleshooting data ingestion on search heads, e.g. running an SPL search to check if certain data is landing in an index, or searching through an add-on like the AWS one to check its diagnostic logs on the HF. For tests like these, which search head is best to use: the monitoring console, a regular search head, or an SHC?
Pro tip: It is great that you are using makeresults to post a simulation, but do not mangle the JSON. The command you are looking for is mvexpand.

| makeresults
| eval prediction_str_body="[{\"stringOutput\":\"Alpha\",\"doubleOutput\":0.52},{\"stringOutput\":\"Beta\",\"doubleOutput\":0.48}]"
| spath input=prediction_str_body path={}
| mvexpand {}
| spath input={}

Your sample gives (among other rows)

doubleOutput   stringOutput   {}
0.48           Beta           {"stringOutput":"Beta","doubleOutput":0.48}

Hope this helps.
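The spath-plus-mvexpand pipeline above turns a JSON array string into one record per array element. As a hedged illustration of that same expansion outside Splunk, this Python sketch parses the sample string from the post (the function name is made up for the example):

```python
import json

# Expand a JSON array string into one record per element, the same
# effect as spath path={} followed by mvexpand in the search above.

def expand_predictions(prediction_str_body):
    return [
        {"stringOutput": o["stringOutput"], "doubleOutput": o["doubleOutput"]}
        for o in json.loads(prediction_str_body)
    ]
```

This is also why mangling the JSON in a makeresults simulation matters: spath, like json.loads here, only works on well-formed JSON.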
Something like

index=vulnerability_index [| inputlookup company.csv | stats values(ip_address) as ip_address]
| table ip_address, vulnerability, score
| lookup company.csv ip_address as ip_address OUTPUTNEW ip_address, company, location

Hope this helps
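The pattern above does two things: the subsearch restricts vulnerability events to IPs present in the lookup, then the lookup enriches each remaining row with company fields. A minimal Python sketch of that filter-then-enrich idea (the dict shapes are assumptions for the example, not Splunk internals):

```python
# Sketch of the subsearch-plus-lookup pattern: keep only events whose IP
# is in the lookup, then merge in the lookup's extra fields.

def filter_and_enrich(events, company_lookup):
    """events: dicts with ip_address/vulnerability/score;
    company_lookup: ip_address -> dict of extra fields (e.g. company, location)."""
    out = []
    for e in events:
        info = company_lookup.get(e["ip_address"])
        if info is None:
            continue                  # subsearch: drop IPs not in the lookup
        row = dict(e)
        row.update(info)              # lookup OUTPUTNEW: add company, location
        out.append(row)
    return out
```

Filtering first is the important part for the performance concern raised in this thread: only events matching the lookup's IPs are carried into the enrichment step.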