Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Test post. Wasn't able to post? Edit: Okay, it works. Yes, that is a caveat to bring up. Fortunately, you can use a foreach with an iterator to make each value in the multivalue field unique. I'm thinking it is something like the following; either way, it shouldn't be impossible to add a custom unique identifier to each value in the mv field.

| eval iterator=0, unique_mv=null()
| foreach mode=multivalue <multivalue_field>
    [ eval iterator=iterator+1, unique_mv=mvappend(unique_mv, iterator."-".<<ITEM>>) ]
``` Warning: Did not test this yet; foreach mode=multivalue needs Splunk 9.0+ ```

Then you can perform the reverse stats join, and use split() and mvindex() to parse out your actual values without needing regex! You are correct, I was indeed working with a multivalue field of unique identifiers, which is why it worked for me.
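An alternative sketch, also untested, that sidesteps foreach entirely: build a parallel list of positions with mvrange() and stitch it onto the values with mvzip(). The field name mv and the sample values here are hypothetical.

| makeresults
| eval mv=split("a,b,a,c", ",")
``` prefix each value with its 1-based position: 1-a, 2-b, 3-a, 4-c ```
| eval idx=mvrange(1, mvcount(mv)+1)
| eval mv=mvzip(idx, mv, "-")

After the reverse stats join, the same split() and mvindex() trick recovers the original values.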
What is your question? (Subject "splunk" doesn't help narrow it down given that this is a community of Splunk users answering questions about Splunk-related issues!) Please provide a description of what you are trying to achieve, some anonymised representative sample events, your current results from searches you have tried, and what your expected results would look like (with a description of the logic relating the sample events to the expected output, if appropriate).
Calculate the overall average before the timechart and preserve the value with the values aggregate function:

index=qualys sourcetype=qualys:hostDetection SEVERITY=5 STATUS="FIXED"
| dedup HOST_ID, QID
| eval MTTR = ceiling((strptime(LAST_FIXED_DATETIME, "%FT%H:%M:%SZ") - strptime(FIRST_FOUND_DATETIME, "%FT%H:%M:%SZ")) / 86400)
```| bucket span=1d _time```
| eventstats avg(MTTR) as OVERALL_AVG
| timechart span=1d avg(MTTR) as AVG_MTTR_PER_DAY values(OVERALL_AVG) as OVERALL_AVG
| streamstats window=7 avg(AVG_MTTR_PER_DAY) as 7_DAY_AVG
This solution only works if all the values in the multivalue field are unique across all instances of the field. For example:

| makeresults count=10
| eval mv=mvrange(0,(random()%5)+1)
| streamstats count as row
| stats values(*) as * by mv

This produces only 5 events, instead of the between 10 and 50 events which mvexpand of mv would have produced.
Hello, I have installed DVWA on my XAMPP server and practiced some SQL injection attacks on DVWA. After that I typed the following into the Splunk search bar, but it's not showing any results.

index=dvwa_logs (error OR "SQL Injection" OR "SQL Error" OR "SQL syntax") OR (sourcetype=access_combined status=200 AND (search_field="*' OR 1=1 --" OR search_field="admin' OR '1'='1"))
| stats count by source_ip, search_field, host
Events longer than 15,000 characters are truncated now. We wonder if there is a limit for this (so that, for example, the maximum event length in the configuration can't be set to a number higher than 50,000). Where and how can we change this limit for a certain index?
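(For what it's worth, the setting in question is usually TRUNCATE in props.conf, which applies per sourcetype rather than per index; below is a minimal sketch assuming a hypothetical sourcetype my_sourcetype. TRUNCATE = 0 disables truncation entirely, so there is no hard upper limit such as 50,000.)

# props.conf on the indexer or heavy forwarder that parses the data
[my_sourcetype]
# maximum event length in bytes (default 10000); 0 means no truncation
TRUNCATE = 50000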
Hi @Somesh , this seems to be a different question, and I suggest opening a new question to be more likely to get more, and probably better, answers. Anyway, Splunk best practices suggest running Splunk as a non-root user, for security reasons, but this introduces some additional difficulties in log reading. For more information see https://docs.splunk.com/Documentation/Splunk/9.2.1/Installation/RunSplunkasadifferentornon-rootuser Ciao. Giuseppe P.S.: Karma Points are appreciated
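(One common way around those log-reading difficulties, sketched under the assumption that Splunk runs as a local user named splunk and the logs live under /var/log, is to grant read access with POSIX ACLs instead of running as root.)

# give the splunk user read access, plus traverse access on directories
sudo setfacl -R -m u:splunk:rX /var/log
# default ACL so newly created log files inherit the same access
sudo setfacl -R -d -m u:splunk:rX /var/log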
Hi @bworrellZP, you could use (only in a lab) the syslog network input, which doesn't write to disk. Otherwise, use rsyslog to write syslog to disk, and then read these logs using the batch input, instead of monitor, in inputs.conf. For more information see https://docs.splunk.com/Documentation/Splunk/9.2.1/Admin/Inputsconf In this way, logs are deleted soon after ingestion. Ciao. Giuseppe
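(A minimal inputs.conf sketch of the batch approach, assuming rsyslog writes to the hypothetical directory /var/log/syslog-inbound; the index and sourcetype names are placeholders. batch requires move_policy = sinkhole, which deletes each file as soon as it has been indexed.)

# inputs.conf on the forwarder
[batch:///var/log/syslog-inbound/*.log]
move_policy = sinkhole
index = syslog
sourcetype = syslog
disabled = false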
Hi @VijaySrrie , they should also work as three different calculated fields; anyway, you could nest the conditions from the other calculated fields, even if the final calculated field will be longer:

| eval action= case(error="invalid credentials", "failure", ((like('request.path',"auth/ldap/login/%") OR like('request.path',"auth/ldapco/login/%")) AND (NOT error="invalid credentials")) OR (like('request.path',"auth/token/lookup-self") AND ('auth.display_name'="root")) ,"success")
| eval app=case(action="success" OR action="failure", "appname_Authentication")
| eval valid=if(error="invalid credentials","Error","Success")

Ciao. Giuseppe
Hi @cshihua , you have to use a normal subsearch:

[ | datamodel Tutorial Client_errors index | fields index]
| ...

Ciao. Giuseppe
Hi @silverKi , to my knowledge, it isn't possible! Ciao. Giuseppe
On your Splunk Search Head, you can find some examples for this. Example link - change the server name to yours: https://MY_SPLUNK_SERVER/en-GB/app/splunk-dashboard-studio/example-hub-security-summary-dashboard Or you can go to Search > Dashboards > Visit Examples Hub - there are plenty of examples there for you to check and see the JSON code.
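(Not a definitive schema, just a minimal sketch of what a Dashboard Studio JSON definition with an absolute layout looks like: a map in the middle with one small single-value panel beside it. The visualization type names, searches, and pixel positions are assumptions; export one of the examples-hub dashboards on your own version and compare.)

{
  "title": "Map with surrounding panels",
  "dataSources": {
    "ds_map": {"type": "ds.search", "options": {"query": "index=_internal | iplocation clientip | geostats count"}},
    "ds_kpi": {"type": "ds.search", "options": {"query": "index=_internal | stats count"}}
  },
  "visualizations": {
    "viz_map": {"type": "splunk.map", "dataSources": {"primary": "ds_map"}},
    "viz_kpi": {"type": "splunk.singlevalue", "dataSources": {"primary": "ds_kpi"}}
  },
  "layout": {
    "type": "absolute",
    "options": {"width": 1440, "height": 960},
    "structure": [
      {"item": "viz_map", "type": "block", "position": {"x": 360, "y": 200, "w": 720, "h": 560}},
      {"item": "viz_kpi", "type": "block", "position": {"x": 40, "y": 200, "w": 280, "h": 160}}
    ]
  }
}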
Hi @hohyuon , at first run the checks described by @deepakc, which are the correct ones. Then, please, check the timestamp format: if the format is dd/mm/yyyy you have to define this format in props.conf: TIME_FORMAT = %d/%m/%Y %H:%M:%S because Splunk, by default, uses the American format (mm/dd/yyyy) and during the first 12 days of the month doesn't assign the correct timestamp, so today it isn't correct and you don't see events with today's date, but tomorrow's date. Ciao. Giuseppe
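(As a minimal props.conf sketch of that fix, assuming a hypothetical sourcetype my_sourcetype and raw timestamps like 07/05/2024 14:30:59 at the start of each event:)

# props.conf on the parsing tier (indexer or heavy forwarder)
[my_sourcetype]
TIME_FORMAT = %d/%m/%Y %H:%M:%S
# only scan the first 19 characters of the event for the timestamp
MAX_TIMESTAMP_LOOKAHEAD = 19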
I want to make a dashboard in Dashboard Studio with a world map in the middle, surrounded by small panels. Is it possible? If yes, can you provide JSON code for that?
You can begin by checking with these commands, as it looks like a Windows UF (the paths below assume the default install directory C:\Program Files\SplunkUniversalForwarder):

# Shows monitored files
"C:\Program Files\SplunkUniversalForwarder\bin\splunk" list monitor

# Shows monitored file input status
"C:\Program Files\SplunkUniversalForwarder\bin\splunk" list inputstatus

Have you checked permissions for the logs that are not being collected? Have you checked that the path/log names are correct (typos)? Check splunkd.log - there may be some further info in there: C:\Program Files\SplunkUniversalForwarder\var\log\splunk\splunkd.log (look for TailReader OR ERROR)
Richgalloway, thank you. I will check the same.
You wrote: "The datamodel don't have the src and dest ip address, so I want to use the indexes return from datamodel and perform further search in the main search."

Do you mean you want to use additional data from that datamodel to enrich the main search? In that case, subsearch is the wrong tool. How you use datamodel will depend on what you want to do with this main search. (Here, let me lay out the elements of an answerable question so you don't confuse volunteers in the future: illustrate your dataset (or explain in detail), illustrate the desired output, and explain the logic between the illustrated data and the desired output (without SPL). If you do illustrate sample SPL, illustrate the actual output, too, then explain how it differs from the desired output if that is not painfully obvious.)

Let me do a simple illustration. If your main search without datamodel is

index=myindex sourcetype=mytype abc=*
| stats values(abc) as abc by def

suppose it returns something like

def   abc
def1  aaa bbb ccc
def2  bbb ddd fff
def3  aaa

and if your datamodel search returns src_ip, dst_ip, and def, like this:

def   src_ip   dst_ip
def1  1.1.1.1  2.2.2.2
def2  1.2.1.1  2.1.2.1
def3  1.2.3.4  2.4.6.8
def4  4.3.2.1  8.6.4.2

You want the additional fields associated with def to be shown. Then, you can do

index=myindex sourcetype=mytype abc=*
| append
    [ | datamodel Tutorial Client_errors index]
| stats values(abc) as abc values(src_ip) as src_ip values(dst_ip) as dst_ip by def

This way, you get

def   abc          src_ip   dst_ip
def1  aaa bbb ccc  1.1.1.1  2.2.2.2
def2  bbb ddd fff  1.2.1.1  2.1.2.1
def3  aaa          1.2.3.4  2.4.6.8

If your search and desired output are different, there are other ways to accomplish your goal, but you have to be specific.
I collect two logs with the Universal Forwarder. One log is collected fine, but the other log is not collected. Can you give me some advice on this phenomenon? The inputs.conf configuration file:

[monitor://D:\Log\State\...\*.Log]
disabled = false
index = cds_STW112
sourcetype = mujin_CDS_IPA_Log_State
ignoreOlderThan = 1h
>>>> Not collecting

[monitor://D:\Log\Communication\DeviceNet\Input\...\*Input*.Log]
disabled = false
index = cds_STW112
sourcetype = mujin_CDS_DNetLog_IN
ignoreOlderThan = 1h
>>>> Collecting
Hi yuanliu & everyone, the datamodel doesn't have the src and dest IP addresses, so I want to use the indexes returned from the datamodel and perform a further search in the main search.
I want to fill the table cells with tags, like a multi-selector. How can I make the table contents look like tags? How can I do it without using HTML elements?