All Posts

On your Splunk Search Head, you can find some examples for this.

Example link (change the server name to yours): https://MY_SPLUNK_SERVER/en-GB/app/splunk-dashboard-studio/example-hub-security-summary-dashboard

Or you can go to Search > Dashboards > Visit Examples Hub - there are plenty of examples there for you to check, and you can see the JSON code for each one.
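For orientation, the overall shape of a Dashboard Studio source definition looks roughly like this. This is an untested sketch: the searches, the map options, and the pixel positions are placeholders only, meant to show a map block in the middle with a small panel beside it.

{
    "title": "World map with surrounding panels",
    "dataSources": {
        "ds_map": {
            "type": "ds.search",
            "options": { "query": "index=web | iplocation clientip | stats count by Country" }
        },
        "ds_kpi": {
            "type": "ds.search",
            "options": { "query": "index=web | stats count" }
        }
    },
    "visualizations": {
        "viz_world_map": {
            "type": "splunk.map",
            "dataSources": { "primary": "ds_map" },
            "options": {}
        },
        "viz_kpi_left": {
            "type": "splunk.singlevalue",
            "title": "KPI 1",
            "dataSources": { "primary": "ds_kpi" }
        }
    },
    "layout": {
        "type": "absolute",
        "options": { "width": 1440, "height": 960 },
        "structure": [
            { "item": "viz_world_map", "type": "block", "position": { "x": 320, "y": 80, "w": 800, "h": 640 } },
            { "item": "viz_kpi_left", "type": "block", "position": { "x": 40, "y": 80, "w": 240, "h": 160 } }
        ]
    }
}

The layout.structure entries are what place the map block in the middle and the small panels around it. In practice it is much easier to copy the map visualization stanza from one of the Examples Hub dashboards than to hand-write its options.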
Hi @hohyuon, first run the checks described by @deepakc, which are the correct ones. Then, please, check the timestamp format: if the format is dd/mm/yyyy you have to define it in props.conf with TIME_FORMAT = %d/%m/%Y %H:%M:%S because Splunk, by default, uses the American format (mm/dd/yyyy), and during the first 12 days of the month it doesn't assign the correct timestamp, so today it isn't correct and you don't see events with today's date but with a wrong one. Ciao. Giuseppe
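For reference, a minimal props.conf sketch for that case. The sourcetype name, the lookahead value and the commented TZ are placeholders/assumptions; the format string must match what is actually in the events.

# props.conf - deploy where parsing happens (indexer or heavy forwarder)
[your_sourcetype]
TIME_FORMAT = %d/%m/%Y %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25
# TZ = Europe/Rome   (only needed if the events carry no timezone information)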
I want to make a dashboard in Dashboard Studio with a world map in the middle, surrounded by small panels. Is it possible, and if yes, can you provide JSON code for that?
You can begin by checking with these commands, as it looks like a Windows UF:

# Shows monitored files
\opt\splunkforwarder\bin\splunk list monitor

# Shows the status of monitored file inputs
\opt\splunkforwarder\bin\splunk list inputstatus

Have you checked permissions for the logs that are not being collected? Have you checked that the names of the paths/logs are correct (no typos)?

Check splunkd.log - there may be some further info in there:
\opt\splunkforwarder\var\log\splunk\splunkd.log (look for TailReader or ERROR)
Richgalloway, thank you. Will check the same.
"The datamodel doesn't have the src and dest IP address, so I want to use the indexes returned from the datamodel and perform a further search in the main search."

Do you mean you want to use additional data from that datamodel to enrich the main search? In that case, a subsearch is the wrong tool. How you use the datamodel will depend on what you want to do with this main search.

(Here, let me lay out the elements of an answerable question so you don't confuse volunteers in the future: illustrate your dataset (or explain it in detail), illustrate the desired output, and explain the logic between the illustrated data and the desired output (without SPL). If you do illustrate sample SPL, illustrate its actual output, too, then explain how it differs from the desired output if that is not painfully obvious.)

Let me do a simple illustration. If your main search without the datamodel is

index=myindex sourcetype=mytype abc=*
| stats values(abc) as abc by def

suppose it returns something like

def     abc
def1    aaa bbb ccc
def2    bbb ddd fff
def3    aaa

and your datamodel search returns src_ip, dst_ip, and def, like this:

def     src_ip     dst_ip
def1    1.1.1.1    2.2.2.2
def2    1.2.1.1    2.1.2.1
def3    1.2.3.4    2.4.6.8
def4    4.3.2.1    8.6.4.2

You want the additional fields associated with def to be shown. Then, you can do

index=myindex sourcetype=mytype abc=*
| append [| datamodel Tutorial Client_errors index ]
| stats values(abc) as abc values(src_ip) as src_ip values(dst_ip) as dst_ip by def

This way, you get

def     abc             src_ip     dst_ip
def1    aaa bbb ccc     1.1.1.1    2.2.2.2
def2    bbb ddd fff     1.2.1.1    2.1.2.1
def3    aaa             1.2.3.4    2.4.6.8

If your search and desired output are different, there are other ways to accomplish your goal, but you have to be specific.
I collect two logs with the Universal Forwarder. One log is collected well, but the other is not collected. Can you give me some advice on this?

The inputs.conf configuration file:

[monitor://D:\Log\State\...\*.Log]
disabled = false
index = cds_STW112
sourcetype = mujin_CDS_IPA_Log_State
ignoreOlderThan = 1h
>>>> Not collecting

[monitor://D:\Log\Communication\DeviceNet\Input\...\*Input*.Log]
disabled = false
index = cds_STW112
sourcetype = mujin_CDS_DNetLog_IN
ignoreOlderThan = 1h
>>>> Collecting
Hi yuanliu & everyone, the datamodel doesn't have the src and dest IP address, so I want to use the indexes returned from the datamodel and perform a further search in the main search.
I want to fill the table cells with tags, like a multiselect input. How can I make the table contents look like tags? And how can I do it without using HTML elements?
You need to clarify what "to get the index" means. Do you mean to restrict the main search's index to the values of index in the subsearch? If so, all you need to do is

[| datamodel Tutorial Client_errors index | stats values(index) as index] <rest of your filters>
Hello everyone, I would like to ask a question: is there any way for the main search to get the index returned from a subsearch, since the subsearch is executed first? The results in the datamodel may return different indexes. An example of mine:

index=[| datamodel Tutorial Client_errors index | return index]
Hi Team, I need to create 3 calculated fields:

| eval action=case(error="invalid credentials", "failure",
    ((like('request.path',"auth/ldap/login/%") OR like('request.path',"auth/ldapco/login/%")) AND (valid="Success"))
        OR (like('request.path',"auth/token/lookup-self") AND ('auth.display_name'="root")), "success")
| eval app=case(action="success" OR action="failure", "appname_Authentication")
| eval valid=if(error="invalid credentials","Error","Success")

The action field depends on valid, and the app field depends on action. I am unable to see the app field in Splunk - may I know how to create it?
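If these are plain search-time | eval statements, the order matters: each field has to be defined before another eval references it. Below is a sketch with the same field names, on the assumption that the case() conditions themselves are what you want. (If they are instead EVAL- calculated fields in props.conf, note that calculated fields are evaluated independently and generally cannot reference other calculated fields, so the chain would need to be written as in-search evals like this.)

| eval valid=if(error="invalid credentials","Error","Success")
| eval action=case(error="invalid credentials", "failure",
    ((like('request.path',"auth/ldap/login/%") OR like('request.path',"auth/ldapco/login/%")) AND valid="Success")
        OR (like('request.path',"auth/token/lookup-self") AND 'auth.display_name'="root"), "success")
| eval app=case(action="success" OR action="failure", "appname_Authentication")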
Hi, Is using status.hostIP not working for some reason?  I haven't tried it, but you might be able to just use spec.nodeName instead?
Hey, I discovered you can emulate the mvexpand command and avoid the limit configured in limits.conf. You just have to stats by the multivalue field you were trying to mvexpand, like so:

| stats values(*) AS * by <multivalue_field>

That's it - (edit:) assuming each value is unique, such as a unique identifier. You can make values unique using methods like foreach to prepend a row-based number to each value, then use split and mvindex to remove the row numbers afterwards; see the sketch after this post. (/Edit)

stats splits up <multivalue_field> into its individual values, and values(*) copies the other fields across all of the resulting rows.

As an added measure, make sure to drop unnecessary _raw data to reduce memory use, with an explicit fields command. In my experience the | fields _time, * trick does not actually remove every single internal Splunk field; removing _raw had to be explicit:

| fields _time, xxx, yyy, zzz, <multivalue_field>
| fields - _raw
| stats values(*) AS * by <multivalue_field>

The above strategy minimizes the data the search carries as much as possible before expanding the multivalue field.
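A minimal sketch of that "make the values unique" step, assuming the multivalue field is literally named multivalue_field; the "::" prefix separator and the ";;;" join separator are placeholders and must not occur in the real values.

``` give every event a row number, then prefix each value with "<row_num>::" by joining and re-splitting ```
| streamstats count AS row_num
| eval multivalue_field = split(row_num . "::" . mvjoin(multivalue_field, ";;;" . row_num . "::"), ";;;")
| fields - _raw
| stats values(*) AS * by multivalue_field
``` after the expand, strip the "<row_num>::" prefix back off ```
| eval multivalue_field = mvindex(split(multivalue_field, "::"), 1)
| fields - row_num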
collect can collect events in the future; the issue is how the collect command handles _time. It will NOT use the _time field as _time, and it behaves differently depending on whether it's run as a scheduled saved search or an ad hoc search. The docs on collect are really bad and buggy. Using addtime is also problematic.

We use this process via a macro when using collect and you need specific control over _time:

| eval _raw=printf("_time=%d", _time)
| foreach "*"
    [| eval _raw=_raw.case(
        isnull('<<FIELD>>'), "", ``` Ignore null fields ```
        mvcount('<<FIELD>>')>1, ", <<FIELD>>=\"".mvjoin('<<FIELD>>',"###")."\"", ``` Handle MV fields just in case ```
        ``` Concatenate the field with a quoted value and remove the original field ```
        !isnum('<<FIELD>>') AND match('<<FIELD>>', "[\[\]<>\(\){\}\|\!\;\,\'\"\*\n\r\s\t\&\?\+]"), ", <<FIELD>>=\"".replace('<<FIELD>>',"\"","\\\"")."\"",
        ``` If no breakers, then don't quote the field ```
        true(), ", <<FIELD>>=".'<<FIELD>>')
     | fields - "<<FIELD>>"
    ]
| fields _raw
| collect index=bla addtime=f testmode=f

It ignores null fields, it writes unquoted fields when they do not contain major breakers (which allows for more performant searching using TERM() techniques), and it joins multivalue fields together with ###.

You can also use this similar but slightly different approach:

| foreach *
    [ eval _raw = if(isnull('<<FIELD>>'), _raw, json_set(coalesce(_raw, json_object()), "<<FIELD>>", '<<FIELD>>')) ]
| table _time _raw

Or you can use output_mode=hec, which I believe will get time correct.
What is the output - do you want just 3 numbers (the 30-day, 90-day and 1-year average MTTR values), or are you looking for a timechart which shows lines with the 30- and 90-day rolling averages?

Averages are easy to calculate over multiple time windows because you can just collect counts and totals, so here's an example of calculating the 30-day average and the 90-day rolling average:

| makeresults count=730
| streamstats c
| eval _time=now() - (86400 * (floor(c/2)))
| eval mttr=random() % 100
| bin _time span=30d
| stats count sum(mttr) as sum_mttr avg(mttr) as mttr_avg_30 by _time
| streamstats window=3 sum(count) as count_90 sum(sum_mttr) as sum_90
| eval rolling_avg_90 = sum_90 / count_90
| eventstats sum(sum_mttr) as total_mttr sum(count) as total_count
| eval annual_avg = total_mttr / total_count
| fields - count_90 sum_90 count sum_mttr total_*

This example generates 2 events per day over a year and takes the 30-day average as well as the count and sum of mttr; it then uses streamstats to calculate the 90-day rolling average and finally eventstats to calculate the annual average.
There is no _time field after a tstats, so you either have to split by _time or add something like

| tstats max(_time) as _time...

but it depends on what you're trying to achieve as to what you need to do.

You can also use

| tstats latest_time(var1) as _time...

which will give you the latest _time at which the var1 variable was seen.
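For example, a sketch against a hypothetical index and sourcetype, just to show where the clause sits:

| tstats max(_time) as _time count where index=myindex sourcetype=mytype by host
| eval age=now() - _time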
This evening I decided to set up a test Splunk box in my lab to goof around with. It's been a while since I have done this part of the process (the work cluster has been up and going for years now). As I was looking at my local test box, I noticed the hard drive was likely not the best size to use. Since I have a syslog server running on this box as well, and I am pulling those files into Splunk (Splunk will not always be running, hence not sending data directly to Splunk), I wanted to try doing a line-level destructive read. I did see where others were using a monitor and deleting a file on ingestion, but did not see whether line-level deletion was being done. So, the question is: has anyone done that, and if so, do you have some hints or pointers? Thanks
Hi @OriP, please try something similar from this post: https://community.splunk.com/t5/Splunk-Search/How-do-you-use-the-streamstats-command-after-tstats-and-stats/m-p/388189
If the app is installed on a heavy forwarder then all parsing is done there, using the configurations in the app. There is little need for the app to also be on the indexers, unless you like to wear suspenders (braces) with your belt.

P.S. I challenge the notion that almost every app uses scripted inputs. Of the thousands of apps on Splunkbase, comparatively few use input scripts.