All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Did the scheduled search run successfully? Can you rerun the search with the relevant timeframe and simply add the missing events to the summary index? Note: I did a BSides talk on Summary Indexing Idempotency which you might find useful - it is available on YouTube here
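For the backfill itself, a minimal sketch of what rerunning over the missing window might look like — the index, sourcetype, and field names here are placeholders, not from the original post:

```
index=vpn sourcetype=vpn:logs action=logout earliest=-7d@d latest=-6d@d
| stats count by user, src_ip
| collect index=my_summary_index addtime=true
```

Run the original scheduled search body with earliest/latest pinned to the missing 24-hour window, then pipe to collect; addtime=true keeps the summarized events at their original times rather than the time of the backfill run.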
Assuming msgType extracts values such as SAP and TMD, try something like this:

| rex field=message "MessageTemplate\\\":\\\"(?<msgType>[^\\\"]+)"
| spath
| search SAP OR TMD
| timechart count by msgType
Hi @frobinson_splun, I have the same requirement to provide a link to one of the tabs in a dashboard within an alert email. The tab name is dynamic. I noticed that the mentioned documentation links have expired. Could you please provide me with the latest documentation links? Thanks in advance.
Hi, I use collect to create a summary of VPN login and logout events. This worked fine, but last week 24 hours of logout events went missing, while the summary of login events was created. I checked the search without the collect command and it gives the correct output. I tried it with a test index and it worked too. But when I run the search for the missing timeframe, nothing appears in the destination index. Do you have any advice on what else I could check? Thanks
Hi, I am trying to get a visualization where, when I select multiple categories, I see multiple lines in the chart, and when a single value is selected, I get a single line. But I am unable to create the chart when passing multiple values in categories. Please help.

| rex field=message "MessageTemplate\\\":\\\"(?<msgType>[^\\\"]+)" | spath | search SAP OR TMD | timechart count by SAP or TMD

I am expecting a result like below, but this query gives an error.

| rex field=message "MessageTemplate\\\":\\\"(?<msgType>[^\\\"]+)" | spath | search SAP | timechart count

This is coming fine, as I have timechart given by count. Thanks in advance, Ashima
Hi, I have a cluster (3 indexers) with data and I want to copy one index, "logs_Test", to a single-instance install for testing. Can I copy it from the back end on all 3 and bring them together? I feel this won't work. Can I export it from the search head to a new index and then move that? Any ideas would be great. Thanks in advance, Robbie
Hi @ajitshukla61116  My requirement is similar to yours: I need to include a link to a dashboard within an alert email. Have you made any progress on this use case? If so, your insights would be really helpful. Thanks in advance.  
It does. I understand at a high level what it's doing, but I will need to walk through the specifics, although it does get me where I needed to be. Here is what I ended up with:

index=anIndex sourcetype=aSourceType aString1 earliest=-481m@m latest=-1m@m
| eval age=now() - _time
| eval age_ranges=split("1,6,11,31,61,91,121,241",",")
| foreach 0 1 2 3 4 5 6 7
    [ eval r=tonumber(mvindex(age_ranges, <<FIELD>>))*60,
      zone=if(age < 14400 + r AND age > r, <<FIELD>>, null()),
      aString1Count=mvappend(aString1Count, zone) ]
| stats count by aString1Count
| transpose 8 header_field=aString1Count
| rename 0 AS "string1Window1", 1 AS "string1Window2", 2 AS "string1Window3", 3 AS "string1Window4", 4 AS "string1Window5", 5 AS "string1Window6", 6 AS "string1Window7", 7 AS "string1Window8"
| appendcols
    [ search index=anIndex sourcetype=aSourceType aString2 earliest=-481m@m latest=-1m@m
      | eval age=now() - _time
      | eval age_ranges=split("1,6,11,31,61,91,121,241",",")
      | foreach 0 1 2 3 4 5 6 7
          [ eval r=tonumber(mvindex(age_ranges, <<FIELD>>))*60,
            zone=if(age < 14400 + r AND age > r, <<FIELD>>, null()),
            aString2Count=mvappend(aString2Count, zone) ]
      | stats count by aString2Count
      | transpose 8 header_field=aString2Count
      | rename 0 AS "string2Window1", 1 AS "string2Window2", 2 AS "string2Window3", 3 AS "string2Window4", 4 AS "string2Window5", 5 AS "string2Window6", 6 AS "string2Window7", 7 AS "string2Window8" ]
| table string1Window* string2Window*

Sample output:

string1Window1  string1Window2  string1Window3  ...
44              42              40              ...
We are using a clustered indexer environment and want to use NAS as our cold storage. I mapped the NAS to a local folder on Linux so it is accessible by Splunk, and I can see the mapped folders on the local Linux device. But when I change the configuration on the cluster master to use this NAS for cold data and push the configuration, Splunk sometimes hangs and stops, and even if I restart it, it does not work. Has anyone tried NAS as cold storage? If you can share your fstab and indexes.conf, that would be great!
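For comparison, a minimal sketch of one way this is commonly set up — the NFS export path, mount point, mount options, and index name below are assumptions for illustration, not known-good values for your environment:

```
# /etc/fstab - mount the NAS export where Splunk can reach it
nas-server:/export/splunk_cold  /mnt/splunk_cold  nfs  rw,hard  0 0

# indexes.conf - point only coldPath at the NAS mount
[logs_test]
homePath   = $SPLUNK_DB/logs_test/db
coldPath   = /mnt/splunk_cold/logs_test/colddb
thawedPath = $SPLUNK_DB/logs_test/thaweddb
```

The splunk user needs read/write permission on the mount, and hangs on NFS are often caused by mount options or by the NAS becoming unreachable, so that is worth checking alongside the Splunk configuration.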
Hi Team, full logs are not loading in Splunk for a Windows server. Any suggestions on what to check?
Hello Splunk team, I was troubleshooting a query with the anomalydetection command (https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/SearchReference/Anomalydetection), and one thing came to my attention. While using action=filter, I'm still seeing events with probable_cause_freq=1.0000 and log_event_prob=0.000. Should that actually happen? Is log_event_prob=0.000 a threshold? It's not an issue for me to filter it, I just wanted to double check whether that is expected behaviour, as I couldn't find it in the documentation. Thanks!
The extraction gives you the values for the fields for each event. Each event will have an _time field with the time of the event. You have all the information you need to plot the values against time. Is it simply that you want to restrict the fields that are plotted?

| table _time target state cavity

The first field will be the x-axis on the chart (when you select a line chart as your visualisation); the other fields will be the series in the chart, each of which will be a line on the chart. What more do you need?
Hi Splunkers, we have a requirement to monitor wineventlogs with sourcename MSSQL, sent to different sets of IDX.

For the global IDX, the wineventlog inputs will be sourcename MSSQL only.
For abc-region, the wineventlog inputs will be sourcename MSSQL and ComputerName ending in the "abc.com" domain (e.g. XXXXX.abc.com, YYYY.abc.com).

With this, are the configurations below correct? Looking forward to your insights.

inputs.conf:

[WinEventLog://Application]
index = mssql_idx
whitelist = SourceName=%MSSQL%
sourcetype = mssql:app
disabled = false
_TCP_ROUTING = idx-all-global
crcSalt = <SOURCE>

[WinEventLog://Application]
index = mssql_idx
whitelist = SourceName=%MSSQL% ComputerName=%abc.com%
sourcetype = mssql:app
disabled = false
_TCP_ROUTING = idx-abc-region
crcSalt = <SOURCE>

outputs.conf:

[indexAndForward]
index = false

[tcpout]
defaultGroup = idx-all-global, idx-abc-region

[tcpout:idx-all-global]
server = global-idx1:9997, global-idx2:9997

[tcpout:idx-abc-region]
server = abc-region-idx1:9997, abc-region-idx2:9997
There is probably something wrong with the way you have set up the search for the alert.
How can I split a single value into two separate values in a single value panel? Currently, my single value panel displays:

Total: 100 Error: 20

I would like to display:

Total: 100
Error: 20

It works for a table, but not for a single value panel. How can I achieve this?
How do I convert a CSV lookup to a DBX lookup? The lookup using the CSV worked just fine. The CSV was moved to the database, and when I converted the lookup to dbxlookup, it didn't work. Please suggest. Thanks.

The following is only an example of the concept of what I am trying to do, not real data. I don't know how to simulate index vs dbxquery on test data.

CSV lookup (works):

index=vuln_index | lookup host_ip.csv ip_address as ip OUTPUTNEW ip_address, hostname, os_type

DBX lookup (doesn't work):

index=vuln_index | dbxlookup connection="test" query="select * from host_ip" ip_address as ip OUTPUTNEW ip_address, hostname, os_type

Data (CSV => DBX):

ip_address   hostname  ostype
192.168.1.1  host1     ostype1
192.168.1.2  host2     ostype2
192.168.1.3  host3     ostype3
192.168.1.4  host4     ostype4

index=vuln_index:

ip           vuln
192.168.1.1  vulnA
192.168.1.1  vulnB
192.168.1.2  vulnC
192.168.1.2  vulnD

Expected result:

ip_address   hostname  ostype   vuln
192.168.1.1  host1     ostype1  vulnA
192.168.1.1  host1     ostype1  vulnB
192.168.1.2  host2     ostype2  vulnC
192.168.1.2  host2     ostype2  vulnD
You can do either of these first to turn it into a multiseries chart:

| eval namespace=""
| xyseries namespace account_namespace count

OR

| transpose 0 header_field=account_namespace column_name=account_namespace
| eval account_namespace=""
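As a self-contained illustration of the xyseries approach, using invented data (the field names account_namespace and count are assumed to match the original search):

```
| makeresults count=3
| streamstats count as n
| eval account_namespace="ns".n, count=n*10
| table account_namespace count
| eval namespace=""
| xyseries namespace account_namespace count
```

This turns a two-column (account_namespace, count) result into a single row with one column per namespace value, which chart visualizations then render as one series per column, i.e. a multiseries chart.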
Something like

index=your_index earliest=-1d@d latest=now
| eval day=if(_time>=relative_time(now(), "@d"), "today", "yesterday")
| eval fieldcount = 0
| foreach * [ eval fieldcount=fieldcount+1 ]
| stats count max(fieldcount) as fieldcount by day

will give you event count and field count per day, but I'm not totally sure the foreach will count fieldcount correctly, and whether this is suitable will very much depend on your data. This assumes you ingest the data both yesterday and today. But there are many open areas:
- What's the relevance of field order? There's no concept of field order in Splunk.
- What if new rows are added or removed 'today'? What do you want to see?
Hello everyone, I have a question about tags configuration in Eventgen. The basic structure is:

[your condition]
yourtag1 = enabled
yourtag2 = disabled

For example:

[action=created]
change = enabled

So, the question is: if I want to tag an event with more than one condition, how can I do it? I tried the "AND" and "OR" operators but they do not work. Also, "enabled" assigns the tag to an event; what does "disabled" do? Thank you for reading.