All Posts

It does. I understand at a high level what it's doing, though I will need to walk through the specifics; it does get me where I needed to be. Here is what I ended up with:

index=anIndex sourcetype=aSourceType aString1 earliest=-481m@m latest=-1m@m
| eval age=now() - _time
| eval age_ranges=split("1,6,11,31,61,91,121,241",",")
| foreach 0 1 2 3 4 5 6 7
    [ eval r=tonumber(mvindex(age_ranges, <<FIELD>>))*60,
      zone=if(age < 14400 + r AND age > r, <<FIELD>>, null()),
      aString1Count=mvappend(aString1Count, zone) ]
| stats count by aString1Count
| transpose 8 header_field=aString1Count
| rename 0 AS "string1Window1", 1 AS "string1Window2", 2 AS "string1Window3", 3 AS "string1Window4", 4 AS "string1Window5", 5 AS "string1Window6", 6 AS "string1Window7", 7 AS "string1Window8"
| appendcols
    [search index=anIndex sourcetype=aSourceType aString2 earliest=-481m@m latest=-1m@m
     | eval age=now() - _time
     | eval age_ranges=split("1,6,11,31,61,91,121,241",",")
     | foreach 0 1 2 3 4 5 6 7
         [ eval r=tonumber(mvindex(age_ranges, <<FIELD>>))*60,
           zone=if(age < 14400 + r AND age > r, <<FIELD>>, null()),
           aString2Count=mvappend(aString2Count, zone) ]
     | stats count by aString2Count
     | transpose 8 header_field=aString2Count
     | rename 0 AS "string2Window1", 1 AS "string2Window2", 2 AS "string2Window3", 3 AS "string2Window4", 4 AS "string2Window5", 5 AS "string2Window6", 6 AS "string2Window7", 7 AS "string2Window8" ]
| table string1Window* string2Window*

This returns one row of per-window counts (e.g. 44, 42, 40, ...).
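One possible simplification (just a sketch, assuming aString1 and aString2 are literal search terms that searchmatch() can test): search for both strings at once, label each event, and count by label and window, which avoids duplicating the whole pipeline inside appendcols:

index=anIndex sourcetype=aSourceType (aString1 OR aString2) earliest=-481m@m latest=-1m@m
| eval which=if(searchmatch("aString1"), "string1", "string2")
| eval age=now() - _time
| eval age_ranges=split("1,6,11,31,61,91,121,241",",")
| foreach 0 1 2 3 4 5 6 7
    [ eval r=tonumber(mvindex(age_ranges, <<FIELD>>))*60,
      zone=if(age < 14400 + r AND age > r, <<FIELD>>, null()),
      zones=mvappend(zones, zone) ]
| chart count over which by zones

The chart produces one row per string with one column per window band, so a final rename can map the numeric window columns to the stringNWindowN names if the widgets expect them.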
We are using a clustered indexer environment and want to use NAS as our cold storage. I mapped the NAS to a local folder on Linux so it is accessible to Splunk, and I can see the mapped folders on the local Linux device. But when I change the configuration on the cluster master to use this NAS for cold data and push the configuration, Splunk sometimes hangs and stops, and even after a restart it will not work. Has anyone tried NAS as cold storage? If you could share your fstab and indexes.conf, that would be great!
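For illustration, this is roughly what an NFS-backed cold path can look like; the hostname, export path, mount options, and index name below are all hypothetical, and Splunk generally expects the mount to behave like a local POSIX filesystem (hence a hard mount rather than a soft one):

# /etc/fstab - hypothetical NAS export for cold buckets
nas01:/export/splunk_cold  /mnt/splunk_cold  nfs  rw,hard,tcp,noatime  0 0

# indexes.conf (pushed from the cluster master) - hypothetical index
[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = /mnt/splunk_cold/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb

If the NAS stalls, Splunk can block on I/O to coldPath, which may look like the hang described above, so mount health is worth checking first.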
Hi Team, full logs are not loading in Splunk for a Windows server. Any suggestions on what to check?
Hello Splunk team, I was troubleshooting a query with the anomalydetection command (https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/SearchReference/Anomalydetection), and one thing came to my attention. While using action=filter, I'm still seeing events with probable_cause_freq=1.0000 and log_event_prob=0.000. Should that actually happen? Is log_event_prob=0.000 a threshold? It's not an issue for me to filter these out; I just wanted to double-check whether that is expected behaviour, as I couldn't find it in the documentation. Thanks!
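While waiting for confirmation, one hedged workaround (the index/sourcetype are placeholders and the 0.01 cutoff below is an arbitrary illustration, not a documented default) is to use action=annotate, which keeps the events but exposes the probability fields, and then apply your own threshold:

index=yourIndex sourcetype=yourSourcetype
| anomalydetection action=annotate
| where log_event_prob < 0.01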
The extraction gives you the values of the fields for each event, and each event has an _time field with the time of the event, so you have all the information you need to plot the values against time. Is it simply that you want to restrict the fields that are plotted?

| table _time target state cavity

The first field will be the x-axis on the chart (when you select a line chart as your visualisation); the other fields will be the series in the chart, each of which will be a line on the chart. What more do you need?
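If you also want a fixed time bucket rather than raw event times, a timechart variant is another option (a sketch, assuming the three fields are numeric):

| timechart span=1m latest(target) AS target latest(state) AS state latest(cavity) AS cavity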
Hi Splunkers, we have a requirement to monitor Windows event logs with SourceName MSSQL and send them to different sets of indexers. For the global indexers, the wineventlog inputs should match SourceName MSSQL only. For abc-region, the wineventlog inputs should match SourceName MSSQL and a ComputerName ending in the "abc.com" domain (e.g. XXXXX.abc.com, YYYY.abc.com). With this, is the configuration below correct? Looking forward to your insights.

########################################## inputs.conf ##########################################

[WinEventLog://Application]
index = mssql_idx
whitelist = SourceName=%MSSQL%
sourcetype = mssql:app
disabled = false
_TCP_ROUTING = idx-all-global
crcSalt = <SOURCE>

[WinEventLog://Application]
index = mssql_idx
whitelist = SourceName=%MSSQL% ComputerName=%abc.com%
sourcetype = mssql:app
disabled = false
_TCP_ROUTING = idx-abc-region
crcSalt = <SOURCE>

########################################## outputs.conf ##########################################

[indexAndForward]
index = false

[tcpout]
defaultGroup = idx-all-global, idx-abc-region

[tcpout:idx-all-global]
server = global-idx1:9997, global-idx2:9997

[tcpout:idx-abc-region]
server = abc-region-idx1:9997, abc-region-idx2:9997
There is probably something wrong with the way you have set up the search for the alert.
How can I split a single value into two separate values in a single value panel? Currently, my single value panel displays:

Total: 100 Error: 20

I would like to display:

Total: 100
Error: 20

It works for a table, but not for a single value panel. How can I achieve this?
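One way this is often handled (a sketch in Simple XML; the field names Total and Error and the base search are hypothetical) is a shared base search feeding two separate single value panels, one per value:

<dashboard>
  <label>Two single values</label>
  <search id="base">
    <query>index=my_index | stats count AS Total, count(eval(level="error")) AS Error</query>
  </search>
  <row>
    <panel>
      <title>Total</title>
      <single>
        <search base="base"><query>| fields Total</query></search>
      </single>
    </panel>
    <panel>
      <title>Error</title>
      <single>
        <search base="base"><query>| fields Error</query></search>
      </single>
    </panel>
  </row>
</dashboard>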
How do I convert a CSV lookup to a DBX lookup? The lookup using the CSV worked just fine. The CSV was moved to the database, and when I converted the lookup to dbxlookup, it didn't work. Please suggest. Thanks.

The following is only an example of the concept of what I am trying to do, not real data. I don't know how to simulate index vs dbxquery on test data.

index=vuln_index
| lookup host_ip.csv ip_address as ip OUTPUTNEW ip_address, hostname, os_type

| dbxlookup connection="test" query="select * from host_ip" ip_address as ip OUTPUTNEW ip_address, hostname, os_type

Data (CSV => DBX):

ip_address   hostname  ostype
192.168.1.1  host1     ostype1
192.168.1.2  host2     ostype2
192.168.1.3  host3     ostype3
192.168.1.4  host4     ostype4

index=vuln_index:

ip           vuln
192.168.1.1  vulnA
192.168.1.1  vulnB
192.168.1.2  vulnC
192.168.1.2  vulnD

Expected result:

ip_address   hostname  ostype   vuln
192.168.1.1  host1     ostype1  vulnA
192.168.1.1  host1     ostype1  vulnB
192.168.1.2  host2     ostype2  vulnC
192.168.1.2  host2     ostype2  vulnD
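If dbxlookup itself is the sticking point, a common alternative (a sketch reusing the connection name and table from your example; adjust the column names to whatever the database actually returns) is to pull the table with dbxquery in a subsearch and join on the renamed key:

index=vuln_index
| join type=left ip
    [| dbxquery connection="test" query="select ip_address, hostname, os_type from host_ip"
     | rename ip_address AS ip]
| table ip hostname os_type vuln

Note that join subsearches are subject to subsearch result limits, so for a large table a scheduled dbxquery saved to a CSV lookup with outputlookup may be safer.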
You can do either of these first to turn it into a multiseries chart:

| eval namespace=""
| xyseries namespace account_namespace count

OR

| transpose 0 header_field=account_namespace column_name=account_namespace
| eval account_namespace=""
Something like

index=your_index earliest=-1d@d latest=now
| eval day=if(_time>=relative_time(now(), "@d"), "today", "yesterday")
| eval fieldcount = 0
| foreach * [ eval fieldcount=fieldcount+1 ]
| stats count max(fieldcount) as fieldcount by day

will give you the event count and field count per day, but I am not totally sure the foreach will count fieldcount correctly, and whether this is suitable will very much depend on your data. This assumes you ingest the data both yesterday and today. But there are many open areas:

- what is the relevance of field order - there is no concept of field order in Splunk
- what if new rows are added or removed 'today' - what do you want to see?
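As an alternative sketch for the field count, fieldsummary emits one row per field present in the results, so running it once per day window gives a comparable number without the foreach:

index=your_index earliest=@d latest=now
| fieldsummary
| stats count AS fieldcount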
Hello everyone, I have a question about tags configuration in Eventgen. The basic structure is:

[your condition]
yourtag1=enabled
yourtag2=disabled

For example:

[action=created]
change=enabled

So, my questions are: if I want to tag an event with more than one condition, how can I do it? I tried the "AND" and "OR" operators but they do not work. And while "enabled" assigns the tag to an event, what does "disabled" do? Thank you for reading.
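For comparison, in standard Splunk tags.conf (which this structure resembles) a stanza key is a single field=value pair, and compound conditions are usually expressed by tagging an eventtype that encodes the condition. A hypothetical sketch:

# eventtypes.conf - define the compound condition
[created_by_admin]
search = action=created user=admin

# tags.conf - tag events matching that eventtype
[eventtype=created_by_admin]
change = enabled

In tags.conf, disabled switches a tag off for events matching that stanza, which is useful for overriding a tag applied by a broader match; whether Eventgen honours the same semantics is an assumption here.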
Let's say I have a database that is pulled from an application into Splunk on a daily basis and accessed via dbxquery. Sometimes there are changes in the data that might be caused by a system migration, including changes to the number of fields, the number of rows, the order of the fields, etc. How do I validate the data before and after the migration to make sure there are no discrepancies? I am thinking of creating a query to display the fields and the number of rows and compare them before and after. Please suggest. Thank you so much.
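One possible shape for that comparison (a sketch; the connection, table, and lookup names are hypothetical): snapshot the schema and row counts with dbxquery before the migration, save it as a lookup, repeat afterwards, and diff the two snapshots:

| dbxquery connection="my_conn" query="select * from my_table"
| fieldsummary
| table field count distinct_count
| outputlookup premigration_schema.csv

After the migration, write the same output to postmigration_schema.csv, then:

| set diff
    [| inputlookup premigration_schema.csv]
    [| inputlookup postmigration_schema.csv]

Any rows returned are fields whose presence, row count, or cardinality changed. Field order is not captured, since Splunk has no notion of field order.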
Hi, I want a bar graph with a different colour per bar for a better representation on my dashboard. I have a search like the one below:

type="request" "request.path"="prod"
| stats count by account_namespace
| sort - count
| head 10

I tried adding <option name="charting.seriesColors">[0x1e93c6, 0xf2b827, 0xd6563c, 0x6a5c9e, 0x31a35f, 0xed8440, 0x3863a0, 0xa2cc3e, 0xcc5068, 0x73427f]</option>, but I still get a single colour in my bar graph. I believe this is because my query produces only one series. Is there a way to make my bar graph contain multiple colours?
This is more of an annoying log message issue. The log messages are intended to be suppressed and can be ignored unless they affect Splunk's indexing or search performance. Fix versions: 9.1.3+, 9.2.0+.
Here's one way of doing it:

index=anIndex sourcetype=aSourceType aString earliest=-481m@m latest=-1m@m
| eval age=now() - _time
| eval age_ranges=split("1,6,11,31,61,91,121,241",",")
| foreach 0 1 2 3 4 5 6 7
    [ eval r=tonumber(mvindex(age_ranges, <<FIELD>>))*60,
      zone=if(age < 14400 + r AND age > r, <<FIELD>>, null()),
      z=mvappend(z, zone) ]
| stats count by z

What this is effectively doing is setting up the base ages, which are your right-hand minute values. Then, as each gap is 4 hours (14400 seconds), it uses a foreach loop to go round each of the 8 age bands and see whether the age falls in that band. The output is a multivalue field containing the bands the event is found in. Then stats count by z counts each band, so that should give you the counts in each of your bands. Does this give you what you want?
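As a worked example of the band arithmetic, take the band value 31 (foreach index 3): r = 31 * 60 = 1860 seconds, so the condition keeps events with 1860 < age < 1860 + 14400 = 16260 seconds, i.e. events between 31 and 271 minutes old - the same span as the -271m@m to -31m@m window in the case-statement version.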
So how are you expecting to correlate the 2 data sets? How do you find events with RELATED_VAL that are related to the row containing REFERENCE_VAL? I.e., if the data is

reference_val_1
related_val_1
reference_val_2
related_val_2
related_val_3
reference_val_3
related_val_4

how do you expect to correlate related_val_3 with any of the 3 reference vals? Is it simply time proximity, and if so, can you have interleaved reference_vals that may be in the same time window? Can you give an example of the data - otherwise the requirements are too vague.
I have made some progress and this is where I am at:

index=anIndex sourcetype=aSourceType aString earliest=-481m latest=-1m
| eval aWindow = case(
    (_time > relative_time(now(),"-241m@m") AND (_time < relative_time(now(),"-1m@m"))), 1,
    (_time > relative_time(now(),"-246m@m") AND (_time < relative_time(now(),"-6m@m"))), 2,
    (_time > relative_time(now(),"-251m@m") AND (_time < relative_time(now(),"-11m@m"))), 3,
    (_time > relative_time(now(),"-271m@m") AND (_time < relative_time(now(),"-31m@m"))), 4,
    (_time > relative_time(now(),"-301m@m") AND (_time < relative_time(now(),"-61m@m"))), 5,
    (_time > relative_time(now(),"-331m@m") AND (_time < relative_time(now(),"-91m@m"))), 6,
    (_time > relative_time(now(),"-361m@m") AND (_time < relative_time(now(),"-121m@m"))), 7,
    (_time > relative_time(now(),"-481m@m") AND (_time < relative_time(now(),"-241m@m"))), 8,
    true(), 9)
| stats count by aWindow

But I have realized that a case statement assigns each log event to only one window, whereas the windows overlap and one log event can exist in more than one window. I am working on a dashboard with 8 widgets that currently run the exact same query, just with a different window. So I am trying to make one base query that has all the data for the calculation, and then have the widgets mentioned above use $query.1$ to retrieve their results from the reusable base query. So, how do I handle counting in these overlapping windows?
One of my alerts is having an issue where the email link to the results does not work. I get a 404 page that says "Oops. Page not found!" I'm the admin, so I don't think it's a permissions issue. Other alerts from the same app are working fine. Any ideas?