All Posts


The extraction gives you the values for the fields for each event. Each event will have an _time field with the time of the event. You have all the information you need to plot the values against time. Is it simply that you want to restrict the fields that are plotted?

| table _time target state cavity

The first field will be the x-axis on the chart (when you select a line chart as your visualisation); the other fields will be the series in the chart, each of which will be a line on the chart. What more do you need?
Hi Splunkers,

We have a requirement to monitor wineventlogs with sourcename MSSQL, to be sent to different sets of indexers (IDX).

For the global IDX, the wineventlog inputs will be sourcename MSSQL only.
For abc-region, the wineventlog inputs will be sourcename MSSQL and ComputerName ending in the "abc.com" domain (e.g. XXXXX.abc.com, YYYY.abc.com).

With this, are the configurations below correct? Looking forward to your insights.

########################################## inputs.conf ##########################################
[WinEventLog://Application]
index=mssql_idx
whitelist= SourceName=%MSSQL%
sourcetype=mssql:app
disabled=false
_TCP_ROUTING=idx-all-global
crcSalt=<SOURCE>

[WinEventLog://Application]
index=mssql_idx
whitelist= SourceName=%MSSQL% ComputerName=%abc.com%
sourcetype=mssql:app
disabled=false
_TCP_ROUTING=idx-abc-region
crcSalt=<SOURCE>

########################################## outputs.conf ##########################################
[indexAndForward]
index=false

[tcpout]
defaultGroup= idx-all-global, idx-abc-region

[tcpout:idx-all-global]
server=global-idx1:9997, global-idx2:9997

[tcpout:idx-abc-region]
server= abc-region-idx1:9997, abc-region-idx2:9997
There is probably something wrong with the way you have set up the search for the alert.
How can I split a single value into two separate values in a single value panel?

Currently, my single value panel displays:
Total: 100 Error: 20

I would like to display:
Total: 100
Error: 20

It works for a table, but not for a single value panel. How can I achieve this?
How do I convert a CSV lookup to a DBX lookup? The lookup using the CSV worked just fine. The CSV was moved to the database, and when I converted the lookup to dbxlookup, it didn't work. Please suggest. Thanks.

The following is only an example of the concept I am trying to implement; it is not real data. I don't know how to simulate index vs dbxquery on test data.

index=vuln_index
| lookup host_ip.csv ip_address as ip OUTPUTNEW ip_address, hostname, os_type

| dbxlookup connection="test" query="select * from host_ip" ip_address as ip OUTPUTNEW ip_address, hostname, os_type

Data (CSV => DBX):
ip_address   hostname  ostype
192.168.1.1  host1     ostype1
192.168.1.2  host2     ostype2
192.168.1.3  host3     ostype3
192.168.1.4  host4     ostype4

index=vuln_index:
ip           vuln
192.168.1.1  vulnA
192.168.1.1  vulnB
192.168.1.2  vulnC
192.168.1.2  vulnD

Expected result:
ip_address   hostname  ostype   vuln
192.168.1.1  host1     ostype1  vulnA
192.168.1.1  host1     ostype1  vulnB
192.168.1.2  host2     ostype2  vulnC
192.168.1.2  host2     ostype2  vulnD
You can do either of these first to turn it into a multiseries chart:

| eval namespace=""
| xyseries namespace account_namespace count

OR

| transpose 0 header_field=account_namespace column_name=account_namespace
| eval account_namespace=""
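For intuition, the row-to-column pivot that xyseries performs can be sketched in plain Python; the field names and values here are made up for illustration:

```python
# Hypothetical stats output: one row per account_namespace.
rows = [
    {"account_namespace": "ns-a", "count": 10},
    {"account_namespace": "ns-b", "count": 25},
]

# xyseries-style pivot: each row becomes one column (series) in a
# single output row, so a chart can colour each series independently.
pivoted = {row["account_namespace"]: row["count"] for row in rows}
print(pivoted)  # {'ns-a': 10, 'ns-b': 25}
```

Once each value is its own series rather than a row in a single series, the chart's per-series colouring applies.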
Something like

index=your_index earliest=-1d@d latest=now
| eval day=if(_time>=relative_time(now(), "@d"), "today", "yesterday")
| eval fieldcount = 0
| foreach * [ eval fieldcount=fieldcount+1 ]
| stats count max(fieldcount) as fieldcount by day

will give you the event count and field count per day, but I'm not totally sure the foreach will count fieldcount correctly, and whether this is suitable will very much depend on your data. This assumes you ingested the data both yesterday and today. But there are many open areas:
- What is the relevance of field order? There is no concept of field order in Splunk.
- What if new rows are added or removed 'today'? What do you want to see?
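The intent of the counting above can be illustrated outside Splunk. This is a minimal Python sketch with made-up events, not a simulation of how Splunk extracts fields:

```python
# Hypothetical events: each dict holds one event's extracted fields.
events = [
    {"_time": "2024-01-02", "host": "a", "status": "ok"},
    {"_time": "2024-01-01", "host": "b", "status": "ok", "extra": "1"},
]

# Mirror `stats count max(fieldcount) by day`:
# per day, count events and track the largest field count seen.
summary = {}
for ev in events:
    day = ev["_time"]
    cnt, maxf = summary.get(day, (0, 0))
    summary[day] = (cnt + 1, max(maxf, len(ev)))

print(summary)  # {'2024-01-02': (1, 3), '2024-01-01': (1, 4)}
```

Comparing the two summaries (yesterday vs today) flags changes in row count or field count, but as noted above it says nothing about field order.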
Hello everyone,

I have a question about tags configuration in Eventgen. The basic structure is:

[your condition]
yourtag1=enabled
yourtag2=disabled

For example:

[action=created]
change=enabled

So, the question is: if I want to tag an event with more than one condition, how can I do it? I tried the "AND" and "OR" operators, but they do not work. Also, "enabled" assigns the tag to an event; what does "disabled" do?

Thank you for reading.
Let's say I have a database that is pulled from an application into Splunk on a daily basis and accessed via dbxquery. Sometimes there are changes in the data that might be caused by a system migration, including changes to the number of fields, the number of rows, the order of the fields, etc. How do I validate the data before and after the migration to make sure there are no discrepancies? I am thinking of creating a query to display the fields and number of rows and comparing them before and after. Please suggest. Thank you so much.
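The before/after comparison described above can be sketched in Python. The snapshots and field names here are hypothetical; in practice each snapshot would come from a dbxquery result:

```python
# Two hypothetical snapshots of the same table, before and after migration.
before = [
    {"id": "1", "name": "a"},
    {"id": "2", "name": "b"},
]
after = [
    {"id": "1", "name": "a"},
    {"id": "2", "name": "b"},
]

def summarize(rows):
    """Return the row count and the sorted union of field names."""
    fields = set()
    for row in rows:
        fields.update(row)
    return len(rows), sorted(fields)

# Matching summaries suggest no structural discrepancy
# (same row count, same set of fields).
print(summarize(before) == summarize(after))  # True
```

This only checks structure (counts and field names); comparing actual values would need a row-by-row diff keyed on a stable identifier.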
Hi, I want a bar graph with different colours for a better representation in my dashboard. I have a search like the one below:

type="request" "request.path"="prod"
| stats count by account_namespace
| sort - count
| head 10

I tried adding

<option name="charting.seriesColors">[0x1e93c6, 0xf2b827, 0xd6563c, 0x6a5c9e, 0x31a35f, 0xed8440, 0x3863a0, 0xa2cc3e, 0xcc5068, 0x73427f]</option>

but I still get a single colour in my bar graph. I believe this is because my query produces only one series, hence the single-colour output. Is there a way to make my bar graph contain multiple colours?
This is more of an annoying log message issue. The log messages are intended to be suppressed and can be ignored unless they affect Splunk performance in indexing or searching. Fix versions: 9.1.3+, 9.2.0+.
Here's one way of doing it:

index=anIndex sourcetype=aSourceType aString earliest=-481m@m latest=-1m@m
| eval age=now() - _time
| eval age_ranges=split("1,6,11,31,61,91,121,241",",")
| foreach 0 1 2 3 4 5 6 7
    [ eval r=tonumber(mvindex(age_ranges, <<FIELD>>))*60,
      zone=if(age < 14400 + r AND age > r, <<FIELD>>, null()),
      z=mvappend(z, zone) ]
| stats count by z

What this is effectively doing is setting up the base ages, which are your right-hand minute values. Then, since each window is 4 hours (14400 seconds) wide, it uses a foreach loop to go round each of the 8 age bands and test whether the event's age falls in that band. The output is a multivalue field z that contains every band the event falls in, so stats count by z counts the events in each band. That should give you the counts in each of your bands. Does this give you what you want?
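The same overlapping-band logic can be sketched in Python, which may make it easier to see why one event can land in several bands. The ages used here are made up for illustration:

```python
# Right-hand edges of the 8 windows, in minutes (from the question);
# each window spans 4 hours (14400 s) back from edge to edge + 4h.
edges_min = [1, 6, 11, 31, 61, 91, 121, 241]

def bands_for(age_s):
    """Return every band index whose 4-hour window contains this age."""
    hits = []
    for i, edge in enumerate(edges_min):
        r = edge * 60
        if r < age_s < r + 14400:
            hits.append(i)
    return hits

# An event 2 hours (7200 s) old is older than the 1..91-minute edges
# but not the 121-minute edge, so it falls in the first six windows.
print(bands_for(7200))  # [0, 1, 2, 3, 4, 5]
```

Because membership is computed per band rather than with a single case() expression, overlapping windows are handled naturally.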
So how are you expecting to correlate the 2 data sets? How do you find events with RELATED_VAL that are related to the row containing REFERENCE_VAL? I.e., if the data is

reference_val_1
related_val_1
reference_val_2
related_val_2
related_val_3
reference_val_3
related_val_4

how do you expect to correlate related_val_3 with any of the 3 reference vals? Is it simply time proximity, and if so, can you have interleaved reference_vals that may be in the same time window? Can you give an example of the data? Otherwise the requirements are too vague.
I have made some progress, and this is where I am at:

index=anIndex sourcetype=aSourceType aString earliest=-481m latest=-1m
| eval aWindow = case (
    (_time > relative_time(now(),"-241m@m") AND (_time < relative_time(now(),"-1m@m"))),1,
    (_time > relative_time(now(),"-246m@m") AND (_time < relative_time(now(),"-6m@m"))),2,
    (_time > relative_time(now(),"-251m@m") AND (_time < relative_time(now(),"-11m@m"))),3,
    (_time > relative_time(now(),"-271m@m") AND (_time < relative_time(now(),"-31m@m"))),4,
    (_time > relative_time(now(),"-301m@m") AND (_time < relative_time(now(),"-61m@m"))),5,
    (_time > relative_time(now(),"-331m@m") AND (_time < relative_time(now(),"-91m@m"))),6,
    (_time > relative_time(now(),"-361m@m") AND (_time < relative_time(now(),"-121m@m"))),7,
    (_time > relative_time(now(),"-481m@m") AND (_time < relative_time(now(),"-241m@m"))),8,
    true(),9)
| stats count by aWindow

But I have realized that a case statement assigns each log event to only one window, whereas the windows overlap and one log event can exist in more than one window. I am working on a dashboard with 8 widgets that currently run the exact same query, just with a different window each. So I am trying to make one query that has all the data for the calculation, and then have the widgets use $query.1$ to retrieve their result from that reusable base query. So, how do I handle counting in these overlapping windows?
One of my alerts is having an issue with the email link to the results not working. I get a 404 that says Oops. Page not found! I'm the admin, so I don't think it's a permissions issue. Other alerts from the same app are working fine. Any ideas?
The basic search for doing this is:

index...
| eval isInWindow = if (_time > relative_time(now(),"-241m@m") AND _time < relative_time(now(),"-1m@m"),1,0)
| stats sum(isInWindow) as A

which sets isInWindow to 1 or 0 depending on whether the event is in or out of the window, then just sums the field. As for calculating sliding windows, streamstats is one way to do that, but you could also just do the maths to set various counters using the same relative_time logic and then sum those counters. There are other ways, but it depends on what you want to do with the result.
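The indicator-and-sum pattern above is easy to check outside Splunk. A quick Python sketch with illustrative timestamps:

```python
import time

now = time.time()
# Illustrative events: 5 minutes, ~83 minutes and ~5.5 hours old.
event_times = [now - 300, now - 5000, now - 20000]

lo = now - 241 * 60  # window start, -241m
hi = now - 1 * 60    # window end, -1m

# isInWindow is 1 when the event falls inside the window, else 0;
# summing the flags counts events in the window.
flags = [1 if lo < t < hi else 0 for t in event_times]
print(sum(flags))  # 2
```

The 5-minute and 83-minute events are inside the 4-hour window; the 5.5-hour one is not, so the sum of the flags is the in-window event count.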
I would consult the Nessus forums. 
I think what I am trying to do is relatively easy? I want to query looking back 8 hours, then count the number of events that are in a specific 4-hour window.

index=anIndex sourcetype=aSourceType aString earliest=-481m latest=-1m
| eval aTime2 = _time
| eval A = if (aTime2 > relative_time(now(),"-241m@m") AND aTime2 < relative_time(now(),"-1m@m"),(A+1),A)
| table A, aTime2

I would also want a count for the next sliding 4-hour window (-300m to -60m); there are a few more, but I'm just trying to figure out the first one for now. I was expecting my variable "A" to show how many of my matched events occur within the first 4-hour period, but it's empty? Am I going about this incorrectly by not seeding "A" with a 0 start value? What am I missing?
I had a quick question about the resources on my indexer. I have a dev environment with a forwarder, an indexer, and a search head. On all of the servers, I have an IO wait error. Investigating, I could turn that alert off, or I could look at the actual resources available on the machine. Looking through it, it looks as if I may need more resources: it appears I only have 2 cores and about 7 GB of RAM.

Minimum specs recommended by Splunk are:
- An x86 64-bit chip architecture.
- 12 physical CPU cores, or 24 vCPU at 2 GHz or greater per core.
- 12 GB RAM.

Would this explain these errors?

System iowait reached red threshold of 3
Maximum per-cpu iowait reached red threshold of 10
Sum of 3 highest per-cpu iowaits reached red threshold of 15

Before I started trying to redo our dev env from the ground up, we were receiving these errors, and they haven't gone away.

Thanks for any help.
Hi All, I'm working on a project to create some dashboards that display a lot of information, and one of the questions I'm facing is how to know whether Nessus scans are credentialed. I looked at some events, and they indicate the check type: local. Does this mean the scan is credentialed? I also tried to look into the events to see if there is anything that indicates the scan is authenticated. Thanks in advance for any information that may help.