Let's say I have a database that is pulled from an application into Splunk on a daily basis and accessed via dbxquery. Sometimes there are changes in the data that might be caused by a system migration, including the number of fields, the number of rows, the order of the fields, etc. How do I validate the data before and after the migration to make sure there are no discrepancies? I am thinking of creating a query to display the fields and number of rows and comparing them before and after. Please suggest. Thank you so much.
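One sketch of that before/after comparison idea, assuming a hypothetical DB Connect connection name `my_db` and table `my_table` (swap in your own): run this before the migration to save a per-field summary, then repeat after the migration with a different lookup name and compare the two.

```
| dbxquery connection="my_db" query="SELECT * FROM my_table"
| fieldsummary
| fields field count distinct_count
| outputlookup migration_baseline_pre.csv
```

fieldsummary gives you the field list plus event and distinct-value counts per field, so differences in field names, field order, or row counts show up when you diff the pre- and post-migration CSVs (e.g. with inputlookup, or just by eye).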
Hi, I wanted to have a bar graph with different colours for better representation in my dashboard. I have a search like the one below:

type="request" "request.path"="prod" | stats count by account_namespace | sort - count | head 10

I tried adding "<option name="charting.seriesColors">[0x1e93c6, 0xf2b827, 0xd6563c, 0x6a5c9e, 0x31a35f, 0xed8440, 0x3863a0, 0xa2cc3e, 0xcc5068, 0x73427f]</option>" but I still get a single colour in my bar graph. I believe it is because my query produces only one series, hence the single-colour output. Is there a way for me to have my bar graph contain multiple colours?
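One common workaround (a sketch, not guaranteed for every data shape) is to turn each row into its own series, since seriesColors applies one colour per series, not per bar within a series. transpose does that by making each account_namespace value a separate column:

```
type="request" "request.path"="prod"
| stats count by account_namespace
| sort - count
| head 10
| transpose header_field=account_namespace
```

With each namespace now its own series, the charting.seriesColors list should assign a distinct colour to each bar.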
This is more of an annoying log-message issue. The log messages are intended to be suppressed and can be ignored unless they affect Splunk performance in indexing or searching. Fixed versions: 9.1.3+, 9.2.0+.
Here's one way of doing it:

index=anIndex sourcetype=aSourceType aString earliest=-481m@m latest=-1m@m
| eval age=now() - _time
| eval age_ranges=split("1,6,11,31,61,91,121,241",",")
| foreach 0 1 2 3 4 5 6 7
    [ eval r=tonumber(mvindex(age_ranges, <<FIELD>>))*60, zone=if(age < 14400 + r AND age > r, <<FIELD>>, null()), z=mvappend(z, zone) ]
| stats count by z

What this is effectively doing is setting up the base ages, which are your right-hand minute values. Then, as each gap is 4 hours (14400 seconds), it uses a foreach loop to go round each of the 8 age bands and see if the age is in that band. The output is a multivalue field that contains the bands the event is found in. Then stats count by z counts each band, so that should give you the counts in each of your bands. Does this give you what you want?
So how are you expecting to correlate the 2 data sets? How do you find events with RELATED_VAL that are related to the row containing REFERENCE_VAL? I.e., if the data is

reference_val_1
related_val_1
reference_val_2
related_val_2
related_val_3
reference_val_3
related_val_4

how do you expect to correlate related_val_3 with any of the 3 reference vals? Is it simply time proximity, and if so, can you have interleaved reference_vals that may be in the same time window? Can you give an example of data - otherwise the requirements are too vague.
I have made some progress and this is where I am at:

index=anIndex sourcetype=aSourceType aString earliest=-481m latest=-1m
| eval aWindow = case (
    (_time > relative_time(now(),"-241m@m") AND _time < relative_time(now(),"-1m@m")), 1,
    (_time > relative_time(now(),"-246m@m") AND _time < relative_time(now(),"-6m@m")), 2,
    (_time > relative_time(now(),"-251m@m") AND _time < relative_time(now(),"-11m@m")), 3,
    (_time > relative_time(now(),"-271m@m") AND _time < relative_time(now(),"-31m@m")), 4,
    (_time > relative_time(now(),"-301m@m") AND _time < relative_time(now(),"-61m@m")), 5,
    (_time > relative_time(now(),"-331m@m") AND _time < relative_time(now(),"-91m@m")), 6,
    (_time > relative_time(now(),"-361m@m") AND _time < relative_time(now(),"-121m@m")), 7,
    (_time > relative_time(now(),"-481m@m") AND _time < relative_time(now(),"-241m@m")), 8,
    true(), 9)
| stats count by aWindow

But I have realized that a case statement puts each log event in only one window, whereas my windows overlap and one log event can exist in more than one window. I am working on a dashboard with 8 widgets that currently run the exact same query, just with a different window. So I am trying to make one base query that has all the data for the calculation, then have each widget use $query.1$ to retrieve its result from the reusable base query. So, how do I handle counting in these overlapping windows?
One of my alerts is having an issue with the email link to the results not working. I get a 404 that says Oops. Page not found! I'm the admin, so I don't think it's a permissions issue. Other alerts from the same app are working fine. Any ideas?
The basic search for doing this is:

index...
| eval isInWindow = if(_time > relative_time(now(),"-241m@m") AND _time < relative_time(now(),"-1m@m"), 1, 0)
| stats sum(isInWindow) as A

which sets isInWindow to 1 or 0 depending on whether the event is in or out of the window, then just sums the field. As for calculating sliding windows, streamstats is one way to do that, but you could also just do the maths to set various counters using the same relative_time logic and then sum those counters. There are other ways, but it depends on what you want to do with the result.
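A sketch of that multiple-counter approach for the overlapping windows (field names here are illustrative, and only the first two windows are shown):

```
index=anIndex sourcetype=aSourceType aString earliest=-481m@m latest=-1m@m
| eval w1=if(_time > relative_time(now(),"-241m@m") AND _time < relative_time(now(),"-1m@m"), 1, 0)
| eval w2=if(_time > relative_time(now(),"-246m@m") AND _time < relative_time(now(),"-6m@m"), 1, 0)
| stats sum(w1) as window1 sum(w2) as window2
```

Because each window gets its own 0/1 flag, an event that falls into several overlapping windows is counted once in each, which a single case() cannot do.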
I would consult the Nessus forums. 
I think what I am trying to do is relatively easy? I want to query looking back 8 hours, then count the number of events that are in a specific 4-hour window.

index=anIndex sourcetype=aSourceType aString earliest=-481m latest=-1m
| eval aTime2 = _time
| eval A = if(aTime2 > relative_time(now(),"-241m@m") AND aTime2 < relative_time(now(),"-1m@m"), A+1, A)
| table A, aTime2

I would also want a count for the next sliding 4-hour window (-300m to -60m); there are a few more, but I am just trying to figure out the first one for now. I was expecting my variable "A" to show how many of my matched events occur within the first 4-hour period, but it's empty. Am I going about this incorrectly by not seeding "A" with a 0 start value? What am I missing?
I had a quick question about the resources on my indexer. I have a dev environment with a forwarder, indexer, and SH. On all of the servers, I have an IO wait error. Investigating, I could turn that alert off, or I could look at the actual resources available on the machine. Looking through it, it looks as if I may need more resources: it looks like I only have 2 cores and about 7 GB of RAM.

Minimum specs recommended by Splunk are:
An x86 64-bit chip architecture.
12 physical CPU cores, or 24 vCPU at 2 GHz or greater per core.
12 GB RAM.

Would this explain these errors?

System iowait reached red threshold of 3
Maximum per-cpu iowait reached red threshold of 10
Sum of 3 highest per-cpu iowaits reached red threshold of 15

Before I started trying to redo our dev environment from the ground up, we were receiving these errors, and they haven't gone away.

Thanks for any help
Hi All, I'm working on a project to create some dashboards that display a lot of information, and one of the questions I'm facing is how to know whether Nessus scans are credentialed. I looked at some events, and one indicates the check type: local. Does this mean the scan is credentialed? I also tried to look into the events to see if there is anything that indicates the scan is authenticated. Thanks in advance for any information that may help.
HFs process data transparently so there's no way to track the flow of events.  Many customers work around that by having the HF add a field to every event where the value of the field is the HF's name.
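One way to sketch that workaround, assuming Splunk 7.2+ (INGEST_EVAL) and stanza/field names of my own choosing, is an ingest-time eval on the heavy forwarder that stamps every event with an indexed field:

```
# props.conf on the heavy forwarder
[default]
TRANSFORMS-tag_hf = add_hf_name

# transforms.conf on the heavy forwarder
[add_hf_name]
INGEST_EVAL = hf_name="hf-01"
```

Downstream you can then search by `hf_name::hf-01` (or expose it via fields.conf) to see which HF an event passed through. The field value is hard-coded per HF here; other deployments template it into the app they push to each forwarder.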
Are you using the F5 BIG-IP platform? If so, the Splunk Add-on for F5 BIG-IP seems like the right direction. https://splunkbase.splunk.com/app/2680 Documentation, including installation and data fo... See more...
Are you using the F5 BIG-IP platform? If so, the Splunk Add-on for F5 BIG-IP seems like the right direction. https://splunkbase.splunk.com/app/2680 Documentation, including installation and data forwarding instructions, can be found here: https://docs.splunk.com/Documentation/AddOns/released/F5BIGIP/About
Does it appear when you change the search results to the "visualization" tab,  then switch the visualization to "Line Chart"?   Alternatively could you try: <your search that extracts the fields> | timechart mode(target) as target mode(state) as state mode(cavity) as cavity
There is a minimum basic instance specification for a production-grade Splunk Enterprise deployment on this page: https://docs.splunk.com/Documentation/Splunk/9.2.1/Capacity/Referencehardware

E.g.:
An x86 64-bit chip architecture.
12 physical CPU cores, or 24 vCPU at 2 GHz or greater speed per core.
12 GB RAM.
A 1 Gb Ethernet NIC, optional second NIC for a management network.
A 64-bit Linux or Windows distribution. See Supported Operating Systems in the Installation Manual.

If you are just doing testing and can tolerate lower performance, you can use lower specs. For estimating storage requirements, it would depend on how many days of retention you keep for your <100 MB/day and how compressible your log data is. You could throw a couple of tens of gigabytes into it and see how the disk space taken by the data grows.
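As a rough worked example of that storage estimate (the ~50% on-disk footprint for compressed raw data plus index files is only an assumption; your ratio depends on the data):

```
| makeresults
| eval daily_mb=100, retention_days=90, disk_ratio=0.5
| eval est_storage_gb=round(daily_mb * retention_days * disk_ratio / 1024, 1)
```

For 100 MB/day over 90 days that works out to roughly 4-5 GB, which is why a lab at this volume fits comfortably in a few tens of gigabytes of disk.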
I have accomplished the rex using the field extractor, but as for plotting the values this is not of much help; I'd like to plot the values found, with the associated timestamp of each event, in a line chart.
Hello, hope this message finds you all well. I have moved into the role of Splunk admin recently and I need to install the Splunk Enterprise package (single instance) for lab purposes. Further, Splunk Enterprise Security and the Splunk SOAR app will be installed on the same server as well. The lab is just for demo and some R&D purposes, and the daily ingestion will be less than 100 MB. I have the license and the Enterprise Security package from my previous lab setup. I need some suggestions on what vCPU, storage, and RAM I should proceed with. Thanks in advance
| rex "target: Temp\((?<target>\d+)\), state: Temp\((?<state>\d+)\), cavity: (?<cavity>\d+)"  
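Putting that extraction together with a chart (a sketch; the index, sourcetype, span, and aggregation are placeholders to adapt):

```
index=yourIndex sourcetype=yourSourcetype
| rex field=message "target: Temp\((?<target>\d+)\), state: Temp\((?<state>\d+)\), cavity: (?<cavity>\d+)"
| timechart span=1m avg(target) as target avg(state) as state avg(cavity) as cavity
```

The rex is anchored on the message field since the values live inside the JSON message string; avg() is just one way to collapse multiple events per time bucket, and each resulting series can be charted on its own panel.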
I would like to extract the Message, Timestamp, and serial fields. Then I would like to plot target: Temp(315600), state: Temp(315600), cavity: 178900, each on an individual plot based on the time series. I take it I will have to use a rex command to extract those values from the message field. How would I go about this?

{"bootcount":10,"device_id":"71ff6686fa5347828e3668e59249d0be","environment":"prod_walker", "event_source":"appliance","event_type":"GENERIC","location": {"city":"","country":"XXX","latitude":XXX,"longitude":XXX,"state":""}, "log_level":"info","message":"hardware_controller: TestState { target: Temp(315600), state: Temp(315600), cavity: 178900, fuel: None, shutdown: None, errors: test() }", "model_number":"XXXX","sequence":1411,"serial":"XXXX","software_version":"2.2.2.7641","ticks":158236,"timestamp":1717972790}